The alkali metals consist of the chemical elements lithium (Li), sodium (Na), potassium (K), rubidium (Rb), caesium (Cs), and francium (Fr). Together with hydrogen they constitute group 1, which lies in the s-block of the periodic table. All alkali metals have their outermost electron in an s-orbital: this shared electron configuration results in their having very similar characteristic properties. Indeed, the alkali metals provide the best example of group trends in properties in the periodic table, with elements exhibiting well-characterised homologous behaviour. This family of elements is also known as the lithium family after its leading element.
The alkali metals are all shiny, soft, highly reactive metals at standard temperature and pressure and readily lose their outermost electron to form cations with charge +1. They can all be cut easily with a knife due to their softness, exposing a shiny surface that tarnishes rapidly in air due to oxidation by atmospheric moisture and oxygen (and in the case of lithium, nitrogen). Because of their high reactivity, they must be stored under oil to prevent reaction with air, and are found naturally only in salts and never as the free elements. Caesium, the fifth alkali metal, is the most reactive of all the metals. All the alkali metals react with water, with the heavier alkali metals reacting more vigorously than the lighter ones.
All of the discovered alkali metals occur in nature as their compounds: in order of abundance, sodium is the most common, followed by potassium, lithium, rubidium, and caesium; francium is very rare due to its extremely high radioactivity and occurs only in minute traces in nature as an intermediate step in some obscure side branches of the natural decay chains. Experiments have been conducted to attempt the synthesis of element 119, which is likely to be the next member of the group, but none have been successful. However, ununennium may not be an alkali metal due to relativistic effects, which are predicted to have a large influence on the chemical properties of superheavy elements; even if it does turn out to be an alkali metal, it is predicted to have some differences in physical and chemical properties from its lighter homologues.
Most alkali metals have many different applications. One of the best-known applications of the pure elements is the use of rubidium and caesium in atomic clocks, with caesium atomic clocks forming the basis of the definition of the second. A common application of the compounds of sodium is the sodium-vapour lamp, which emits light very efficiently. Table salt, or sodium chloride, has been used since antiquity. Lithium finds use as a psychiatric medication and as an anode in lithium batteries. Sodium, potassium, and possibly lithium are essential elements, having major biological roles as electrolytes, and although the other alkali metals are not essential, they also have various effects on the body, both beneficial and harmful.
History
Sodium compounds have been known since ancient times; salt (sodium chloride) has been an important commodity in human activities. While potash has been used since ancient times, it was not understood for most of its history to be a fundamentally different substance from sodium mineral salts. Georg Ernst Stahl obtained experimental evidence which led him to suggest the fundamental difference of sodium and potassium salts in 1702, and Henri-Louis Duhamel du Monceau was able to prove this difference in 1736. The exact chemical composition of potassium and sodium compounds, and their status as chemical elements, were not known then, and thus Antoine Lavoisier did not include either alkali in his list of chemical elements in 1789.
Pure potassium was first isolated in 1807 in England by Humphry Davy, who derived it from caustic potash (KOH, potassium hydroxide) by electrolysis of the molten salt with the newly invented voltaic pile. Previous attempts at electrolysis of the aqueous salt were unsuccessful due to potassium's extreme reactivity. Potassium was the first metal to be isolated by electrolysis. Later that same year, Davy reported the extraction of sodium from the similar substance caustic soda (NaOH, lye) by a similar technique, demonstrating the elements, and thus the salts, to be different.
Petalite (LiAlSi4O10) was discovered in 1800 by the Brazilian chemist José Bonifácio de Andrada in a mine on the island of Utö, Sweden. However, it was not until 1817 that Johan August Arfwedson, then working in the laboratory of the chemist Jöns Jacob Berzelius, detected the presence of a new element while analysing petalite ore. This new element was noted by him to form compounds similar to those of sodium and potassium, though its carbonate and hydroxide were less soluble in water and more alkaline than those of the other alkali metals. Berzelius gave the unknown material the name lithion/lithina, from the Greek word λίθος (transliterated as lithos, meaning "stone"), to reflect its discovery in a solid mineral, as opposed to potassium, which had been discovered in plant ashes, and sodium, which was known partly for its high abundance in animal blood. He named the metal inside the material lithium. Lithium, sodium, and potassium were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner in 1829 as having similar properties.
Rubidium and caesium were the first elements to be discovered using the spectroscope, invented in 1859 by Robert Bunsen and Gustav Kirchhoff. The next year, they discovered caesium in the mineral water from Bad Dürkheim, Germany. They discovered rubidium the following year in Heidelberg, Germany, finding it in the mineral lepidolite. The names of rubidium and caesium come from the most prominent lines in their emission spectra: a bright red line for rubidium (from the Latin word rubidus, meaning dark red or bright red), and a sky-blue line for caesium (derived from the Latin word caesius, meaning sky-blue).
Around 1865 John Newlands produced a series of papers in which he listed the elements in order of increasing atomic weight and noted that similar physical and chemical properties recurred at intervals of eight; he likened such periodicity to the octaves of music, where notes an octave apart have similar musical functions. His version put all the alkali metals then known (lithium to caesium), as well as copper, silver, and thallium (which show the +1 oxidation state characteristic of the alkali metals), together into a group. His table placed hydrogen with the halogens.
In 1869, Dmitri Mendeleev proposed his periodic table, placing lithium at the top of a group with sodium, potassium, rubidium, caesium, and thallium. Two years later, Mendeleev revised his table, placing hydrogen in group 1 above lithium, and also moving thallium to the boron group. In this 1871 version, copper, silver, and gold were placed twice, once as part of group IB, and once as part of a "group VIII" encompassing today's groups 8 to 11. After the introduction of the 18-column table, the group IB elements were moved to their current position in the d-block, while alkali metals were left in group IA. The group's name was changed to group 1 in 1988. The trivial name "alkali metals" comes from the fact that the hydroxides of the group 1 elements are all strong alkalis when dissolved in water.
There were at least four erroneous and incomplete discoveries before Marguerite Perey of the Curie Institute in Paris, France discovered francium in 1939 by purifying a sample of actinium-227, which had been reported to have a decay energy of 220 keV. However, Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one that was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, caused by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure that she later revised to 1%.
The next element below francium (eka-francium) in the periodic table would be ununennium (Uue), element 119. The synthesis of ununennium was first attempted in 1985 by bombarding a target of einsteinium-254 with calcium-48 ions at the superHILAC accelerator at the Lawrence Berkeley National Laboratory in Berkeley, California. No atoms were identified, leading to a limiting cross section of 300 nb.
254Es + 48Ca → 302Uue* → no atoms
It is highly unlikely that this reaction will be able to create any atoms of ununennium in the near future. Einsteinium-254 is favoured for the production of ultraheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms, but making enough of it for a target large enough to raise the sensitivity of the experiment to the required level is extremely difficult: einsteinium has not been found in nature and has only been produced in laboratories, in quantities smaller than those needed for effective synthesis of superheavy elements. However, given that ununennium is only the first period 8 element on the extended periodic table, it may well be discovered in the near future through other reactions, and indeed an attempt to synthesise it is currently ongoing in Japan. No period 8 element has been discovered yet, and it is also possible, due to drip instabilities, that only the lower period 8 elements, up to around element 128, are physically possible. No attempts at synthesis have been made for any heavier alkali metals: due to their extremely high atomic number, they would require new, more powerful methods and technology to make.
Occurrence
In the Solar System
The Oddo–Harkins rule holds that elements with even atomic numbers are more common than those with odd atomic numbers, with the exception of hydrogen. This rule argues that elements with odd atomic numbers have one unpaired proton and are more likely to capture another, thus increasing their atomic number. In elements with even atomic numbers, protons are paired, with each member of the pair offsetting the spin of the other, enhancing stability. All the alkali metals have odd atomic numbers and they are not as common as the elements with even atomic numbers adjacent to them (the noble gases and the alkaline earth metals) in the Solar System. The heavier alkali metals are also less abundant than the lighter ones as the alkali metals from rubidium onward can only be synthesised in supernovae and not in stellar nucleosynthesis. Lithium is also much less abundant than sodium and potassium as it is poorly synthesised in both Big Bang nucleosynthesis and in stars: the Big Bang could only produce trace quantities of lithium, beryllium and boron due to the absence of a stable nucleus with 5 or 8 nucleons, and stellar nucleosynthesis could only pass this bottleneck by the triple-alpha process, fusing three helium nuclei to form carbon, and skipping over those three elements.
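In outline, the triple-alpha process bypasses the missing mass numbers 5 and 8 by fusing helium nuclei to carbon via a short-lived beryllium-8 intermediate:
4He + 4He ⇌ 8Be
8Be + 4He → 12C + γ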
On Earth
The Earth formed from the same cloud of matter that formed the Sun, but the planets acquired different compositions during the formation and evolution of the solar system. In turn, the natural history of the Earth caused parts of this planet to have differing concentrations of the elements. The mass of the Earth is approximately 5.98 × 10^24 kg. It is composed mostly of iron (32.1%), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to planetary differentiation, the core region is believed to be primarily composed of iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements.
The alkali metals, due to their high reactivity, do not occur naturally in pure form in nature. They are lithophiles and therefore remain close to the Earth's surface because they combine readily with oxygen and so associate strongly with silica, forming relatively low-density minerals that do not sink down into the Earth's core. Potassium, rubidium and caesium are also incompatible elements due to their large ionic radii.
Sodium and potassium are very abundant on Earth, both being among the ten most common elements in Earth's crust; sodium makes up approximately 2.6% of the Earth's crust measured by weight, making it the sixth most abundant element overall and the most abundant alkali metal. Potassium makes up approximately 1.5% of the Earth's crust and is the seventh most abundant element. Sodium is found in many different minerals, of which the most common is ordinary salt (sodium chloride), which occurs in vast quantities dissolved in seawater. Other solid deposits include halite, amphibole, cryolite, nitratine, and zeolite. Many of these solid deposits occur as a result of ancient seas evaporating, which still occurs now in places such as Utah's Great Salt Lake and the Dead Sea. Despite their near-equal abundance in Earth's crust, sodium is far more common than potassium in the ocean, both because potassium's larger size makes its salts less soluble and because potassium is bound by silicates in soil; what potassium does leach out is absorbed far more readily by plant life than sodium.
Despite its chemical similarity, lithium typically does not occur together with sodium or potassium due to its smaller size. Due to its relatively low reactivity, it can be found in seawater in large amounts; it is estimated that lithium concentration in seawater is approximately 0.14 to 0.25 parts per million (ppm) or 25 micromolar. Its diagonal relationship with magnesium often allows it to replace magnesium in ferromagnesian minerals, where its crustal concentration is about 18 ppm, comparable to that of gallium and niobium. Commercially, the most important lithium mineral is spodumene, which occurs in large deposits worldwide.
Rubidium is approximately as abundant as zinc and more abundant than copper. It occurs naturally in the minerals leucite, pollucite, carnallite, zinnwaldite, and lepidolite, although none of these contain only rubidium and no other alkali metals. Caesium is more abundant than some commonly known elements, such as antimony, cadmium, tin, and tungsten, but is much less abundant than rubidium.
Francium-223, the only naturally occurring isotope of francium, is the product of the alpha decay of actinium-227 and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 10^18 uranium atoms. It has been calculated that there are at most 30 grams of francium in the Earth's crust at any time, due to its extremely short half-life of 22 minutes.
Properties
Physical and chemical
The physical and chemical properties of the alkali metals can be readily explained by their having an ns1 valence electron configuration, which results in weak metallic bonding. Hence, all the alkali metals are soft and have low densities, melting and boiling points, as well as heats of sublimation, vaporisation, and dissociation. They all crystallise in the body-centered cubic crystal structure, and have distinctive flame colours because their outer s electron is very easily excited. Indeed, these flame test colours are the most common way of identifying them since all their salts with common ions are soluble. The ns1 configuration also results in the alkali metals having very large atomic and ionic radii, as well as very high thermal and electrical conductivity. Their chemistry is dominated by the loss of their lone valence electron in the outermost s-orbital to form the +1 oxidation state, due to the ease of ionising this electron and the very high second ionisation energy. Most of the chemistry has been observed only for the first five members of the group. The chemistry of francium is not well established due to its extreme radioactivity; thus, the presentation of its properties here is limited. What little is known about francium shows that it is very close in behaviour to caesium, as expected. The physical properties of francium are even sketchier because the bulk element has never been observed; hence any data that may be found in the literature are certainly speculative extrapolations.
The alkali metals are more similar to each other than the elements in any other group are to each other. Indeed, the similarity is so great that it is quite difficult to separate potassium, rubidium, and caesium, due to their similar ionic radii; lithium and sodium are more distinct. For instance, when moving down the table, all known alkali metals show increasing atomic radius, decreasing electronegativity, increasing reactivity, and decreasing melting and boiling points as well as heats of fusion and vaporisation. In general, their densities increase when moving down the table, with the exception that potassium is less dense than sodium. One of the very few properties of the alkali metals that does not display a very smooth trend is their reduction potentials: lithium's value is anomalous, being more negative than the others. This is because the Li+ ion has a very high hydration energy in the gas phase: though the lithium ion disrupts the structure of water significantly, causing a higher change in entropy, this high hydration energy is enough to make the reduction potentials indicate it as being the most electropositive alkali metal, despite the difficulty of ionising it in the gas phase.
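As a rough enthalpy-only sketch of this anomaly (using approximate textbook values and ignoring the entropy effects mentioned above), the oxidation half-reaction M(s) → M+(aq) + e− can be broken into atomisation, ionisation, and hydration steps:
Li: ≈ +159 + 520 − 520 ≈ +159 kJ/mol
Cs: ≈ +76 + 376 − 270 ≈ +180 kJ/mol
Lithium's exceptionally exothermic hydration more than offsets its higher ionisation energy, which is why the reduction potentials indicate it as the most electropositive member of the group.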
The stable alkali metals are all silver-coloured metals except for caesium, which has a pale golden tint: it is one of only three metals that are clearly coloured (the other two being copper and gold). Additionally, the heavy alkaline earth metals calcium, strontium, and barium, as well as the divalent lanthanides europium and ytterbium, are pale yellow, though the colour is much less prominent than it is for caesium. Their lustre tarnishes rapidly in air due to oxidation.
All the alkali metals are highly reactive and are never found in elemental forms in nature. Because of this, they are usually stored in mineral oil or kerosene (paraffin oil). They react aggressively with the halogens to form the alkali metal halides, which are white ionic crystalline compounds that are all soluble in water except lithium fluoride (LiF). The alkali metals also react with water to form strongly alkaline hydroxides and thus should be handled with great care. The heavier alkali metals react more vigorously than the lighter ones; for example, when dropped into water, caesium produces a larger explosion than potassium if the same number of moles of each metal is used. The alkali metals have the lowest first ionisation energies in their respective periods of the periodic table because of their low effective nuclear charge and the ability to attain a noble gas configuration by losing just one electron. Not only do the alkali metals react with water, but also with proton donors like alcohols and phenols, gaseous ammonia, and alkynes, the last demonstrating the phenomenal degree of their reactivity. Their great power as reducing agents makes them very useful in liberating other metals from their oxides or halides.
The second ionisation energy of all of the alkali metals is very high as it is in a full shell that is also closer to the nucleus; thus, they almost always lose a single electron, forming cations. The alkalides are an exception: they are unstable compounds which contain alkali metals in a −1 oxidation state, which is very unusual as before the discovery of the alkalides, the alkali metals were not expected to be able to form anions and were thought to be able to appear in salts only as cations. The alkalide anions have filled s-subshells, which gives them enough stability to exist. All the stable alkali metals except lithium are known to be able to form alkalides, and the alkalides have much theoretical interest due to their unusual stoichiometry and low ionisation potentials. Alkalides are chemically similar to the electrides, which are salts with trapped electrons acting as anions. A particularly striking example of an alkalide is "inverse sodium hydride", H+Na− (both ions being complexed), as opposed to the usual sodium hydride, Na+H−: it is unstable in isolation, due to its high energy resulting from the displacement of two electrons from hydrogen to sodium, although several derivatives are predicted to be metastable or stable.
In aqueous solution, the alkali metal ions form aqua ions of the formula [M(H2O)n]+, where n is the solvation number. Their coordination numbers and shapes agree well with those expected from their ionic radii. In aqueous solution the water molecules directly attached to the metal ion are said to belong to the first coordination sphere, also known as the first, or primary, solvation shell. The bond between a water molecule and the metal ion is a dative covalent bond, with the oxygen atom donating both electrons to the bond. Each coordinated water molecule may be attached by hydrogen bonds to other water molecules. The latter are said to reside in the second coordination sphere. However, for the alkali metal cations, the second coordination sphere is not well-defined as the +1 charge on the cation is not high enough to polarise the water molecules in the primary solvation shell enough for them to form strong hydrogen bonds with those in the second coordination sphere, producing a more stable entity. The solvation number for Li+ has been experimentally determined to be 4, forming the tetrahedral [Li(H2O)4]+: while solvation numbers of 3 to 6 have been found for lithium aqua ions, solvation numbers less than 4 may be the result of the formation of contact ion pairs, and the higher solvation numbers may be interpreted in terms of water molecules that approach [Li(H2O)4]+ through a face of the tetrahedron, though molecular dynamic simulations may indicate the existence of an octahedral hexaaqua ion. There are also probably six water molecules in the primary solvation sphere of the sodium ion, forming the octahedral [Na(H2O)6]+ ion. While it was previously thought that the heavier alkali metals also formed octahedral hexaaqua ions, it has since been found that potassium and rubidium probably form the [K(H2O)8]+ and [Rb(H2O)8]+ ions, which have the square antiprismatic structure, and that caesium forms the 12-coordinate [Cs(H2O)12]+ ion.
Lithium
The chemistry of lithium shows several differences from that of the rest of the group as the small Li+ cation polarises anions and gives its compounds a more covalent character. Lithium and magnesium have a diagonal relationship due to their similar atomic radii, so that they show some similarities. For example, lithium forms a stable nitride, a property common among all the alkaline earth metals (magnesium's group) but unique among the alkali metals. In addition, among their respective groups, only lithium and magnesium form organometallic compounds with significant covalent character (e.g. LiMe and MgMe2).
Lithium fluoride is the only alkali metal halide that is poorly soluble in water, and lithium hydroxide is the only alkali metal hydroxide that is not deliquescent. Conversely, lithium perchlorate and other lithium salts with large anions that cannot be polarised are much more stable than the analogous compounds of the other alkali metals, probably because Li+ has a high solvation energy. This effect also means that most simple lithium salts are commonly encountered in hydrated form, because the anhydrous forms are extremely hygroscopic: this allows salts like lithium chloride and lithium bromide to be used in dehumidifiers and air-conditioners.
Francium
Francium is also predicted to show some differences due to its high atomic weight, causing its electrons to travel at considerable fractions of the speed of light and thus making relativistic effects more prominent. In contrast to the trend of decreasing electronegativities and ionisation energies of the alkali metals, francium's electronegativity and ionisation energy are predicted to be higher than caesium's due to the relativistic stabilisation of the 7s electrons; also, its atomic radius is expected to be abnormally low. Thus, contrary to expectation, caesium is the most reactive of the alkali metals, not francium. All known physical properties of francium also deviate from the clear trends going from lithium to caesium, such as the first ionisation energy, electron affinity, and anion polarisability, though due to the paucity of known data about francium many sources give extrapolated values, ignoring that relativistic effects make the trend from lithium to caesium become inapplicable at francium. Some of the few properties of francium that have been predicted taking relativity into account are the electron affinity (47.2 kJ/mol) and the enthalpy of dissociation of the Fr2 molecule (42.1 kJ/mol). The CsFr molecule is polarised as Cs+Fr−, showing that the 7s subshell of francium is much more strongly affected by relativistic effects than the 6s subshell of caesium. Additionally, francium superoxide (FrO2) is expected to have significant covalent character, unlike the other alkali metal superoxides, because of bonding contributions from the 6p electrons of francium.
Nuclear
All the alkali metals have odd atomic numbers; hence, their isotopes must be either odd–odd (both proton and neutron number are odd) or odd–even (proton number is odd, but neutron number is even). Odd–odd nuclei have even mass numbers, whereas odd–even nuclei have odd mass numbers. Odd–odd primordial nuclides are rare because most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects.
Due to the great rarity of odd–odd nuclei, almost all the primordial isotopes of the alkali metals are odd–even (the exceptions being the light stable isotope lithium-6 and the long-lived radioisotope potassium-40). For a given odd mass number, there can be only a single beta-stable nuclide, since there is not a difference in binding energy between even–odd and odd–even comparable to that between even–even and odd–odd, leaving other nuclides of the same mass number (isobars) free to beta decay toward the lowest-mass nuclide. An effect of the instability of an odd number of either type of nucleons is that odd-numbered elements, such as the alkali metals, tend to have fewer stable isotopes than even-numbered elements. Of the 26 monoisotopic elements that have only a single stable isotope, all but one have an odd atomic number and all but one also have an even number of neutrons. Beryllium is the single exception to both rules, due to its low atomic number.
All of the alkali metals except lithium and caesium have at least one naturally occurring radioisotope: sodium-22 and sodium-24 are trace radioisotopes produced cosmogenically, potassium-40 and rubidium-87 have very long half-lives and thus occur naturally, and all isotopes of francium are radioactive. Caesium was also thought to be radioactive in the early 20th century, although it has no naturally occurring radioisotopes. (Francium had not been discovered yet at that time.) The natural long-lived radioisotope of potassium, potassium-40, makes up about 0.012% of natural potassium, and thus natural potassium is weakly radioactive. This natural radioactivity became a basis for a mistaken claim of the discovery for element 87 (the next alkali metal after caesium) in 1925. Natural rubidium is similarly slightly radioactive, with 27.83% being the long-lived radioisotope rubidium-87.
Caesium-137, with a half-life of 30.17 years, is one of the two principal medium-lived fission products, along with strontium-90, which are responsible for most of the radioactivity of spent nuclear fuel after several years of cooling, up to several hundred years after use. It constitutes most of the radioactivity still left from the Chernobyl accident. Caesium-137 undergoes high-energy beta decay and eventually becomes stable barium-137. It is a strong emitter of gamma radiation. Caesium-137 has a very low rate of neutron capture and cannot be feasibly disposed of in this way, but must be allowed to decay. Caesium-137 has been used as a tracer in hydrologic studies, analogous to the use of tritium. Small amounts of caesium-134 and caesium-137 were released into the environment during nearly all nuclear weapon tests and some nuclear accidents, most notably the Goiânia accident and the Chernobyl disaster. As of 2005, caesium-137 is the principal source of radiation in the zone of alienation around the Chernobyl nuclear power plant. Its chemical properties as one of the alkali metals make it one of the most problematic of the short-to-medium-lifetime fission products because it easily moves and spreads in nature due to the high water solubility of its salts, and is taken up by the body, which mistakes it for its essential congeners sodium and potassium.
Periodic trends
The alkali metals are more similar to each other than the elements in any other group are to each other. For instance, when moving down the table, all known alkali metals show increasing atomic radius, decreasing electronegativity, increasing reactivity, and decreasing melting and boiling points as well as heats of fusion and vaporisation. In general, their densities increase when moving down the table, with the exception that potassium is less dense than sodium.
Atomic and ionic radii
The atomic radii of the alkali metals increase going down the group. Because of the shielding effect, when an atom has more than one electron shell, each electron feels electric repulsion from the other electrons as well as electric attraction from the nucleus. In the alkali metals, the outermost electron only feels a net charge of +1, as some of the nuclear charge (which is equal to the atomic number) is cancelled by the inner electrons; the number of inner electrons of an alkali metal is always one less than the nuclear charge. Therefore, the only factor which affects the atomic radius of the alkali metals is the number of electron shells. Since this number increases down the group, the atomic radius must also increase down the group.
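In symbols, this shielding argument amounts to an effective nuclear charge Zeff ≈ Z − S, where S is the number of inner (shielding) electrons. For sodium, Z = 11 and S = 10, so the 3s valence electron feels a net charge of roughly +1, and the same holds for every alkali metal.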
The ionic radii of the alkali metals are much smaller than their atomic radii. This is because the outermost electron of the alkali metals is in a different electron shell than the inner electrons, and thus when it is removed the resulting atom has one fewer electron shell and is smaller. Additionally, the effective nuclear charge has increased, and thus the electrons are attracted more strongly towards the nucleus and the ionic radius decreases.
First ionisation energy
The first ionisation energy of an element or molecule is the energy required to move the most loosely held electron from one mole of gaseous atoms of the element or molecules to form one mole of gaseous ions with electric charge +1. The factors affecting the first ionisation energy are the nuclear charge, the amount of shielding by the inner electrons and the distance from the most loosely held electron from the nucleus, which is always an outer electron in main group elements. The first two factors change the effective nuclear charge the most loosely held electron feels. Since the outermost electron of alkali metals always feels the same effective nuclear charge (+1), the only factor which affects the first ionisation energy is the distance from the outermost electron to the nucleus. Since this distance increases down the group, the outermost electron feels less attraction from the nucleus and thus the first ionisation energy decreases. This trend is broken in francium due to the relativistic stabilisation and contraction of the 7s orbital, bringing francium's valence electron closer to the nucleus than would be expected from non-relativistic calculations. This makes francium's outermost electron feel more attraction from the nucleus, increasing its first ionisation energy slightly beyond that of caesium.
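The process concerned is M(g) → M+(g) + e−. Approximate first ionisation energies (in kJ/mol) illustrate both the downward trend and the francium reversal: Li 520, Na 496, K 419, Rb 403, Cs 376, Fr ≈ 393.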
The second ionisation energy of the alkali metals is much higher than the first as the second-most loosely held electron is part of a fully filled electron shell and is thus difficult to remove.
Reactivity
The reactivities of the alkali metals increase going down the group. This is the result of a combination of two factors: the first ionisation energies and atomisation energies of the alkali metals. Because the first ionisation energy of the alkali metals decreases down the group, it is easier for the outermost electron to be removed from the atom and participate in chemical reactions, thus increasing reactivity down the group. The atomisation energy measures the strength of the metallic bond of an element, which falls down the group as the atoms increase in radius and thus the metallic bond must increase in length, making the delocalised electrons further away from the attraction of the nuclei of the heavier alkali metals. Adding the atomisation and first ionisation energies gives a quantity closely related to (but not equal to) the activation energy of the reaction of an alkali metal with another substance. This quantity decreases going down the group, and so does the activation energy; thus, chemical reactions can occur faster and the reactivity increases down the group.
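As a rough numerical illustration (approximate values in kJ/mol), adding the atomisation enthalpy to the first ionisation energy gives Li 159 + 520 ≈ 679, Na 107 + 496 ≈ 603, K 89 + 419 ≈ 508, Rb 81 + 403 ≈ 484, and Cs 76 + 376 ≈ 452; the steady fall down the group parallels the increase in reactivity.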
Electronegativity
Electronegativity is a chemical property that describes the tendency of an atom or a functional group to attract electrons (or electron density) towards itself. If the bond between sodium and chlorine in sodium chloride were covalent, the pair of shared electrons would be attracted to the chlorine because the effective nuclear charge on the outer electrons is +7 in chlorine but is only +1 in sodium. The electron pair is attracted so close to the chlorine atom that it is practically transferred to the chlorine atom (an ionic bond). However, if the sodium atom were replaced by a lithium atom, the electrons would not be attracted as close to the chlorine atom as before because the lithium atom is smaller, making the electron pair more strongly attracted to the closer effective nuclear charge from lithium. Hence, the larger alkali metal atoms (further down the group) will be less electronegative as the bonding pair is less strongly attracted towards them. As mentioned previously, francium is expected to be an exception.
Because of the higher electronegativity of lithium, some of its compounds have a more covalent character. For example, lithium iodide (LiI) will dissolve in organic solvents, a property of most covalent compounds. Lithium fluoride (LiF) is the only alkali halide that is not soluble in water, and lithium hydroxide (LiOH) is the only alkali metal hydroxide that is not deliquescent.
Melting and boiling points
The melting point of a substance is the point where it changes state from solid to liquid while the boiling point of a substance (in liquid state) is the point where the vapour pressure of the liquid equals the environmental pressure surrounding the liquid and all the liquid changes state to gas. As a metal is heated to its melting point, the metallic bonds keeping the atoms in place weaken so that the atoms can move around, and the metallic bonds eventually break completely at the metal's boiling point. Therefore, the falling melting and boiling points of the alkali metals indicate that the strength of the metallic bonds of the alkali metals decreases down the group. This is because metal atoms are held together by the electromagnetic attraction from the positive ions to the delocalised electrons. As the atoms increase in size going down the group (because their atomic radius increases), the nuclei of the ions move further away from the delocalised electrons and hence the metallic bond becomes weaker so that the metal can more easily melt and boil, thus lowering the melting and boiling points. The increased nuclear charge is not a relevant factor due to the shielding effect.
Density
The alkali metals all have the same crystal structure (body-centred cubic) and thus the only relevant factors are the number of atoms that can fit into a certain volume and the mass of one of the atoms, since density is defined as mass per unit volume. The first factor depends on the volume of the atom and thus the atomic radius, which increases going down the group; thus, the volume of an alkali metal atom increases going down the group. The mass of an alkali metal atom also increases going down the group. Thus, the trend for the densities of the alkali metals depends on their atomic weights and atomic radii; if figures for these two factors are known, the ratios between the densities of the alkali metals can then be calculated. The resultant trend is that the densities of the alkali metals increase down the table, with an exception at potassium. Due to having the lowest atomic weight and the largest atomic radius of all the elements in their periods, the alkali metals are the least dense metals in the periodic table. Lithium, sodium, and potassium are the only three metals in the periodic table that are less dense than water: in fact, lithium is the least dense known solid at room temperature.
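As a worked example (taking a representative metallic radius of about 152 pm for lithium), a body-centred cubic cell contains two atoms and has edge length a = 4r/√3 ≈ 351 pm, so
ρ = 2M / (NA a³) ≈ (2 × 6.94 g/mol) / (6.022 × 10^23 mol−1 × (3.51 × 10^−8 cm)³) ≈ 0.53 g/cm³,
close to lithium's observed density of 0.534 g/cm³.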
Compounds
The alkali metals form complete series of compounds with all usually encountered anions, which well illustrate group trends. These compounds can be described as involving the alkali metals losing electrons to acceptor species and forming monopositive ions. This description is most accurate for alkali halides and becomes less and less accurate as cationic and anionic charge increase, and as the anion becomes larger and more polarisable. For instance, ionic bonding gives way to metallic bonding along the series NaCl, Na2O, Na2S, Na3P, Na3As, Na3Sb, Na3Bi, Na.
Hydroxides
All the alkali metals react vigorously or explosively with cold water, producing an aqueous solution of a strongly basic alkali metal hydroxide and releasing hydrogen gas. This reaction becomes more vigorous going down the group: lithium reacts steadily with effervescence, but sodium and potassium can ignite, and rubidium and caesium sink in water and generate hydrogen gas so rapidly that shock waves form in the water that may shatter glass containers. When an alkali metal is dropped into water, it produces an explosion in two separate stages. First, the metal reacts with the water, breaking the hydrogen bonds in the water and producing hydrogen gas; this takes place faster for the more reactive heavier alkali metals. Second, the heat generated by the first part of the reaction often ignites the hydrogen gas, causing it to burn explosively into the surrounding air. It is this secondary hydrogen explosion, not the initial reaction of the metal with water (which takes place mostly under water), that produces the visible flame above the bowl of water, lake, or other body of water. The alkali metal hydroxides are the most basic known hydroxides.
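The overall reaction, with M standing for any alkali metal, is:
2 M + 2 H2O → 2 MOH + H2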
Recent research has suggested that the explosive behaviour of alkali metals in water is driven by a Coulomb explosion rather than solely by the rapid generation of hydrogen itself. All alkali metals melt as part of the reaction with water. Water molecules ionise the bare metallic surface of the liquid metal, leaving a positively charged metal surface and negatively charged water ions. The attraction between the charged metal and water ions rapidly increases the surface area, causing an exponential increase in ionisation. When the repulsive forces within the liquid metal surface exceed the forces of its surface tension, it vigorously explodes.
The hydroxides themselves are the most basic hydroxides known, reacting with acids to give salts and with alcohols to give oligomeric alkoxides. They easily react with carbon dioxide to form carbonates or bicarbonates, or with hydrogen sulfide to form sulfides or bisulfides, and may be used to separate thiols from petroleum. They react with amphoteric oxides: for example, the oxides of aluminium, zinc, tin, and lead react with the alkali metal hydroxides to give aluminates, zincates, stannates, and plumbates. Silicon dioxide is acidic, and thus the alkali metal hydroxides can also attack silicate glass.
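For example, the reactions with carbon dioxide can be written (M = any alkali metal) as:
2 MOH + CO2 → M2CO3 + H2O
MOH + CO2 → MHCO3 (with excess carbon dioxide)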
Intermetallic compounds
The alkali metals form many intermetallic compounds with each other and the elements from groups 2 to 13 in the periodic table of varying stoichiometries, such as the sodium amalgams with mercury, including Na5Hg8 and Na3Hg. Some of these have ionic characteristics: taking the alloys with gold, the most electronegative of metals, as an example, NaAu and KAu are metallic, but RbAu and CsAu are semiconductors. NaK is an alloy of sodium and potassium that is very useful because it is liquid at room temperature, although precautions must be taken due to its extreme reactivity towards water and air. The eutectic mixture melts at −12.6 °C. An alloy of 41% caesium, 47% sodium, and 12% potassium has the lowest known melting point of any metal or alloy, −78 °C.
Compounds with the group 13 elements
The intermetallic compounds of the alkali metals with the heavier group 13 elements (aluminium, gallium, indium, and thallium), such as NaTl, are poor conductors or semiconductors, unlike the normal alloys with the preceding elements, implying that the alkali metal involved has lost an electron to the Zintl anions involved. Nevertheless, while the elements in group 14 and beyond tend to form discrete anionic clusters, group 13 elements tend to form polymeric ions with the alkali metal cations located within the giant ionic lattice. For example, NaTl consists of a polymeric anion (—Tl−—)n with a covalent diamond cubic structure, with Na+ ions located within the anionic lattice. The larger alkali metals cannot fit similarly into an anionic lattice and tend to force the heavier group 13 elements to form anionic clusters.
Boron is a special case, being the only nonmetal in group 13. The alkali metal borides tend to be boron-rich, involving appreciable boron–boron bonding involving deltahedral structures, and are thermally unstable due to the alkali metals having a very high vapour pressure at elevated temperatures. This makes direct synthesis problematic because the alkali metals do not react with boron below 700 °C, and thus this must be accomplished in sealed containers with the alkali metal in excess. Furthermore, exceptionally in this group, reactivity with boron decreases down the group: lithium reacts completely at 700 °C, but sodium at 900 °C and potassium not until 1200 °C, and the reaction is instantaneous for lithium but takes hours for potassium. Rubidium and caesium borides have not even been characterised. Various phases are known, such as LiB10, NaB6, NaB15, and KB6. Under high pressure the boron–boron bonding in the lithium borides changes from following Wade's rules to forming Zintl anions like the rest of group 13.
Compounds with the group 14 elements
Lithium and sodium react with carbon to form acetylides, Li2C2 and Na2C2, which can also be obtained by reaction of the metal with acetylene. Potassium, rubidium, and caesium react with graphite; their atoms are intercalated between the hexagonal graphite layers, forming graphite intercalation compounds of formulae MC60 (dark grey, almost black), MC48 (dark grey, almost black), MC36 (blue), MC24 (steel blue), and MC8 (bronze) (M = K, Rb, or Cs). These compounds are over 200 times more electrically conductive than pure graphite, suggesting that the valence electron of the alkali metal is transferred to the graphite layers (e.g. M+C8−). Upon heating of KC8, the elimination of potassium atoms results in the conversion in sequence to KC24, KC36, KC48 and finally KC60. KC8 is a very strong reducing agent and is pyrophoric and explodes on contact with water. While the larger alkali metals (K, Rb, and Cs) initially form MC8, the smaller ones initially form MC6, and indeed they require reaction of the metals with graphite at high temperatures around 500 °C to form. Apart from this, the alkali metals are such strong reducing agents that they can even reduce buckminsterfullerene to produce solid fullerides MnC60; sodium, potassium, rubidium, and caesium can form fullerides where n = 2, 3, 4, or 6, and rubidium and caesium additionally can achieve n = 1.
When the alkali metals react with the heavier elements in the carbon group (silicon, germanium, tin, and lead), ionic substances with cage-like structures are formed, such as the silicides M4Si4 (M = K, Rb, or Cs), which contain M+ and tetrahedral [Si4]4− ions. The chemistry of alkali metal germanides, involving the germanide ion Ge4− and other cluster (Zintl) ions such as [(Ge9)2]6−, is largely analogous to that of the corresponding silicides. Alkali metal stannides are mostly ionic, sometimes with the stannide ion (Sn4−), and sometimes with more complex Zintl ions such as [Sn9]4−, which appears in tetrapotassium nonastannide (K4Sn9). The monatomic plumbide ion (Pb4−) is unknown, and indeed its formation is predicted to be energetically unfavourable; alkali metal plumbides have complex Zintl ions, such as [Pb9]4−. These alkali metal germanides, stannides, and plumbides may be produced by reducing germanium, tin, and lead with sodium metal in liquid ammonia.
Nitrides and pnictides
Lithium, the lightest of the alkali metals, is the only alkali metal which reacts with nitrogen at standard conditions, and its nitride is the only stable alkali metal nitride. Nitrogen is an unreactive gas because breaking the strong triple bond in the dinitrogen molecule (N2) requires a lot of energy. The formation of an alkali metal nitride would consume the ionisation energy of the alkali metal (forming M+ ions), the energy required to break the triple bond in N2, and the energy required to form N3− ions, and all the energy released from the formation of an alkali metal nitride comes from the lattice energy of the alkali metal nitride. The lattice energy is maximised with small, highly charged ions; the alkali metals do not form highly charged ions, only forming ions with a charge of +1, so only lithium, the smallest alkali metal, can release enough lattice energy to make the reaction with nitrogen exothermic, forming lithium nitride. The reactions of the other alkali metals with nitrogen would not release enough lattice energy and would thus be endothermic, so they do not form nitrides at standard conditions. Sodium nitride (Na3N) and potassium nitride (K3N), while existing, are extremely unstable, being prone to decomposing back into their constituent elements, and cannot be produced by reacting the elements with each other at standard conditions. Steric hindrance forbids the existence of rubidium or caesium nitride. However, sodium and potassium form colourless azide salts involving the linear N3− anion; due to the large size of the alkali metal cations, they are thermally stable enough to be able to melt before decomposing.
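The exothermic formation of lithium nitride from the elements, the only such reaction that proceeds at standard conditions, is:
6 Li + N2 → 2 Li3N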
All the alkali metals react readily with phosphorus and arsenic to form phosphides and arsenides with the formula M3Pn (where M represents an alkali metal and Pn represents a pnictogen – phosphorus, arsenic, antimony, or bismuth). This is due to the greater size of the P3− and As3− ions, so that less lattice energy needs to be released for the salts to form. These are not the only phosphides and arsenides of the alkali metals: for example, potassium has nine different known phosphides, with formulae K3P, K4P3, K5P4, KP, K4P6, K3P7, K3P11, KP10.3, and KP15. While most metals form arsenides, only the alkali and alkaline earth metals form mostly ionic arsenides. The structure of Na3As is complex with unusually short Na–Na distances of 328–330 pm which are shorter than in sodium metal, and this indicates that even with these electropositive metals the bonding cannot be straightforwardly ionic. Other alkali metal arsenides not conforming to the formula M3As are known, such as LiAs, which has a metallic lustre and electrical conductivity indicating the presence of some metallic bonding. The antimonides are unstable and reactive as the Sb3− ion is a strong reducing agent; their reaction with acids forms the toxic and unstable gas stibine (SbH3). Indeed, they have some metallic properties, and the alkali metal antimonides of stoichiometry MSb involve antimony atoms bonded in a spiral Zintl structure. Bismuthides are not even wholly ionic; they are intermetallic compounds containing partially metallic and partially ionic bonds.
Oxides and chalcogenides
All the alkali metals react vigorously with oxygen at standard conditions. They form various types of oxides, such as simple oxides (containing the O2− ion), peroxides (containing the [O2]2− ion, where there is a single bond between the two oxygen atoms), superoxides (containing the [O2]− ion), and many others. Lithium burns in air to form lithium oxide, but sodium reacts with oxygen to form a mixture of sodium oxide and sodium peroxide. Potassium forms a mixture of potassium peroxide and potassium superoxide, while rubidium and caesium form the superoxide exclusively. Their reactivity increases going down the group: while lithium, sodium and potassium merely burn in air, rubidium and caesium are pyrophoric (spontaneously catch fire in air).
The smaller alkali metals tend to polarise the larger anions (the peroxide and superoxide) due to their small size. This attracts the electrons in the more complex anions towards one of their constituent oxygen atoms, forming an oxide ion and an oxygen atom. This causes lithium to form the oxide exclusively on reaction with oxygen at room temperature. This effect becomes drastically weaker for the larger sodium and potassium, allowing them to form the less stable peroxides. Rubidium and caesium, at the bottom of the group, are so large that even the least stable superoxides can form. Because the superoxide releases the most energy when formed, the superoxide is preferentially formed for the larger alkali metals where the more complex anions are not polarised. The oxides and peroxides for these alkali metals do exist, but do not form upon direct reaction of the metal with oxygen at standard conditions. In addition, the small size of the Li+ and O2− ions contributes to their forming a stable ionic lattice structure. Under controlled conditions, however, all the alkali metals, with the exception of francium, are known to form their oxides, peroxides, and superoxides. The alkali metal peroxides and superoxides are powerful oxidising agents. Sodium peroxide and potassium superoxide react with carbon dioxide to form the alkali metal carbonate and oxygen gas, which allows them to be used in submarine air purifiers; the presence of water vapour, naturally present in breath, makes the removal of carbon dioxide by potassium superoxide even more efficient. All the stable alkali metals except lithium can form red ozonides (MO3) through low-temperature reaction of the powdered anhydrous hydroxide with ozone: the ozonides may then be extracted using liquid ammonia. They slowly decompose at standard conditions to the superoxides and oxygen, and hydrolyse immediately to the hydroxides when in contact with water. Potassium, rubidium, and caesium also form sesquioxides M2O3, which may be better considered peroxide disuperoxides, [(M+)4([O2]2−)([O2]−)2].
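The air-purification chemistry mentioned above can be summarised by the following equations:
2 Na2O2 + 2 CO2 → 2 Na2CO3 + O2
4 KO2 + 2 CO2 → 2 K2CO3 + 3 O2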
Rubidium and caesium can form a great variety of suboxides with the metals in formal oxidation states below +1. Rubidium can form Rb6O and Rb9O2 (copper-coloured) upon oxidation in air, while caesium forms an immense variety of oxides, such as the ozonide CsO3 and several brightly coloured suboxides, such as Cs7O (bronze), Cs4O (red-violet), Cs11O3 (violet), Cs3O (dark green), CsO, Cs3O2, as well as Cs7O2. The last of these may be heated under vacuum to generate Cs2O.
The alkali metals can also react analogously with the heavier chalcogens (sulfur, selenium, tellurium, and polonium), and all the alkali metal chalcogenides are known (with the exception of francium's). Reaction with an excess of the chalcogen can similarly result in lower chalcogenides, with chalcogen ions containing chains of the chalcogen atoms in question. For example, sodium can react with sulfur to form the sulfide (Na2S) and various polysulfides with the formula Na2Sx (x from 2 to 6), containing the [Sx]2− ions. Due to the basicity of the Se2− and Te2− ions, the alkali metal selenides and tellurides are alkaline in solution; when reacted directly with selenium and tellurium, alkali metal polyselenides and polytellurides are formed along with the selenides and tellurides containing the Se2− and Te2− ions. They may be obtained directly from the elements in liquid ammonia or when air is not present, and are colourless, water-soluble compounds that air oxidises quickly back to selenium or tellurium. The alkali metal polonides are all ionic compounds containing the Po2− ion; they are very chemically stable and can be produced by direct reaction of the elements at around 300–400 °C.
Halides, hydrides, and pseudohalides
The alkali metals are among the most electropositive elements on the periodic table and thus tend to bond ionically to the most electronegative elements on the periodic table, the halogens (fluorine, chlorine, bromine, iodine, and astatine), forming salts known as the alkali metal halides. The reaction is very vigorous and can sometimes result in explosions. All twenty stable alkali metal halides are known; the unstable ones are not known, with the exception of sodium astatide, because of the great instability and rarity of astatine and francium. The most well-known of the twenty is certainly sodium chloride, otherwise known as common salt. All of the stable alkali metal halides have the formula MX where M is an alkali metal and X is a halogen. They are all white ionic crystalline solids that have high melting points. All the alkali metal halides are soluble in water except for lithium fluoride (LiF), which is insoluble in water due to its very high lattice enthalpy. The high lattice enthalpy of lithium fluoride is due to the small sizes of the Li+ and F− ions, causing the electrostatic interactions between them to be strong: a similar effect occurs for magnesium fluoride, consistent with the diagonal relationship between lithium and magnesium.
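The size effect can be illustrated with the approximate Kapustinskii equation, UL ≈ 1.2025 × 10^5 · ν|z+||z−|/(r+ + r−) · (1 − 34.5/(r+ + r−)) kJ/mol with radii in pm (a rough estimate using representative six-coordinate ionic radii, not experimental lattice enthalpies): for LiF (76 + 133 = 209 pm) it gives about 960 kJ/mol, whereas for CsF (167 + 133 = 300 pm) it gives only about 710 kJ/mol.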
The alkali metals also react similarly with hydrogen to form ionic alkali metal hydrides, where the hydride anion acts as a pseudohalide: these are often used as reducing agents, producing hydrides, complex metal hydrides, or hydrogen gas. Other pseudohalides are also known, notably the cyanides. These are isostructural to the respective halides except for lithium cyanide, indicating that the cyanide ions may rotate freely. Ternary alkali metal halide oxides, such as Na3ClO, K3BrO (yellow), Na4Br2O, Na4I2O, and K4Br2O, are also known. The polyhalides are rather unstable, although those of rubidium and caesium are greatly stabilised by the feeble polarising power of these extremely large cations.
Coordination complexes
Alkali metal cations do not usually form coordination complexes with simple Lewis bases due to their low charge of just +1 and their relatively large size; thus the Li+ ion forms the most complexes and the heavier alkali metal ions form fewer and fewer (though exceptions occur for weak complexes). Lithium in particular has a very rich coordination chemistry in which it exhibits coordination numbers from 1 to 12, although octahedral hexacoordination is its preferred mode. In aqueous solution, the alkali metal ions exist as octahedral hexahydrate complexes [M(H2O)6]+, with the exception of the lithium ion, which due to its small size forms tetrahedral tetrahydrate complexes [Li(H2O)4]+; the alkali metals form these complexes because their ions are attracted by electrostatic forces of attraction to the polar water molecules. Because of this, anhydrous salts containing alkali metal cations are often used as desiccants. Alkali metals also readily form complexes with crown ethers (e.g. 12-crown-4 for Li+, 15-crown-5 for Na+, 18-crown-6 for K+, and 21-crown-7 for Rb+) and cryptands due to electrostatic attraction.
Ammonia solutions
The alkali metals dissolve slowly in liquid ammonia, forming ammoniacal solutions of solvated metal cation M+ and solvated electron e−, which react to form hydrogen gas and the alkali metal amide (MNH2, where M represents an alkali metal): this was first noted by Humphry Davy in 1809 and rediscovered by W. Weyl in 1864. The process may be speeded up by a catalyst. Similar solutions are formed by the heavy divalent alkaline earth metals calcium, strontium, barium, as well as the divalent lanthanides, europium and ytterbium. The amide salt is quite insoluble and readily precipitates out of solution, leaving intensely coloured ammonia solutions of the alkali metals. In 1907, Charles A. Kraus identified the colour as being due to the presence of solvated electrons, which contribute to the high electrical conductivity of these solutions. At low concentrations (below 3 M), the solution is dark blue and has ten times the conductivity of aqueous sodium chloride; at higher concentrations (above 3 M), the solution is copper-coloured and has approximately the conductivity of liquid metals like mercury. In addition to the alkali metal amide salt and solvated electrons, such ammonia solutions also contain the alkali metal cation (M+), the neutral alkali metal atom (M), diatomic alkali metal molecules (M2) and alkali metal anions (M−). These are unstable and eventually become the more thermodynamically stable alkali metal amide and hydrogen gas. Solvated electrons are powerful reducing agents and are often used in chemical synthesis.
Organometallic
Organolithium
Being the smallest alkali metal, lithium forms the widest variety of and most stable organometallic compounds, which are bonded covalently. Organolithium compounds are electrically non-conducting volatile solids or liquids that melt at low temperatures, and tend to form oligomers with the structure (RLi)x where R is the organic group. As the electropositive nature of lithium puts most of the charge density of the bond on the carbon atom, effectively creating a carbanion, organolithium compounds are extremely powerful bases and nucleophiles. For use as bases, butyllithiums are often used and are commercially available. An example of an organolithium compound is methyllithium ((CH3Li)x), which exists in tetrameric (x = 4, tetrahedral) and hexameric (x = 6, octahedral) forms. Organolithium compounds, especially n-butyllithium, are useful reagents in organic synthesis, as might be expected given lithium's diagonal relationship with magnesium, which plays an important role in the Grignard reaction. For example, alkyllithiums and aryllithiums may be used to synthesise aldehydes and ketones by reaction with metal carbonyls. The reaction with nickel tetracarbonyl, for example, proceeds through an unstable acyl nickel carbonyl complex which then undergoes electrophilic substitution to give the desired aldehyde (using H+ as the electrophile) or ketone (using an alkyl halide) product.
LiR + Ni(CO)4 → Li+[RCONi(CO)3]−
Li+[RCONi(CO)3]− → (H+, solvent) Li+ + RCHO + [(solvent)Ni(CO)3]
Li+[RCONi(CO)3]− → (R′Br, solvent) Li+ + RR′CO + [(solvent)Ni(CO)3]
Alkyllithiums and aryllithiums may also react with N,N-disubstituted amides to give aldehydes and ketones, and symmetrical ketones by reacting with carbon monoxide. They thermally decompose to eliminate a β-hydrogen, producing alkenes and lithium hydride: another route is the reaction of ethers with alkyl- and aryllithiums that act as strong bases. In non-polar solvents, aryllithiums react as the carbanions they effectively are, turning carbon dioxide to aromatic carboxylic acids (ArCO2H) and aryl ketones to tertiary carbinols (Ar'2C(Ar)OH). Finally, they may be used to synthesise other organometallic compounds through metal-halogen exchange.
Heavier alkali metals
Unlike the organolithium compounds, the organometallic compounds of the heavier alkali metals are predominantly ionic. The application of organosodium compounds in chemistry is limited in part due to competition from organolithium compounds, which are commercially available and exhibit more convenient reactivity. The principal organosodium compound of commercial importance is sodium cyclopentadienide. Sodium tetraphenylborate can also be classified as an organosodium compound since in the solid state sodium is bound to the aryl groups. Organometallic compounds of the higher alkali metals are even more reactive than organosodium compounds and of limited utility. A notable reagent is Schlosser's base, a mixture of n-butyllithium and potassium tert-butoxide. This reagent reacts with propene to form the compound allylpotassium (KCH2CHCH2). cis-2-Butene and trans-2-butene equilibrate when in contact with alkali metals. Whereas isomerisation is fast with lithium and sodium, it is slow with the heavier alkali metals. The heavier alkali metals also favour the sterically congested conformation. Several crystal structures of organopotassium compounds have been reported, establishing that they, like the sodium compounds, are polymeric. Organosodium, organopotassium, organorubidium and organocaesium compounds are all mostly ionic and are insoluble (or nearly so) in nonpolar solvents.
Alkyl and aryl derivatives of sodium and potassium tend to react with air. They cause the cleavage of ethers, generating alkoxides. Unlike alkyllithium compounds, alkylsodiums and alkylpotassiums cannot be made by reacting the metals with alkyl halides because Wurtz coupling occurs:
RM + R'X → R–R' + MX
As such, they have to be made by reacting alkylmercury compounds with sodium or potassium metal in inert hydrocarbon solvents. While methylsodium forms tetramers like methyllithium, methylpotassium is more ionic and has the nickel arsenide structure with discrete methyl anions and potassium cations.
The alkali metals and their hydrides react with acidic hydrocarbons, for example cyclopentadienes and terminal alkynes, to give salts. Liquid ammonia, ether, or hydrocarbon solvents are used, the most common of which being tetrahydrofuran. The most important of these compounds is sodium cyclopentadienide, NaC5H5, an important precursor to many transition metal cyclopentadienyl derivatives. Similarly, the alkali metals react with cyclooctatetraene in tetrahydrofuran to give alkali metal cyclooctatetraenides; for example, dipotassium cyclooctatetraenide (K2C8H8) is an important precursor to many metal cyclooctatetraenyl derivatives, such as uranocene. The large and very weakly polarising alkali metal cations can stabilise large, aromatic, polarisable radical anions, such as the dark-green sodium naphthalenide, Na+[C10H8•]−, a strong reducing agent.
Representative reactions of alkali metals
Reaction with oxygen
Upon reacting with oxygen, alkali metals form oxides, peroxides, superoxides and suboxides; the first three are more common. On combustion in excess oxygen, lithium forms mainly the oxide Li2O (with some peroxide as a minor product), sodium forms mainly the peroxide Na2O2 (with some oxide), and potassium, rubidium and caesium form mainly the superoxides KO2, RbO2 and CsO2.
The alkali metal peroxides are ionic compounds that are unstable in water. The peroxide anion is weakly bound to the cation, and it is hydrolysed, forming stronger covalent bonds.
Na2O2 + 2H2O → 2NaOH + H2O2
The other oxygen compounds are also unstable in water.
2KO2 + 2H2O → 2KOH + H2O2 + O2
Li2O + H2O → 2LiOH
Reaction with sulfur
With sulfur, they form sulfides and polysulfides.
2Na + 1/8S8 → Na2S + 1/8S8 → Na2S2...Na2S7
Because alkali metal sulfides are essentially salts of a weak acid and a strong base, they form basic solutions.
S2− + H2O ⇌ HS− + OH−
HS− + H2O ⇌ H2S + OH−
Reaction with nitrogen
Lithium is the only alkali metal that combines directly with nitrogen at room temperature.
3Li + 1/2N2 → Li3N
Li3N can react with water to liberate ammonia.
Li3N + 3H2O → 3LiOH + NH3
Reaction with hydrogen
With hydrogen, alkali metals form saline hydrides that hydrolyse in water.
2 Na + H2 → 2 NaH (Δ)
NaH + H2O → NaOH + H2↑
Reaction with carbon
Lithium is the only alkali metal that reacts directly with carbon to give dilithium acetylide (Li2C2). Sodium and potassium can react with acetylene to give acetylides.
2 Li + 2 C → Li2C2
2 Na + 2 C2H2 → 2 NaC2H + H2 (at 150 °C)
2 Na + 2 NaC2H → 2 Na2C2 + H2 (at 220 °C)
Reaction with water
On reaction with water, the alkali metals generate hydroxide ions and hydrogen gas. This reaction is vigorous and highly exothermic, and the hydrogen produced may ignite in air or even explode in the case of Rb and Cs.
Na + H2O → NaOH + 1/2H2
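To give a sense of why the evolved hydrogen is hazardous, the short calculation below estimates the gas volume released per gram of sodium; the molar gas volume near room temperature is an ideal-gas approximation assumed here.

```python
# Hydrogen evolved by Na + H2O -> NaOH + 1/2 H2, per gram of sodium.
M_Na = 22.99          # g/mol, molar mass of sodium
V_molar = 24.0        # L/mol, approximate molar gas volume near room temperature

mol_Na = 1.0 / M_Na           # moles of Na in 1 g
mol_H2 = 0.5 * mol_Na         # half a mole of H2 per mole of Na
print(f"{mol_H2 * V_molar:.2f} L of H2 per gram of Na")   # about 0.5 L
```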
Reaction with other salts
The alkali metals are very good reducing agents. They can reduce metal cations that are less electropositive. Titanium is produced industrially by the reduction of titanium tetrachloride with Na at 400 °C (the Hunter process).
TiCl4 + 4Na → 4NaCl + Ti
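As a rough illustration of the scale of sodium consumption implied by this equation, the short calculation below uses standard molar masses; it is a back-of-the-envelope sketch, not process data.

```python
# Stoichiometry of TiCl4 + 4 Na -> Ti + 4 NaCl:
# mass of sodium consumed per kilogram of titanium produced.
M_Ti = 47.87   # g/mol, molar mass of titanium
M_Na = 22.99   # g/mol, molar mass of sodium

na_per_ti = 4 * M_Na / M_Ti        # kg of Na per kg of Ti
print(f"{na_per_ti:.2f} kg of Na per kg of Ti")   # about 1.9 kg
```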
Reaction with organohalide compounds
Alkali metals react with halogen derivatives to generate hydrocarbons via the Wurtz reaction.
2CH3-Cl + 2Na → H3C-CH3 + 2NaCl
Alkali metals in liquid ammonia
Alkali metals dissolve in liquid ammonia or other donor solvents like aliphatic amines or hexamethylphosphoramide to give blue solutions. These solutions are believed to contain free electrons.
Na + x NH3 → Na+ + e−(NH3)x
Due to the presence of solvated electrons, these solutions are very powerful reducing agents used in organic synthesis.
The reduction of aromatic rings by these solutions, with an alcohol as a proton source, is known as the Birch reduction.
Other reductions that can be carried out by these solutions are:
S8 + 2e− → S82−
Fe(CO)5 + 2e− → Fe(CO)42− + CO
Extensions
Although francium is the heaviest alkali metal that has been discovered, there has been some theoretical work predicting the physical and chemical characteristics of hypothetical heavier alkali metals. Being the first period 8 element, the undiscovered element ununennium (element 119) is predicted to be the next alkali metal after francium and behave much like its lighter congeners; however, it is also predicted to differ from the lighter alkali metals in some properties. Its chemistry is predicted to be closer to that of potassium or rubidium instead of caesium or francium. This is unusual, as periodic trends ignoring relativistic effects would predict ununennium to be even more reactive than caesium and francium. This lowered reactivity is due to the relativistic stabilisation of ununennium's valence electron, increasing ununennium's first ionisation energy and decreasing the metallic and ionic radii; this effect is already seen for francium. This assumes that ununennium will behave chemically as an alkali metal, which, although likely, may not be true due to relativistic effects. The relativistic stabilisation of the 8s orbital also increases ununennium's electron affinity far beyond that of caesium and francium; indeed, ununennium is expected to have an electron affinity higher than that of all the alkali metals lighter than it. Relativistic effects also cause a very large drop in the polarisability of ununennium. On the other hand, ununennium is predicted to continue the trend of melting points decreasing going down the group, being expected to have a melting point between 0 °C and 30 °C.
The stabilisation of ununennium's valence electron and thus the contraction of the 8s orbital cause its atomic radius to be lowered to 240 pm, very close to that of rubidium (247 pm), so that the chemistry of ununennium in the +1 oxidation state should be more similar to the chemistry of rubidium than to that of francium. On the other hand, the ionic radius of the Uue+ ion is predicted to be larger than that of Rb+, because the 7p orbitals are destabilised and are thus larger than the p-orbitals of the lower shells. Ununennium may also show the +3 and +5 oxidation states, which are not seen in any other alkali metal, in addition to the +1 oxidation state that is characteristic of the other alkali metals and is also the main oxidation state of all the known alkali metals: this is because of the destabilisation and expansion of the 7p3/2 spinor, causing its outermost electrons to have a lower ionisation energy than what would otherwise be expected. Indeed, many ununennium compounds are expected to have a large covalent character, due to the involvement of the 7p3/2 electrons in the bonding.
Not as much work has been done predicting the properties of the alkali metals beyond ununennium. Although a simple extrapolation of the periodic table (by the Aufbau principle) would put element 169, unhexennium, under ununennium, Dirac-Fock calculations predict that the next element after ununennium with alkali-metal-like properties may be element 165, unhexpentium, which is predicted to have the electron configuration [Og] 5g18 6f14 7d10 8s2 8p1/22 9s1. This element would be intermediate in properties between an alkali metal and a group 11 element, and while its physical and atomic properties would be closer to the former, its chemistry may be closer to that of the latter. Further calculations show that unhexpentium would follow the trend of increasing ionisation energy beyond caesium, having an ionisation energy comparable to that of sodium, and that it should also continue the trend of decreasing atomic radii beyond caesium, having an atomic radius comparable to that of potassium. However, the 7d electrons of unhexpentium may also be able to participate in chemical reactions along with the 9s electron, possibly allowing oxidation states beyond +1, whence the likely transition metal behaviour of unhexpentium. Due to the alkali and alkaline earth metals both being s-block elements, these predictions for the trends and properties of ununennium and unhexpentium also mostly hold quite similarly for the corresponding alkaline earth metals unbinilium (Ubn) and unhexhexium (Uhh). Unsepttrium, element 173, may be an even better heavier homologue of ununennium; with a predicted electron configuration of [Usb] 6g1, it returns to the alkali-metal-like situation of having one easily removed electron far above a closed p-shell in energy, and is expected to be even more reactive than caesium.
The probable properties of further alkali metals beyond unsepttrium have not been explored yet as of 2019, and they may or may not be able to exist. In periods 8 and above of the periodic table, relativistic and shell-structure effects become so strong that extrapolations from lighter congeners become completely inaccurate. In addition, the relativistic and shell-structure effects (which stabilise the s-orbitals and destabilise and expand the d-, f-, and g-orbitals of higher shells) have opposite effects, causing even larger differences between relativistic and non-relativistic calculations of the properties of elements with such high atomic numbers. Interest in the chemical properties of ununennium, unhexpentium, and unsepttrium stems from the fact that they are located close to the expected locations of islands of stability, centred at elements 122 (306Ubb) and 164 (482Uhq).
Pseudo-alkali metals
Many other substances are similar to the alkali metals in their tendency to form monopositive cations. Analogously to the pseudohalogens, they have sometimes been called "pseudo-alkali metals". These substances include some elements and many more polyatomic ions; the polyatomic ions are especially similar to the alkali metals in their large size and weak polarising power.
Hydrogen
The element hydrogen, with one electron per neutral atom, is usually placed at the top of Group 1 of the periodic table because of its electron configuration. But hydrogen is not normally considered to be an alkali metal. Metallic hydrogen, which only exists at very high pressures, is known for its electrical and magnetic properties, not its chemical properties. Under typical conditions, pure hydrogen exists as a diatomic gas consisting of two atoms per molecule (H2); however, the alkali metals form diatomic molecules (such as dilithium, Li2) only at high temperatures, when they are in the gaseous state.
Hydrogen, like the alkali metals, has one valence electron and reacts easily with the halogens, but the similarities mostly end there because of the small size of a bare proton H+ compared to the alkali metal cations. Its placement above lithium is primarily due to its electron configuration. It is sometimes placed above fluorine due to their similar chemical properties, though the resemblance is likewise not absolute.
The first ionisation energy of hydrogen (1312.0 kJ/mol) is much higher than that of the alkali metals. As only one additional electron is required to fill in the outermost shell of the hydrogen atom, hydrogen often behaves like a halogen, forming the negative hydride ion, and is very occasionally considered to be a halogen on that basis. (The alkali metals can also form negative ions, known as alkalides, but these are little more than laboratory curiosities, being unstable.) An argument against this placement is that formation of hydride from hydrogen is endothermic, unlike the exothermic formation of halides from halogens. The radius of the H− anion also does not fit the trend of increasing size going down the halogens: indeed, H− is very diffuse because its single proton cannot easily control both electrons. It was expected for some time that liquid hydrogen would show metallic properties; while this has been shown to not be the case, under extremely high pressures, such as those found at the cores of Jupiter and Saturn, hydrogen does become metallic and behaves like an alkali metal; in this phase, it is known as metallic hydrogen. The electrical resistivity of liquid metallic hydrogen at 3000 K is approximately equal to that of liquid rubidium and caesium at 2000 K at the respective pressures when they undergo a nonmetal-to-metal transition.
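The quoted first ionisation energy of hydrogen follows directly from the Bohr/Rydberg ground-state binding energy of about 13.6 eV per atom; the short check below converts that per-atom value to a molar one, assuming only standard physical constants.

```python
# Ionisation energy of hydrogen: Rydberg energy (13.606 eV per atom) -> kJ/mol
rydberg_eV = 13.606          # ground-state binding energy of H, eV
e = 1.602e-19                # J per eV (elementary charge)
N_A = 6.022e23               # Avogadro constant, 1/mol

ie_kj_per_mol = rydberg_eV * e * N_A / 1000.0
print(f"{ie_kj_per_mol:.0f} kJ/mol")   # about 1313, close to the 1312.0 kJ/mol quoted above
```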
The 1s1 electron configuration of hydrogen, while analogous to that of the alkali metals (ns1), is unique because there is no 1p subshell. Hence it can lose an electron to form the hydron H+, or gain one to form the hydride ion H−. In the former case it resembles superficially the alkali metals; in the latter case, the halogens, but the differences due to the lack of a 1p subshell are important enough that neither group fits the properties of hydrogen well. Group 14 is also a good fit in terms of thermodynamic properties such as ionisation energy and electron affinity, but hydrogen cannot be tetravalent. Thus none of the three placements are entirely satisfactory, although group 1 is the most common placement (if one is chosen) because of the electron configuration and the fact that the hydron is by far the most important of all monatomic hydrogen species, being the foundation of acid-base chemistry. As an example of hydrogen's unorthodox properties stemming from its unusual electron configuration and small size, the hydrogen ion is very small (radius around 150 fm compared to the 50–220 pm size of most other atoms and ions) and so is nonexistent in condensed systems other than in association with other atoms or molecules. Indeed, transferring of protons between chemicals is the basis of acid-base chemistry. Also unique is hydrogen's ability to form hydrogen bonds, which are an effect of charge-transfer, electrostatic, and electron correlative contributing phenomena. While analogous lithium bonds are also known, they are mostly electrostatic. Nevertheless, hydrogen can take on the same structural role as the alkali metals in some molecular crystals, and has a close relationship with the lightest alkali metals (especially lithium).
Ammonium and derivatives
The ammonium ion (NH4+) has very similar properties to the heavier alkali metals, acting as an alkali metal intermediate between potassium and rubidium, and is often considered a close relative. For example, most alkali metal salts are soluble in water, a property which ammonium salts share. Ammonium is expected to behave stably as a metal (NH4+ ions in a sea of delocalised electrons) at very high pressures (though less than the typical pressure of around 100 GPa where transitions from insulating to metallic behaviour occur), and could possibly occur inside the ice giants Uranus and Neptune, which may have significant impacts on their interior magnetic fields. It has been estimated that the transition from a mixture of ammonia and dihydrogen molecules to metallic ammonium may occur at pressures just below 25 GPa. Under standard conditions, ammonium can form a metallic amalgam with mercury.
Other "pseudo-alkali metals" include the alkylammonium cations, in which some of the hydrogen atoms in the ammonium cation are replaced by alkyl or aryl groups. In particular, the quaternary ammonium cations () are very useful since they are permanently charged, and they are often used as an alternative to the expensive Cs+ to stabilise very large and very easily polarisable anions such as . Tetraalkylammonium hydroxides, like alkali metal hydroxides, are very strong bases that react with atmospheric carbon dioxide to form carbonates. Furthermore, the nitrogen atom may be replaced by a phosphorus, arsenic, or antimony atom (the heavier nonmetallic pnictogens), creating a phosphonium () or arsonium () cation that can itself be substituted similarly; while stibonium () itself is not known, some of its organic derivatives are characterised.
Cobaltocene and derivatives
Cobaltocene, Co(C5H5)2, is a metallocene, the cobalt analogue of ferrocene. It is a dark purple solid. Cobaltocene has 19 valence electrons, one more than usually found in organotransition metal complexes, such as its very stable relative, ferrocene, in accordance with the 18-electron rule. This additional electron occupies an orbital that is antibonding with respect to the Co–C bonds. Consequently, many chemical reactions of Co(C5H5)2 are characterized by its tendency to lose this "extra" electron, yielding a very stable 18-electron cation known as cobaltocenium. Many cobaltocenium salts coprecipitate with caesium salts, and cobaltocenium hydroxide is a strong base that absorbs atmospheric carbon dioxide to form cobaltocenium carbonate. Like the alkali metals, cobaltocene is a strong reducing agent, and decamethylcobaltocene is stronger still due to the combined inductive effect of the ten methyl groups. Cobalt may be substituted by its heavier congener rhodium to give rhodocene, an even stronger reducing agent. Iridocene (involving iridium) would presumably be still more potent, but is not very well-studied due to its instability.
Thallium
Thallium is the heaviest stable element in group 13 of the periodic table. At the bottom of the periodic table, the inert-pair effect is quite strong, because of the relativistic stabilisation of the 6s orbital and the decreasing bond energy as the atoms increase in size so that the amount of energy released in forming two more bonds is not worth the high ionisation energies of the 6s electrons. It displays the +1 oxidation state that all the known alkali metals display, and thallium compounds with thallium in its +1 oxidation state closely resemble the corresponding potassium or silver compounds stoichiometrically due to the similar ionic radii of the Tl+ (164 pm), K+ (152 pm) and Ag+ (129 pm) ions. It was sometimes considered an alkali metal in continental Europe (but not in England) in the years immediately following its discovery, and was placed just after caesium as the sixth alkali metal in Dmitri Mendeleev's 1869 periodic table and Julius Lothar Meyer's 1868 periodic table. Mendeleev's 1871 periodic table and Meyer's 1870 periodic table put thallium in its current position in the boron group and left the space below caesium blank. However, thallium also displays the oxidation state +3, which no known alkali metal displays (although ununennium, the undiscovered seventh alkali metal, is predicted to possibly display the +3 oxidation state). The sixth alkali metal is now considered to be francium. While Tl+ is stabilised by the inert-pair effect, this inert pair of 6s electrons is still able to participate chemically, so that these electrons are stereochemically active in aqueous solution. Additionally, the thallium halides (except TlF) are quite insoluble in water, and TlI has an unusual structure because of the presence of the stereochemically active inert pair in thallium.
Copper, silver, and gold
The group 11 metals (or coinage metals), copper, silver, and gold, are typically categorised as transition metals given they can form ions with incomplete d-shells. Physically, they have the relatively low melting points and high electronegativity values associated with post-transition metals. "The filled d subshell and free s electron of Cu, Ag, and Au contribute to their high electrical and thermal conductivity. Transition metals to the left of group 11 experience interactions between s electrons and the partially filled d subshell that lower electron mobility." Chemically, the group 11 metals behave like main-group metals in their +1 valence states, and are hence somewhat related to the alkali metals: this is one reason for their previously being labelled as "group IB", paralleling the alkali metals' "group IA". They are occasionally classified as post-transition metals. Their spectra are analogous to those of the alkali metals. Their monopositive ions are diamagnetic and contribute no colour to their salts, like those of the alkali metals.
In Mendeleev's 1871 periodic table, copper, silver, and gold are listed twice, once under group VIII (with the iron triad and platinum group metals), and once under group IB. Group IB was nonetheless parenthesised to note that it was tentative. Mendeleev's main criterion for group assignment was the maximum oxidation state of an element: on that basis, the group 11 elements could not be classified in group IB, because copper(II) and gold(III) compounds were already known at that time. However, eliminating group IB would make group I the only main group (group VIII was labelled a transition group) to lack an A–B bifurcation. Soon afterward, a majority of chemists chose to classify these elements in group IB and remove them from group VIII for the resulting symmetry: this was the predominant classification until the rise of the modern medium-long 18-column periodic table, which separated the alkali metals and group 11 metals.
The coinage metals were traditionally regarded as a subdivision of the alkali metal group, due to them sharing the characteristic s1 electron configuration of the alkali metals (group 1: p6s1; group 11: d10s1). However, the similarities are largely confined to the stoichiometries of the +1 compounds of both groups, and not their chemical properties. This stems from the filled d subshell providing a much weaker shielding effect on the outermost s electron than the filled p subshell, so that the coinage metals have much higher first ionisation energies and smaller ionic radii than do the corresponding alkali metals. Furthermore, they have higher melting points, hardnesses, and densities, and lower reactivities and solubilities in liquid ammonia, as well as having more covalent character in their compounds. Finally, the alkali metals are at the top of the electrochemical series, whereas the coinage metals are almost at the very bottom. The coinage metals' filled d shell is much more easily disrupted than the alkali metals' filled p shell, so that the second and third ionisation energies are lower, enabling higher oxidation states than +1 and a richer coordination chemistry, thus giving the group 11 metals clear transition metal character. Particularly noteworthy is gold forming ionic compounds with rubidium and caesium, in which it forms the auride ion (Au−) which also occurs in solvated form in liquid ammonia solution: here gold behaves as a pseudohalogen because its 5d106s1 configuration has one electron less than the quasi-closed shell 5d106s2 configuration of mercury.
Production and isolation
The production of pure alkali metals is somewhat complicated due to their extreme reactivity with commonly used substances, such as water. From their silicate ores, all the stable alkali metals may be obtained the same way: sulfuric acid is first used to dissolve the desired alkali metal ion and aluminium(III) ions from the ore (leaching), whereupon basic precipitation removes the aluminium ions from the mixture as the hydroxide. The remaining insoluble alkali metal carbonate is then precipitated selectively; the salt is then dissolved in hydrochloric acid to produce the chloride. The result is then left to evaporate and the alkali metal can then be isolated. Lithium and sodium are typically isolated through electrolysis from their liquid chlorides, with calcium chloride typically added to lower the melting point of the mixture. The heavier alkali metals, however, are more typically isolated in a different way, where a reducing agent (typically sodium for potassium and magnesium or calcium for the heaviest alkali metals) is used to reduce the alkali metal chloride. The liquid or gaseous product (the alkali metal) then undergoes fractional distillation for purification. Most routes to the pure alkali metals require the use of electrolysis due to their high reactivity; one of the few which does not is the pyrolysis of the corresponding alkali metal azide, which yields the metal for sodium, potassium, rubidium, and caesium and the nitride for lithium.
Lithium salts have to be extracted from the water of mineral springs, brine pools, and brine deposits. The metal is produced electrolytically from a mixture of fused lithium chloride and potassium chloride.
Sodium occurs mostly in seawater and dried seabed, but is now produced through electrolysis of sodium chloride by lowering the melting point of the substance to below 700 °C through the use of a Downs cell. Extremely pure sodium can be produced through the thermal decomposition of sodium azide. Potassium occurs in many minerals, such as sylvite (potassium chloride). Previously, potassium was generally made from the electrolysis of potassium chloride or potassium hydroxide, found extensively in places such as Canada, Russia, Belarus, Germany, Israel, United States, and Jordan, in a method similar to how sodium was produced in the late 1800s and early 1900s. It can also be produced from seawater. However, these methods are problematic because the potassium metal tends to dissolve in its molten chloride and vaporises significantly at the operating temperatures, potentially forming the explosive superoxide. As a result, pure potassium metal is now produced by reducing molten potassium chloride with sodium metal at 850 °C.
Na (g) + KCl (l) ⇌ NaCl (l) + K (g)
Although sodium is less reactive than potassium, this process works because at such high temperatures potassium is more volatile than sodium and can easily be distilled off, so that the equilibrium shifts towards the right to produce more potassium gas and proceeds almost to completion.
Metals like sodium are obtained by electrolysis of molten salts, while rubidium and caesium are obtained mainly as by-products of lithium processing. To make pure caesium, ores of caesium and rubidium are crushed and heated to 650 °C with sodium metal, generating an alloy that can then be separated via fractional distillation. Because metallic caesium is too reactive to handle conveniently, it is normally offered as caesium azide (CsN3). Caesium reacts aggressively with water and ice to form caesium hydroxide (CsOH).
Rubidium is approximately the 16th most abundant element in the Earth's crust, but it is widely dispersed and only rarely concentrated in minerals of its own. It occurs together with caesium as a minor constituent of potassium minerals such as lepidolite, biotite, feldspar, carnallite, leucite, and pollucite, in deposits found in North America, South Africa, Russia, and Canada, and also in potassium-rich rocks and brines, which provide a commercial supply. Most rubidium is obtained commercially as a by-product of lithium extraction, chiefly from lepidolite. Rubidium is used in vacuum tubes as a getter, a material that combines with and removes trace gases.
For several years in the 1950s and 1960s, a by-product of the potassium production called Alkarb was a main source for rubidium. Alkarb contained 21% rubidium while the rest was potassium and a small fraction of caesium. Today the largest producers of caesium, for example the Tanco Mine in Manitoba, Canada, produce rubidium as by-product from pollucite. Today, a common method for separating rubidium from potassium and caesium is the fractional crystallisation of a rubidium and caesium alum (Cs, Rb)Al(SO4)2·12H2O, which yields pure rubidium alum after approximately 30 recrystallisations. The limited applications and the lack of a mineral rich in rubidium limit the production of rubidium compounds to 2 to 4 tonnes per year. Caesium, however, is not produced from the above reaction. Instead, the mining of pollucite ore is the main method of obtaining pure caesium, extracted from the ore mainly by three methods: acid digestion, alkaline decomposition, and direct reduction. Both metals are produced as by-products of lithium production: after 1958, when interest in lithium's thermonuclear properties increased sharply, the production of rubidium and caesium also increased correspondingly. Pure rubidium and caesium metals are produced by reducing their chlorides with calcium metal at 750 °C and low pressure.
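The need for roughly 30 recrystallisations can be rationalised with a simple model: if each crystallisation step enriches rubidium relative to caesium or potassium by a modest separation factor, the overall enrichment grows exponentially with the number of steps. The separation factor used below is purely hypothetical, chosen only to illustrate the arithmetic, not a measured value.

```python
# Illustrative model of repeated fractional crystallisation of the alum:
# each step multiplies the Rb:impurity ratio by a separation factor alpha.
alpha = 1.5          # hypothetical per-step separation factor (illustrative only)
steps = 30           # number of recrystallisations, as quoted above
start_ratio = 1.0    # assumed starting Rb:impurity molar ratio

final_ratio = start_ratio * alpha ** steps
print(f"After {steps} steps the Rb:impurity ratio grows by a factor of {final_ratio:.1e}")
# With alpha = 1.5, 30 steps give roughly a 2 x 10^5-fold enrichment.
```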
As a result of its extreme rarity in nature, most francium is synthesised in the nuclear reaction 197Au + 18O → 210Fr + 5 n, yielding francium-209, francium-210, and francium-211. The greatest quantity of francium ever assembled to date is about 300,000 neutral atoms, which were synthesised using the nuclear reaction given above. When the only natural isotope francium-223 is specifically required, it is produced as the alpha daughter of actinium-227, itself produced synthetically from the neutron irradiation of natural radium-226, one of the daughters of natural uranium-238.
Applications
Lithium, sodium, and potassium have many useful applications, while rubidium and caesium are very notable in academic contexts but do not have many applications yet. Lithium is the key ingredient for a range of lithium-based batteries, and lithium oxide can help process silica. Lithium stearate is a thickener and can be used to make lubricating greases; it is produced from lithium hydroxide, which is also used to absorb carbon dioxide in space capsules and submarines. Lithium chloride is used as a flux for brazing aluminium parts. In medicine, some lithium salts are used as mood-stabilising pharmaceuticals. Metallic lithium is used in alloys with magnesium and aluminium to give very tough and light alloys.
Sodium compounds have many applications, the most well-known being sodium chloride as table salt. Sodium salts of fatty acids are used as soap. Pure sodium metal also has many applications, including use in sodium-vapour lamps, which produce very efficient light compared to other types of lighting, and can help smooth the surface of other metals. Being a strong reducing agent, it is often used to reduce many other metals, such as titanium and zirconium, from their chlorides. Furthermore, it is very useful as a heat-exchange liquid in fast breeder nuclear reactors due to its low melting point, viscosity, and cross-section towards neutron absorption. Sodium-ion batteries may provide cheaper alternatives to their equivalent lithium-based cells. Both sodium and potassium are commonly used as GRAS counterions to create more water-soluble and hence more bioavailable salt forms of acidic pharmaceuticals.
Potassium compounds are often used as fertilisers as potassium is an important element for plant nutrition. Potassium hydroxide is a very strong base, and is used to control the pH of various substances. Potassium nitrate and potassium permanganate are often used as powerful oxidising agents. Potassium superoxide is used in breathing masks, as it reacts with carbon dioxide to give potassium carbonate and oxygen gas. Pure potassium metal is not often used, but its alloys with sodium may substitute for pure sodium in fast breeder nuclear reactors.
Rubidium and caesium are often used in atomic clocks. Caesium atomic clocks are extraordinarily accurate; if a clock had been made at the time of the dinosaurs, it would be off by less than four seconds after 80 million years. For that reason, a hyperfine transition of the caesium-133 atom is used in the definition of the second. Rubidium ions are often used in purple fireworks, and caesium is often used in drilling fluids in the petroleum industry.
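The "four seconds in 80 million years" figure corresponds to a fractional accuracy that can be checked with simple arithmetic, as sketched below.

```python
# Fractional accuracy implied by "less than 4 seconds of error in 80 million years".
seconds_per_year = 365.25 * 24 * 3600      # about 3.16e7 s
error_s = 4.0                              # accumulated error, s
interval_s = 80e6 * seconds_per_year       # 80 million years, s

fractional_accuracy = error_s / interval_s
print(f"fractional accuracy ~ {fractional_accuracy:.1e}")   # about 1.6e-15
```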
Francium has no commercial applications, but because of francium's relatively simple atomic structure, among other things, it has been used in spectroscopy experiments, leading to more information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels, similar to those predicted by quantum theory.
Biological role and precautions
Metals
Pure alkali metals are dangerously reactive with air and water and must be kept away from heat, fire, oxidising agents, acids, most organic compounds, halocarbons, plastics, and moisture. They also react with carbon dioxide and carbon tetrachloride, so that normal fire extinguishers are counterproductive when used on alkali metal fires. Some Class D dry powder extinguishers designed for metal fires are effective, depriving the fire of oxygen and cooling the alkali metal.
Experiments are usually conducted using only small quantities of a few grams in a fume hood. Small quantities of lithium may be disposed of by reaction with cool water, but the heavier alkali metals should be dissolved in the less reactive isopropanol. The alkali metals must be stored under mineral oil or an inert atmosphere. The inert atmosphere used may be argon or nitrogen gas, except for lithium, which reacts with nitrogen. Rubidium and caesium must be kept away from air, even under oil, because even a small amount of air diffused into the oil may trigger formation of the dangerously explosive peroxide; for the same reason, potassium should not be stored under oil in an oxygen-containing atmosphere for longer than 6 months.
Ions
The bioinorganic chemistry of the alkali metal ions has been extensively reviewed.
Solid state crystal structures have been determined for many complexes of alkali metal ions in small peptides, nucleic acid constituents, carbohydrates and ionophore complexes.
Lithium naturally only occurs in traces in biological systems and has no known biological role, but does have effects on the body when ingested. Lithium carbonate is used as a mood stabiliser in psychiatry to treat bipolar disorder (manic-depression) in daily doses of about 0.5 to 2 grams, although there are side-effects. Excessive ingestion of lithium causes drowsiness, slurred speech and vomiting, among other symptoms, and poisons the central nervous system, which is dangerous as the required dosage of lithium to treat bipolar disorder is only slightly lower than the toxic dosage. Its biochemistry, the way it is handled by the human body and studies using rats and goats suggest that it is an essential trace element, although the natural biological function of lithium in humans has yet to be identified.
Sodium and potassium occur in all known biological systems, generally functioning as electrolytes inside and outside cells. Sodium is an essential nutrient that regulates blood volume, blood pressure, osmotic equilibrium and pH; the minimum physiological requirement for sodium is 500 milligrams per day. Sodium chloride (also known as common salt) is the principal source of sodium in the diet, and is used as seasoning and preservative, such as for pickling and jerky; most of it comes from processed foods. The Dietary Reference Intake for sodium is 1.5 grams per day, but most people in the United States consume more than 2.3 grams per day, the minimum amount that promotes hypertension; this in turn causes 7.6 million premature deaths worldwide.
Potassium is the major cation (positive ion) inside animal cells, while sodium is the major cation outside animal cells. The concentration differences of these charged particles causes a difference in electric potential between the inside and outside of cells, known as the membrane potential. The balance between potassium and sodium is maintained by ion transporter proteins in the cell membrane. The cell membrane potential created by potassium and sodium ions allows the cell to generate an action potential—a "spike" of electrical discharge. The ability of cells to produce electrical discharge is critical for body functions such as neurotransmission, muscle contraction, and heart function. Disruption of this balance may thus be fatal: for example, ingestion of large amounts of potassium compounds can lead to hyperkalemia strongly influencing the cardiovascular system. Potassium chloride is used in the United States for lethal injection executions.
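The link between these potassium and sodium concentration differences and the membrane potential is usually quantified with the Nernst equation; the sketch below uses typical textbook intracellular and extracellular concentrations, which are assumptions rather than values taken from this article.

```python
import math

# Nernst equation: E = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside)
R = 8.314       # gas constant, J/(mol*K)
T = 310.0       # temperature, K (about 37 degrees C)
F = 96485.0     # Faraday constant, C/mol

def nernst_mV(z, c_out_mM, c_in_mM):
    """Equilibrium (Nernst) potential in millivolts for an ion of charge z."""
    return 1000.0 * (R * T / (z * F)) * math.log(c_out_mM / c_in_mM)

# Typical mammalian cell concentrations (assumed textbook values, mM):
print(f"E_K  ~ {nernst_mV(+1, 5.0, 140.0):.0f} mV")    # about -89 mV
print(f"E_Na ~ {nernst_mV(+1, 145.0, 12.0):.0f} mV")   # about +67 mV
```

The opposite signs of the two equilibrium potentials reflect the opposite concentration gradients maintained by the ion transporters, which is what allows an action potential to swing the membrane potential between them.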
Due to their similar atomic radii, rubidium and caesium in the body mimic potassium and are taken up similarly. Rubidium has no known biological role, but may help stimulate metabolism and, similarly to caesium, may replace potassium in the body, causing potassium deficiency. Partial substitution is quite possible and rather non-toxic: a 70 kg person contains on average 0.36 g of rubidium, and an increase in this value by 50 to 100 times did not show negative effects in test persons. Rats can survive up to 50% substitution of potassium by rubidium. Rubidium (and to a much lesser extent caesium) can function as temporary cures for hypokalemia; while rubidium can adequately physiologically substitute for potassium in some systems, caesium is never able to do so. There is only very limited evidence in the form of deficiency symptoms for rubidium being possibly essential in goats; even if this is true, the trace amounts usually present in food are more than enough.
Caesium compounds are rarely encountered by most people, but most caesium compounds are mildly toxic. Like rubidium, caesium tends to substitute potassium in the body, but is significantly larger and is therefore a poorer substitute. Excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources. As such, caesium is not a major chemical environmental pollutant. The median lethal dose (LD50) value for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. Caesium chloride has been promoted as an alternative cancer therapy, but has been linked to the deaths of over 50 patients, on whom it was used as part of a scientifically unvalidated cancer treatment.
Radioisotopes of caesium require special precautions: the improper handling of caesium-137 gamma ray sources can lead to release of this radioisotope and radiation injuries. Perhaps the best-known case is the Goiânia accident of 1987, in which an improperly-disposed-of radiation therapy system from an abandoned clinic in the city of Goiânia, Brazil, was scavenged from a junkyard, and the glowing caesium salt sold to curious, uneducated buyers. This led to four deaths and serious injuries from radiation exposure. Together with caesium-134, iodine-131, and strontium-90, caesium-137 was among the isotopes distributed by the Chernobyl disaster which constitute the greatest risk to health. Radioisotopes of francium would presumably be dangerous as well due to their high decay energy and short half-life, but none have been produced in large enough amounts to pose any serious risk.
Notes
References
A
Groups (periodic table)
Periodic table
Articles containing video clips | Alkali metal | [
"Chemistry"
] | 22,635 | [
"Periodic table",
"Groups (periodic table)"
] |
673 | https://en.wikipedia.org/wiki/Atomic%20number | The atomic number or nuclear charge number (symbol Z) of a chemical element is the charge number of its atomic nucleus. For ordinary nuclei composed of protons and neutrons, this is equal to the proton number (np) or the number of protons found in the nucleus of every atom of that element. The atomic number can be used to uniquely identify ordinary chemical elements. In an ordinary uncharged atom, the atomic number is also equal to the number of electrons.
For an ordinary atom which contains protons, neutrons and electrons, the sum of the atomic number Z and the neutron number N gives the atom's atomic mass number A. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of the nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in daltons (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A.
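As a concrete illustration of the relation A = Z + N and the roughly 1% agreement with the relative isotopic mass, the short example below uses chlorine-35; the measured isotopic mass is an approximate literature value assumed here.

```python
# Mass number A = Z + N, compared with the measured relative isotopic mass.
Z = 17                   # protons in chlorine
N = 18                   # neutrons in chlorine-35
A = Z + N                # mass number, 35
isotopic_mass = 34.969   # relative isotopic mass of Cl-35 in daltons (approximate)

print(f"A = {A}")
print(f"deviation = {abs(isotopic_mass - A) / A:.3%}")   # well under 1%
```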
Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century.
The conventional symbol Z comes from the German word Zahl ('number'), which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the word Atomzahl (and its English equivalent atomic number) come into common use in this context.
The rules above do not always apply to exotic atoms which contain short-lived elementary particles other than protons, neutrons and electrons.
History
In the 19th century, the term "atomic number" typically meant the number of atoms in a given volume. Modern chemists prefer to use the concept of molar concentration.
In 1913, Antonius van den Broek proposed that the electric charge of an atomic nucleus, expressed as a multiplier of the elementary charge, was equal to the element's sequential position on the periodic table. Ernest Rutherford, in various articles in which he discussed van den Broek's idea, used the term "atomic number" to refer to an element's position on the periodic table. No writer before Rutherford is known to have used the term "atomic number" in this way, so it was probably he who established this definition.
After Rutherford deduced the existence of the proton in 1920, "atomic number" customarily referred to the proton number of an atom. In 1921, the German Atomic Weight Commission based its new periodic table on the nuclear charge number and in 1923 the International Committee on Chemical Elements followed suit.
The periodic table and a natural number for each element
The periodic table of elements creates an ordering of the elements, and so they can be numbered in order.
Dmitri Mendeleev arranged his first periodic tables (first published on March 6, 1869) in order of atomic weight ("Atomgewicht"). However, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium (atomic weight 127.6) ahead of iodine (atomic weight 126.9). This placement is consistent with the modern practice of ordering the elements by proton number, Z, but that number was not known or suspected at the time.
A simple numbering based on atomic weight position was never entirely satisfactory. In addition to the case of iodine and tellurium, several other pairs of elements (such as argon and potassium, cobalt and nickel) were later shown to have nearly identical or reversed atomic weights, thus requiring their placement in the periodic table to be determined by their chemical properties. However the gradual identification of more and more chemically similar lanthanide elements, whose atomic number was not obvious, led to inconsistency and uncertainty in the periodic numbering of elements at least from lutetium (element 71) onward (hafnium was not known at this time).
The Rutherford-Bohr model and van den Broek
In 1911, Ernest Rutherford gave a model of the atom in which a central nucleus held most of the atom's mass and a positive charge which, in units of the electron's charge, was to be approximately equal to half of the atom's atomic weight, expressed in numbers of hydrogen atoms. This central charge would thus be approximately half the atomic weight (though it was almost 25% different from the atomic number of gold, Z = 79, the single element from which Rutherford made his guess). Nevertheless, in spite of Rutherford's estimation that gold had a central charge of about 100 (but was element 79 on the periodic table), a month after Rutherford's paper appeared, Antonius van den Broek first formally suggested that the central charge and number of electrons in an atom were exactly equal to its place in the periodic table (also known as element number, atomic number, and symbolized Z). This eventually proved to be the case.
Moseley's 1913 experiment
The experimental position improved dramatically after research by Henry Moseley in 1913. Moseley, after discussions with Bohr who was at the same lab (and who had used Van den Broek's hypothesis in his Bohr model of the atom), decided to test Van den Broek's and Bohr's hypothesis directly, by seeing if spectral lines emitted from excited atoms fitted the Bohr theory's postulation that the frequency of the spectral lines be proportional to the square of Z.
To do this, Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminium (Z = 13) to gold (Z = 79) used as a series of movable anodic targets inside an x-ray tube. The square root of the frequency of these photons increased from one target to the next in an arithmetic progression. This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated electric charge of the nucleus, i.e. the element number Z. Among other things, Moseley demonstrated that the lanthanide series (from lanthanum to lutetium inclusive) must have 15 members—no fewer and no more—which was far from obvious from known chemistry at that time.
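Moseley's observation can be expressed in the approximate Kα form E ≈ (3/4)·13.6 eV·(Z − 1)², so the square root of the frequency (or photon energy) grows linearly with Z. The sketch below applies this textbook form of Moseley's law to two elements; the measured comparison values in the comments are approximate.

```python
# Moseley's law for K-alpha X-ray lines: E = (3/4) * 13.6 eV * (Z - 1)^2,
# so sqrt(E) (and sqrt(frequency)) increases linearly with atomic number Z.
RYDBERG_EV = 13.6

def k_alpha_energy_keV(Z):
    """Approximate K-alpha photon energy predicted by Moseley's law."""
    return 0.75 * RYDBERG_EV * (Z - 1) ** 2 / 1000.0

print(f"Al (Z=13): {k_alpha_energy_keV(13):.2f} keV")  # ~1.47 keV (measured ~1.49 keV)
print(f"Cu (Z=29): {k_alpha_energy_keV(29):.2f} keV")  # ~8.00 keV (measured ~8.05 keV)
```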
Missing elements
After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium (Z = 92) were examined by his method. There were seven elements (with Z < 92) which were not found and therefore identified as still undiscovered, corresponding to atomic numbers 43, 61, 72, 75, 85, 87 and 91. From 1918 to 1947, all seven of these missing elements were discovered. By this time, the first four transuranium elements had also been discovered, so that the periodic table was complete with no gaps as far as curium (Z = 96).
The proton and the idea of nuclear electrons
In 1915, the reason for nuclear charge being quantized in units of Z, which were now recognized to be the same as the element number, was not understood. An old idea called Prout's hypothesis had postulated that the elements were all made of residues (or "protyles") of the lightest element hydrogen, which in the Bohr-Rutherford model had a single electron and a nuclear charge of one. However, as early as 1907, Rutherford and Thomas Royds had shown that alpha particles, which had a charge of +2, were the nuclei of helium atoms, which had a mass four times that of hydrogen, not two times. If Prout's hypothesis were true, something had to be neutralizing some of the charge of the hydrogen nuclei present in the nuclei of heavier atoms.
In 1917, Rutherford succeeded in generating hydrogen nuclei from a nuclear reaction between alpha particles and nitrogen gas, and believed he had proven Prout's law. He called the new heavy nuclear particles protons in 1920 (alternate names being proutons and protyles). It had been immediately apparent from the work of Moseley that the nuclei of heavy atoms have more than twice as much mass as would be expected from their being made of hydrogen nuclei, and thus there was required a hypothesis for the neutralization of the extra protons presumed present in all heavy nuclei. A helium nucleus was presumed to have four protons plus two "nuclear electrons" (electrons bound inside the nucleus) to cancel two charges. At the other end of the periodic table, a nucleus of gold with a mass 197 times that of hydrogen was thought to contain 118 nuclear electrons in the nucleus to give it a residual charge of +79, consistent with its atomic number.
Discovery of the neutron makes Z the proton number
All consideration of nuclear electrons ended with James Chadwick's discovery of the neutron in 1932. An atom of gold now was seen as containing 118 neutrons rather than 118 nuclear electrons, and its positive nuclear charge now was realized to come entirely from a content of 79 protons. Since Moseley had previously shown that the atomic number Z of an element equals this positive charge, it was now clear that Z is identical to the number of protons of its nuclei.
Chemical properties
Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is Z (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of any mixture of atoms with a given atomic number.
New elements
The quest for new elements is usually described using atomic numbers. All elements with atomic numbers 1 to 118 have been observed. Synthesis of new elements is accomplished by bombarding target atoms of heavy elements with ions, such that the sum of the atomic numbers of the target and ion elements equals the atomic number of the element being created. In general, the half-life of a nuclide becomes shorter as atomic number increases, though undiscovered nuclides with certain "magic" numbers of protons and neutrons may have relatively longer half-lives and comprise an island of stability.
A hypothetical element composed only of neutrons, neutronium, has also been proposed and would have atomic number 0, but has never been observed.
See also
References
Chemical properties
Nuclear physics
Atoms
Dimensionless numbers of chemistry
Numbers | Atomic number | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,311 | [
"Quantity",
"Chemical quantities",
"Mathematical objects",
"Numbers",
"Arithmetic",
"nan",
"Nuclear physics",
"Atoms",
"Dimensionless numbers of chemistry",
"Matter"
] |
674 | https://en.wikipedia.org/wiki/Anatomy | Anatomy () is the branch of morphology concerned with the study of the internal structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine, and is often studied alongside physiology.
Anatomy is a complex and dynamic field that is constantly evolving as discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω témnō "I cut"), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function, such as the digestive system.
Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels.
The term "anatomy" is commonly taken to refer to human anatomy. However, substantially similar structures and tissues are found throughout the rest of the animal kingdom, and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy.
Animal tissues
The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues and these animals are also known as eumetazoans. They have an internal digestive chamber, with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Metazoans do not include the sponges, which have undifferentiated cells.
Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more in number and much smaller than those in the plant cell. The body tissues are composed of numerous types of cells, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo, the ectoderm, mesoderm and endoderm.
Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue.
Connective tissue
Connective tissues are fibrous and made up of cells scattered among non-living material called the extracellular matrix. Often called fascia (from the Latin "fascia," meaning "band" or "bandage"), connective tissues give shape to organs and hold them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans, or by the cross-linking of its proteins, as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed.
Epithelium
Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane; the lower layer is the reticular lamina, which lies next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells.
Muscle tissue
Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types; smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractibility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body.
Nervous tissue
Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach.
Vertebrate anatomy
All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics: a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord, and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth, retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution.
Fish anatomy
The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage, in cartilaginous fish, or bone in bony fish. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays, which with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and on round the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, and these respond to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large, yolky eggs. Some species are ovoviviparous and the young develop internally but others are oviparous and the larvae develop externally in egg cases.
The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column.
Amphibian anatomy
Amphibians are a class of animals comprising frogs, salamanders and caecilians. They are tetrapods, but the caecilians and a few species of salamander have either no limbs or their limbs are much reduced in size. Their main bones are hollow and lightweight and are fully ossified and the vertebrae interlock with each other and have articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, but contains many mucous glands and in some species, poison glands. The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Amphibians breathe by means of buccal pumping, a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. These are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin which needs to be kept moist.
In frogs the pelvic girdle is robust and the hind legs are much longer and stronger than the forelimbs. The feet have four or five digits and the toes are often webbed for swimming or have suction pads for climbing. Frogs have large eyes and no tail. Salamanders resemble lizards in appearance; their short legs project sideways, the belly is close to or in contact with the ground and they have a long tail. Caecilians superficially resemble earthworms and are limbless. They burrow by means of zones of muscle contractions which move along the body and they swim by undulating their body from side to side.
Reptile anatomy
Reptiles are a class of animals comprising turtles, tuataras, lizards, snakes and crocodiles. They are tetrapods, but the snakes and a few species of lizard either have no limbs or their limbs are much reduced in size. Their bones are better ossified and their skeletons stronger than those of amphibians. The teeth are conical and mostly uniform in size. The surface cells of the epidermis are modified into horny scales which create a waterproof layer. Reptiles are unable to use their skin for respiration as do amphibians and have a more efficient respiratory system drawing air into their lungs by expanding their chest walls. The heart resembles that of the amphibian but there is a septum which more completely separates the oxygenated and deoxygenated bloodstreams. The reproductive system has evolved for internal fertilization, with a copulatory organ present in most species. The eggs are surrounded by amniotic membranes which prevent them from drying out and are laid on land, or develop internally in some species. The bladder is small as nitrogenous waste is excreted as uric acid.
Turtles are notable for their protective shells. They have an inflexible trunk encased in a horny carapace above and a plastron below. These are formed from bony plates embedded in the dermis which are overlain by horny ones and are partially fused with the ribs and spine. The neck is long and flexible and the head and the legs can be drawn back inside the shell. Turtles are vegetarians and the typical reptile teeth have been replaced by sharp, horny plates. In aquatic species, the front legs are modified into flippers.
Tuataras superficially resemble lizards but the lineages diverged in the Triassic period. There is one living species, Sphenodon punctatus. The skull has two openings (fenestrae) on either side and the jaw is rigidly attached to the skull. There is one row of teeth in the lower jaw and this fits between the two rows in the upper jaw when the animal chews. The teeth are merely projections of bony material from the jaw and eventually wear down. The brain and heart are more primitive than those of other reptiles, and the lungs have a single chamber and lack bronchi. The tuatara has a well-developed parietal eye on its forehead.
Lizards have skulls with only one fenestra on each side, the lower bar of bone below the second fenestra having been lost. This results in the jaws being less rigidly attached which allows the mouth to open wider. Lizards are mostly quadrupeds, with the trunk held off the ground by short, sideways-facing legs, but a few species have no limbs and resemble snakes. Lizards have moveable eyelids, eardrums are present and some species have a central parietal eye.
Snakes are closely related to lizards, having branched off from a common ancestral lineage during the Cretaceous period, and they share many of the same features. The skeleton consists of a skull, a hyoid bone, spine and ribs though a few species retain a vestige of the pelvis and rear limbs in the form of pelvic spurs. The bar under the second fenestra has also been lost and the jaws have extreme flexibility allowing the snake to swallow its prey whole. Snakes lack moveable eyelids, the eyes being covered by transparent "spectacle" scales. They do not have eardrums but can detect ground vibrations through the bones of their skull. Their forked tongues are used as organs of taste and smell and some species have sensory pits on their heads enabling them to locate warm-blooded prey.
Crocodilians are large, low-slung aquatic reptiles with long snouts and large numbers of teeth. The head and trunk are dorso-ventrally flattened and the tail is laterally compressed. It undulates from side to side to force the animal through the water when swimming. The tough keratinized scales provide body armour and some are fused to the skull. The nostrils, eyes and ears are elevated above the top of the flat head enabling them to remain above the surface of the water when the animal is floating. Valves seal the nostrils and ears when it is submerged. Unlike other reptiles, crocodilians have hearts with four chambers allowing complete separation of oxygenated and deoxygenated blood.
Bird anatomy
Birds are tetrapods but though their hind limbs are used for walking or hopping, their front limbs are wings covered with feathers and adapted for flight. Birds are endothermic, have a high metabolic rate, a light skeletal system and powerful muscles. The long bones are thin, hollow and very light. Air sac extensions from the lungs occupy the centre of some bones. The sternum is wide and usually has a keel and the caudal vertebrae are fused. There are no teeth and the narrow jaws are adapted into a horn-covered beak. The eyes are relatively large, particularly in nocturnal species such as owls. They face forwards in predators and sideways in ducks.
The feathers are outgrowths of the epidermis and are found in localized bands from where they fan out over the skin. Large flight feathers are found on the wings and tail, contour feathers cover the bird's surface and fine down occurs on young birds and under the contour feathers of water birds. The only cutaneous gland is the single uropygial gland near the base of the tail. This produces an oily secretion that waterproofs the feathers when the bird preens. There are scales on the legs, feet and claws on the tips of the toes.
Mammal anatomy
Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs, but some aquatic mammals have no limbs or limbs modified into fins, and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers, and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea.
Mammals are amniotes, and most are viviparous, giving birth to live young. Exceptions to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a teat and completes its development.
Human anatomy
Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet.
Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials and in addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope.
Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology.
Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells.
Invertebrate anatomy
Invertebrates constitute a vast array of living organisms ranging from the simplest unicellular eukaryotes such as Paramecium to such complex multicellular animals as the octopus, lobster and dragonfly. They constitute about 95% of the animal species. By definition, none of these creatures has a backbone. The cells of single-cell protozoans have the same basic structure as those of multicellular animals but some parts are specialized into the equivalent of tissues and organs. Locomotion is often provided by cilia or flagella or may proceed via the advance of pseudopodia, food may be gathered by phagocytosis, energy needs may be supplied by photosynthesis and the cell may be supported by an endoskeleton or an exoskeleton. Some protozoans can form multicellular colonies.
Metazoans are multicellular organisms, with different groups of cells serving different functions. The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells and secretes an extracellular matrix which provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and are composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms and silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles.
Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm. He observed that when a ring-like portion of bark was removed on a trunk a swelling occurred in the tissues above the ring, and he unmistakably interpreted this as growth stimulated by food coming down from the leaves, and being captured above the ring.
Arthropod anatomy
Arthropods comprise the largest phylum of invertebrates in the animal kingdom with over a million known species.
Insects possess segmented bodies supported by a hard-jointed outer covering, the exoskeleton, made mostly of chitin. The segments of the body are organized into three distinct parts, a head, a thorax and an abdomen. The head typically bears a pair of sensory antennae, a pair of compound eyes, one to three simple eyes (ocelli) and three sets of modified appendages that form the mouthparts. The thorax has three pairs of segmented legs, one pair each for the three segments that compose the thorax and one or two pairs of wings. The abdomen is composed of eleven segments, some of which may be fused and houses the digestive, respiratory, excretory and reproductive systems. There is considerable variation between species and many adaptations to the body parts, especially wings, legs, antennae and mouthparts.
Spiders, an order of arachnids, have four pairs of legs and a body of two segments: a cephalothorax and an abdomen. Spiders have no wings and no antennae. They have mouthparts called chelicerae, which are often connected to venom glands, as most spiders are venomous. They have a second pair of appendages called pedipalps attached to the cephalothorax. These have similar segmentation to the legs and function as taste and smell organs. At the end of each male pedipalp is a spoon-shaped cymbium that acts to support the copulatory organ.
Other branches of anatomy
Surface anatomy is important as the study of anatomical landmarks that can be readily seen from the exterior contours of the body. It enables medics and veterinarians to gauge the position and anatomy of the associated deeper structures. Superficial is a directional term that indicates that structures are located relatively close to the surface of the body.
Comparative anatomy relates to the comparison of anatomical structures (both gross and microscopic) in different animals.
Artistic anatomy relates to anatomic studies of body proportions for artistic reasons.
History
Ancient
In 1600 BCE, the Edwin Smith Papyrus, an Ancient Egyptian medical text, described the heart and its vessels, as well as the brain and its meninges and cerebrospinal fluid, and the liver, spleen, kidneys, uterus and bladder. It showed the blood vessels diverging from the heart. The Ebers Papyrus features a "treatise on the heart", with vessels carrying all the body's fluids to or from every member of the body.
Ancient Greek anatomy and physiology underwent great changes and advances throughout the early medieval world. Over time, this medical practice expanded due to a continually developing understanding of the functions of organs and structures in the body. Phenomenal anatomical observations of the human body were made, which contributed to the understanding of the brain, eye, liver, reproductive organs, and nervous system.
The Hellenistic Egyptian city of Alexandria was the stepping-stone for Greek anatomy and physiology. Alexandria not only housed the biggest library for medical records and books of the liberal arts in the world during the time of the Greeks but was also home to many medical practitioners and philosophers. Great patronage of the arts and sciences from the Ptolemaic dynasty of Egypt helped raise Alexandria up, further rivalling other Greek states' cultural and scientific achievements.
Some of the most striking advances in early anatomy and physiology took place in Hellenistic Alexandria. Two of the most famous anatomists and physiologists of the third century BCE were Herophilus and Erasistratus. These two physicians helped pioneer human dissection for medical research, using the cadavers of condemned criminals, which was considered taboo until the Renaissance; Herophilus was recognized as the first person to perform systematic dissections. Herophilus became known for his anatomical works, making impressive contributions to many branches of anatomy and many other aspects of medicine. His works included classifying the system of the pulse, the discovery that human arteries had thicker walls than veins, and the recognition that the atria were parts of the heart. Herophilus's knowledge of the human body provided vital input towards understanding the brain, eye, liver, reproductive organs, and nervous system, and towards characterizing the course of disease. Erasistratus accurately described the structure of the brain, including the cavities and membranes, and made a distinction between its cerebrum and cerebellum. During his study in Alexandria, Erasistratus was particularly concerned with studies of the circulatory and nervous systems. He could distinguish the human body's sensory and motor nerves and believed that air entered the lungs and heart, and was then carried throughout the body. His distinction between the arteries and veins, with the arteries carrying the air through the body while the veins carry the blood from the heart, was a great anatomical discovery. Erasistratus was also responsible for naming and describing the function of the epiglottis and the heart's valves, including the tricuspid. During the third century BCE, Greek physicians were able to differentiate nerves from blood vessels and tendons and to realize that nerves convey neural impulses. It was Herophilus who made the point that damage to motor nerves induced paralysis. Herophilus named the meninges and ventricles in the brain, appreciated the division between cerebellum and cerebrum, and recognized that the brain was the "seat of intellect" and not a "cooling chamber" as propounded by Aristotle. Herophilus is also credited with describing the optic, oculomotor, motor division of the trigeminal, facial, vestibulocochlear and hypoglossal nerves.
Incredible feats were made during the third century BCE in both the digestive and reproductive systems. Herophilus discovered and described not only the salivary glands but also the small intestine and liver. He showed that the uterus is a hollow organ and described the ovaries and uterine tubes. He recognized that spermatozoa were produced by the testes and was the first to identify the prostate gland.
The anatomy of the muscles and skeleton is described in the Hippocratic Corpus, an Ancient Greek medical work written by unknown authors. Aristotle described vertebrate anatomy based on animal dissection. Praxagoras identified the difference between arteries and veins. Also in the 4th century BCE, Herophilos and Erasistratus produced more accurate anatomical descriptions based on vivisection of criminals in Alexandria during the Ptolemaic period.
In the 2nd century, Galen of Pergamum, an anatomist, clinician, writer, and philosopher, wrote the final and highly influential anatomy treatise of ancient times. He compiled existing knowledge and studied anatomy through the dissection of animals. He was one of the first experimental physiologists through his vivisection experiments on animals. Galen's drawings, based mostly on dog anatomy, became effectively the only anatomical textbook for the next thousand years. His work was known to Renaissance doctors only through Islamic Golden Age medicine until it was translated from Greek sometime in the 15th century.
Medieval to early modern
Anatomy developed little from classical times until the sixteenth century; as the historian Marie Boas writes, "Progress in anatomy before the sixteenth century is as mysteriously slow as its development after 1500 is startlingly rapid". Between 1275 and 1326, the anatomists Mondino de Luzzi, Alessandro Achillini and Antonio Benivieni at Bologna carried out the first systematic human dissections since ancient times. Mondino's Anatomy of 1316 was the first textbook in the medieval rediscovery of human anatomy. It describes the body in the order followed in Mondino's dissections, starting with the abdomen, thorax, head, and limbs. It was the standard anatomy textbook for the next century.
Leonardo da Vinci (1452–1519) was trained in anatomy by Andrea del Verrocchio. He made use of his anatomical knowledge in his artwork, making many sketches of skeletal structures, muscles and organs of humans and other vertebrates that he dissected.
Andreas Vesalius (1514–1564), professor of anatomy at the University of Padua, is considered the founder of modern human anatomy. Originally from Brabant, Vesalius published the influential book De humani corporis fabrica ("the structure of the human body"), a large format book in seven volumes, in 1543. The accurate and intricately detailed illustrations, often in allegorical poses against Italianate landscapes, are thought to have been made by the artist Jan van Calcar, a pupil of Titian.
In England, anatomy was the subject of the first public lectures given in any science; these were provided by the Company of Barbers and Surgeons in the 16th century, joined in 1583 by the Lumleian lectures in surgery at the Royal College of Physicians.
Late modern
Medical schools began to be set up in the United States towards the end of the 18th century. Classes in anatomy needed a continual stream of cadavers for dissection, and these were difficult to obtain. Philadelphia, Baltimore, and New York were all renowned for body snatching activity as criminals raided graveyards at night, removing newly buried corpses from their coffins. A similar problem existed in Britain where demand for bodies became so great that grave-raiding and even anatomy murder were practised to obtain cadavers. Some graveyards were, in consequence, protected with watchtowers. The practice was halted in Britain by the Anatomy Act of 1832, while in the United States, similar legislation was enacted after the physician William S. Forbes of Jefferson Medical College was found guilty in 1882 of "complicity with resurrectionists in the despoliation of graves in Lebanon Cemetery".
The teaching of anatomy in Britain was transformed by Sir John Struthers, Regius Professor of Anatomy at the University of Aberdeen from 1863 to 1889. He was responsible for setting up the system of three years of "pre-clinical" academic teaching in the sciences underlying medicine, including especially anatomy. This system lasted until the reform of medical training in 1993 and 2003. As well as teaching, he collected many vertebrate skeletons for his museum of comparative anatomy, published over 70 research papers, and became famous for his public dissection of the Tay Whale. From 1822 the Royal College of Surgeons regulated the teaching of anatomy in medical schools. Medical museums provided examples in comparative anatomy, and were often used in teaching. Ignaz Semmelweis investigated puerperal fever and he discovered how it was caused. He noticed that the frequently fatal fever occurred more often in mothers examined by medical students than by midwives. The students went from the dissecting room to the hospital ward and examined women in childbirth. Semmelweis showed that when the trainees washed their hands in chlorinated lime before each clinical examination, the incidence of puerperal fever among the mothers could be reduced dramatically.
Before the modern medical era, the primary means for studying the internal structures of the body were dissection of the dead and inspection, palpation, and auscultation of the living. The advent of microscopy opened up an understanding of the building blocks that constituted living tissues. Technical advances in the development of achromatic lenses increased the resolving power of the microscope, and around 1839, Matthias Jakob Schleiden and Theodor Schwann identified that cells were the fundamental unit of organization of all living things. The study of small structures involved passing light through them, and the microtome was invented to provide sufficiently thin slices of tissue to examine. Staining techniques using artificial dyes were established to help distinguish between different tissue types. Advances in the fields of histology and cytology began in the late 19th century along with advances in surgical techniques allowing for the painless and safe removal of biopsy specimens. The invention of the electron microscope brought a significant advance in resolution power and allowed research into the ultrastructure of cells and the organelles and other structures within them. About the same time, in the 1950s, the use of X-ray diffraction for studying the crystal structures of proteins, nucleic acids, and other biological molecules gave rise to a new field of molecular anatomy.
Equally important advances have occurred in non-invasive techniques for examining the body's interior structures. X-rays can be passed through the body and used in medical radiography and fluoroscopy to differentiate interior structures that have varying degrees of opaqueness. Magnetic resonance imaging, computed tomography, and ultrasound imaging have all enabled the examination of internal structures in unprecedented detail to a degree far beyond the imagination of earlier generations.
See also
Anatomical model
Outline of human anatomy
Plastination
Evelyn tables
References
External links
Anatomy, In Our Time. BBC Radio 4. Melvyn Bragg with guests Ruth Richardson, Andrew Cunningham and Harold Ellis.
"Anatomy of the Human Body". 20th edition. 1918. Henry Gray
Anatomia Collection: anatomical plates 1522 to 1867 (digitized books and images)
Lyman, Henry Munson. The Book of Health (1898). Science History Institute Digital Collections .
Gunther von Hagens True Anatomy for New Ways of Teaching.
Sources
Anatomical terminology
Branches of biology
Morphology (biology) | Anatomy | [
"Biology"
] | 8,503 | [
"Anatomy",
"nan",
"Morphology (biology)"
] |
677 | https://en.wikipedia.org/wiki/Ambiguity | Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making for several interpretations; others describe it as a concept or statement that has no real reference. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of "two", as in "two meanings").
The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity.
Linguistic forms
Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.
Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system that is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system.
Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance.
Lexical ambiguity
The lexical ambiguity of a word or phrase applies to it having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy).
The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I put $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to make a used word clearer.
Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation.
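A minimal sketch of such a method, assuming invented dictionary glosses rather than a real lexical resource, is a Lesk-style disambiguator that picks the sense whose gloss shares the most words with the surrounding context:

```python
# Simplified Lesk-style word-sense disambiguation: choose the sense whose
# dictionary gloss shares the most words with the surrounding context.
# The glosses below are invented stand-ins for a real lexical resource.
SENSES = {
    "financial institution": "an institution where people deposit and borrow money",
    "river edge": "the sloping land alongside a river or other body of water",
}

def disambiguate(context: str) -> str:
    context_words = set(context.lower().split())
    def overlap(gloss: str) -> int:
        return len(context_words & set(gloss.lower().split()))
    return max(SENSES, key=lambda sense: overlap(SENSES[sense]))

print(disambiguate("I put money in the bank and earned interest"))
# -> financial institution
print(disambiguate("we sat on the grassy bank of the river and watched the water"))
# -> river edge
```

Real word-sense disambiguation systems rely on large sense inventories and statistical or neural models, but the underlying idea of scoring candidate senses against context is the same.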
The use of multi-defined words requires the author or speaker to clarify their context, and sometimes elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from his or her candidate of choice. Ambiguity is a powerful tool of political science.
More problematic are words whose multiple meanings express closely related concepts. "Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good person versus the lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being opened" or "impossible to lock").
Semantic and syntactic ambiguity
Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either
to the person's bird (the noun "duck", modified by the possessive pronoun "her"), or
to a motion she made (the verb "duck", the subject of which is the objective pronoun "her", object of the verb "saw").
Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity.
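Programming languages resolve exactly this kind of ambiguity with fixed precedence rules. A small sketch (in Python, with hypothetical boolean flags) shows that an unparenthesized expression is read as "$10 OR (voucher AND license)", matching the first reading above, while the second reading requires explicit parentheses:

```python
# Hypothetical flags for the sentence "To get in, you will need an entrance
# fee of $10 or your voucher and your drivers' license."
has_ten_dollars, has_voucher, has_license = True, False, False

# Python gives 'and' higher precedence than 'or', so without parentheses the
# expression is read as: $10 OR (voucher AND license).
reading_1 = has_ten_dollars or has_voucher and has_license

# The other reading, license AND ($10 OR voucher), needs explicit parentheses.
reading_2 = (has_ten_dollars or has_voucher) and has_license

print(reading_1)  # True: the $10 alone suffices under the first reading
print(reading_2)  # False: the license is mandatory under the second reading
```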
For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar.
Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?"
Spoken language can contain many more types of ambiguities that are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen.
Philosophy
Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases.
In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity.
Literature and rhetoric
In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". An additional example of ambiguous humor comes from Shakespeare's Othello:
Cassio: Dost thou hear, my honest friend?
Clown: No, I hear not your honest friend. I hear you. (Othello, III, i)
Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness).
In the narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel The Great Gatsby.
Mathematical notation
Mathematical notation is a helpful tool that eliminates a lot of misunderstandings associated with natural language in physics and other sciences. Nonetheless, there are still some inherent ambiguities due to lexical, syntactic, and semantic reasons that persist in mathematical notation.
Names of functions
The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, the conversion to another notation requires scaling the argument or the resulting value; sometimes, the same name of the function is used, causing confusion; the sketch after the list below illustrates one such case. Examples of such underestablished functions:
Sinc function
Elliptic integral of the third kind; when translating an elliptic integral from MAPLE to Mathematica, one should replace the second argument by its square; when dealing with complex values, this may cause problems.
Exponential integral
Hermite polynomial
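For instance, the sinc function appears under two common conventions, sin(x)/x and the normalized sin(πx)/(πx), and converting between them requires rescaling the argument. A minimal sketch, assuming only NumPy (whose numpy.sinc implements the normalized convention):

```python
import numpy as np

def sinc_unnormalized(x):
    # sin(x)/x convention, with the removable singularity at x = 0 handled.
    x = np.asarray(x, dtype=float)
    return np.where(x == 0.0, 1.0, np.sin(x) / np.where(x == 0.0, 1.0, x))

# numpy.sinc uses the normalized convention sin(pi*x)/(pi*x).
x = 0.5
print(sinc_unnormalized(x))   # ~0.9589, i.e. sin(0.5)/0.5
print(np.sinc(x))             # ~0.6366, i.e. sin(pi/2)/(pi/2)

# Converting between the two conventions requires rescaling the argument:
print(np.isclose(sinc_unnormalized(np.pi * x), np.sinc(x)))  # True
```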
Expressions
Ambiguous expressions often appear in physical and mathematical texts.
It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, f = f(x). Then, if one sees f(y + 1), there is no way to distinguish whether it means f multiplied by (y + 1) or the function f evaluated at argument equal to y + 1. In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning.
Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (C++ and Fortran) require the character * as the symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, a function and a variable; in particular, an expression such as f = f(x) is qualified as an error.
The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorial conventions assumed that multiplication is performed first, so that, for example, a/bc is interpreted as a/(bc); in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity.
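A minimal sketch of the difference between the two conventions, using the left-to-right rule that Python, C and Fortran share (the variable names are arbitrary):

```python
a, b, c = 12, 2, 3

# Left-to-right rule used by most programming languages:
left_to_right = a / b * c           # (12 / 2) * 3 = 18.0

# "Multiplication first" convention of some older editorial styles,
# which must be written with explicit parentheses in code:
multiplication_first = a / (b * c)  # 12 / (2 * 3) = 2.0

print(left_to_right, multiplication_first)  # 18.0 2.0
```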
In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics.
For example, in mathematical journals the expression sin does not denote the sine function, but the product of the three variables s, i and n, although in the informal notation of a slide presentation it may stand for the sine.
Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation.
For example, in the notation the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to product of variables and or it is an indication to a trivalent tensor.
Examples of potentially confusing ambiguous mathematical expressions
An expression such as can be understood to mean either or Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing
The expression means in several texts, though it might be thought to mean since commonly means Conversely, might seem to mean as this exponentiation notation usually denotes function iteration: in general, means However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application.
The expression can be interpreted as meaning however, it is more commonly understood to mean
Notations in quantum optics and quantum mechanics
It is common to define the coherent states in quantum optics with |α⟩ and states with a fixed number of photons with |n⟩. Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and an n-photon state if the Latin characters dominate. The ambiguity becomes even worse if |x⟩ is used for the states with a certain value of the coordinate, and |p⟩ means the state with a certain value of the momentum, which may be used in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if normalized dimensionless variables are used. The expression |1⟩ may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on. The reader is supposed to guess from the context.
Ambiguous terms in physics and mathematics
Some physical quantities do not yet have established notations; their value (and sometimes even dimension, as in the case of the Einstein coefficients), depends on the system of notations. Many terms are ambiguous. Each use of an ambiguous term should be preceded by the definition, suitable for a specific case. Just like Ludwig Wittgenstein states in Tractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning."
A highly confusing term is gain. For example, the sentence "the gain of a system should be doubled", without context, means close to nothing; a numerical sketch after the list below illustrates why.
It may mean that the ratio of the output voltage of an electric circuit to the input voltage should be doubled.
It may mean that the ratio of the output power of an electric or optical circuit to the input power should be doubled.
It may mean that the gain of the laser medium should be doubled, for example, doubling the population of the upper laser level in a quasi-two level system (assuming negligible absorption of the ground-state).
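A rough numerical illustration, assuming only the standard decibel definitions (20·log10 for amplitude ratios, 10·log10 for power ratios) and arbitrary example ratios, shows that "doubling the gain" corresponds to different changes in decibels depending on which meaning is intended:

```python
import math

def voltage_gain_db(ratio):
    # Amplitude (voltage) ratios are expressed as 20*log10(ratio) decibels.
    return 20 * math.log10(ratio)

def power_gain_db(ratio):
    # Power ratios are expressed as 10*log10(ratio) decibels.
    return 10 * math.log10(ratio)

# "Doubling the gain" from a ratio of 4 to a ratio of 8:
print(voltage_gain_db(8) - voltage_gain_db(4))  # ~6.02 dB if "gain" is a voltage ratio
print(power_gain_db(8) - power_gain_db(4))      # ~3.01 dB if "gain" is a power ratio
```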
The term intensity is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term.
Confusion may also arise from the use of atomic percent as a measure of the concentration of a dopant, or of resolution of an imaging system as a measure of the size of the smallest detail that can still be resolved against a background of statistical noise. See also Accuracy and precision.
The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal.
Mathematical interpretation of ambiguity
In mathematics and logic, ambiguity can be considered an instance of the logical concept of underdetermination—for example, $X = Y$ leaves open what the value of $X$ is—while overdetermination, except when merely redundant as in $X = 1, X = 1$, is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as $X = 2, X = 3$, which has no solution.
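A short worked example, using generic equations chosen purely for illustration, makes the distinction concrete:

```latex
% Underdetermined: one equation, two unknowns -- infinitely many solutions.
\[ x + y = 2 \quad\Longrightarrow\quad (x, y) = (t,\; 2 - t), \quad t \in \mathbb{R}. \]
% Overdetermined and inconsistent: no assignment satisfies both equations.
\[ x = 2 \;\wedge\; x = 3 \quad\Longrightarrow\quad \text{no solution (inconsistent system).} \]
% Overdetermined but merely redundant: the extra equation adds nothing.
\[ x = 2 \;\wedge\; 2x = 4 \quad\Longrightarrow\quad x = 2. \]
```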
Logical ambiguity and self-contradiction is analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher.
Constructed language
Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages created for this purpose, which also focus on eliminating syntactic ambiguity. They can be both spoken and written. These languages are intended to provide greater technical precision than large natural languages, although historically such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency, and their many exceptions to syntactic and semantic rules are time-consuming and difficult to learn.
Biology
In structural biology, ambiguity has been recognized as a problem for studying protein conformations. The analysis of a protein three-dimensional structure consists in dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments.
Christianity and Judaism
Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery that fascinates humans. The apocryphal Book of Judith is noted for the "ingenious ambiguity" expressed by its heroine; for example, she says to the villain of the story, Holofernes, "my lord will not fail to achieve his purposes", without specifying whether my lord refers to the villain or to God.
The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts that he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books, Orthodoxy (1908), itself employed such a paradox.
Music
In music, pieces or sections that confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value."
Visual art
In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception.
The opposite of such ambiguous images are impossible objects.
Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance?
Social psychology and the bystander effect
In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Conversely, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies.
Computer science
In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system, in which these prefixes unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense.
Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard—this led to a new ambiguity in engineering documents lacking outward trace of the binary prefixes (necessarily indicating the new style) as to whether the usage of k, M, and G remains ambiguous (old style) or not (new style). Note that 1 M (where M is ambiguously 1,000,000 or 1,048,576) is still less uncertain than the engineering value 1.0e6 (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices begin to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes.
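A small Python sketch (illustrative only) quantifies how the decimal/binary discrepancy grows with each prefix, which is why the ambiguity matters more for GB and TB than for kB:

```python
# Relative difference between the binary and decimal readings of each SI-style prefix.
prefixes = {"k": 1, "M": 2, "G": 3, "T": 4}

for name, power in prefixes.items():
    decimal = 1000 ** power     # metric reading, e.g. 1 GB = 10**9 bytes
    binary = 1024 ** power      # historical memory-industry reading, e.g. 1 GiB = 2**30 bytes
    discrepancy = (binary - decimal) / decimal
    print(f"1 {name}B: decimal={decimal}, binary={binary}, difference={discrepancy:.1%}")

# Output shows ~2.4% for kilo, ~4.9% for mega, ~7.4% for giga and ~10.0% for tera,
# i.e. the ambiguity starts to affect the second significant digit around the giga range.
```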
See also
References
External links
Collection of Ambiguous or Inconsistent/Incomplete Statements
Leaving out ambiguities when writing
Semantics
Mathematical notation
Concepts in epistemology
Barriers to critical thinking
Formal semantics (natural language) | Ambiguity | [
"Mathematics"
] | 4,904 | [
"nan"
] |
681 | https://en.wikipedia.org/wiki/Aardwolf | The aardwolf (Proteles cristatus) is an insectivorous hyaenid species, native to East and Southern Africa. Its name means "earth-wolf" in Afrikaans and Dutch. It is also called the maanhaar-jackal (Afrikaans for "mane-jackal"), termite-eating hyena and civet hyena, based on its habit of secreting substances from its anal gland, a characteristic shared with the African civet.
Unlike many of its relatives in the order Carnivora, the aardwolf does not hunt large animals. It eats insects and their larvae, mainly termites; one aardwolf can lap up as many as 300,000 termites during a single night using its long, sticky tongue. The aardwolf's tongue has adapted to be tough enough to withstand the strong bite of termites.
The aardwolf lives in the shrublands of eastern and southern Africa – open lands covered with stunted trees and shrubs. It is nocturnal, resting in burrows during the day and emerging at night to seek food.
Taxonomy
The aardwolf is generally classified as part of the hyena family Hyaenidae. However, it was formerly placed in its own family Protelidae. Early on, scientists felt that it was merely mimicking the striped hyena, which subsequently led to the creation of Protelidae. Recent studies have suggested that the aardwolf probably diverged from other hyaenids early on; how early is still unclear, as the fossil record and genetic studies disagree by 10 million years.
The aardwolf is the only surviving species in the subfamily Protelinae. There is disagreement as to whether the species is monotypic, or can be divided into subspecies. A 2021 study found that the genetic differences between eastern and southern aardwolves may be pronounced enough to categorize them as separate species.
A 2006 molecular analysis indicates it is phylogenetically the most basal of the four extant species of Hyaenidae.
Etymology
The generic name proteles comes from two words of Greek origin, protos and teleos, which combined mean "complete in front", based on the fact that aardwolves have five toes on their front feet and four on the rear. The specific name, cristatus, comes from Latin and means "provided with a comb", relating to their mane.
Description
The aardwolf resembles a much smaller and thinner striped hyena, with a more slender muzzle, black vertical stripes on a coat of yellowish fur, and a long, distinct mane down the midline of the neck and back. It also has one or two diagonal stripes down the fore and hindquarters and several stripes on its legs. The mane is raised during confrontations to make the aardwolf appear larger. It is missing the throat spot that others in the family have. Its lower leg (from the knee down) is all black, and its tail is bushy with a black tip.
The aardwolf is about long, excluding its bushy tail, which is about long, and stands about tall at the shoulders. An adult aardwolf weighs approximately , sometimes reaching . The aardwolves in the south of the continent tend to be smaller (about ) than the eastern version (around ). This makes the aardwolf the smallest extant member of the Hyaenidae family. The front feet have five toes each, unlike the four-toed hyena. The skull is similar in shape to those of other hyenas, though much smaller, and its cheek teeth are specialised for eating insects. It still has canines, but unlike other hyenas, these teeth are used primarily for fighting and defense. Its ears, which are large, are very similar to those of the striped hyena.
As an aardwolf ages, it will typically lose some of its teeth, though this has little impact on its feeding habits due to the softness of the insects that it eats.
Distribution and habitat
Aardwolves live in open, dry plains and bushland, avoiding mountainous areas. Due to their specific food requirements, they are found only in regions where termites of the family Hodotermitidae occur. Termites of this family depend on dead and withered grass and are most populous in heavily grazed grasslands and savannahs, including farmland. For most of the year, aardwolves spend time in shared territories consisting of up to a dozen dens, which are occupied for six weeks at a time.
There are two distinct populations: one in Southern Africa, and another in East and Northeast Africa. The species does not occur in the intermediary miombo forests.
An adult pair, along with their most-recent offspring, occupies a territory of .
Behavior and ecology
Aardwolves are shy and nocturnal, sleeping in burrows by day. They will, on occasion during the winter, become diurnal feeders. This happens during the coldest periods as they then stay in at night to conserve heat.
They are primarily solitary animals, though during mating season they form monogamous pairs which occupy a territory with their young. If their territory is infringed upon by another aardwolf, they will chase the intruder away for up to or to the border. If the intruder is caught, which rarely happens, a fight will occur, which is accompanied by soft clucking, hoarse barking, and a type of roar. The majority of incursions occur during mating season, when they can occur once or twice per week. When food is scarce, the stringent territorial system may be abandoned and as many as three pairs may occupy a single territory.
The territory is marked by both sexes, as they both have developed anal glands from which they extrude a black substance that is smeared on rocks or grass stalks in -long streaks. Aardwolves also have scent glands on the forefoot and penile pad. They often mark near termite mounds within their territory every 20 minutes or so. If they are patrolling their territorial boundaries, the marking frequency increases drastically, to once every . At this rate, an individual may leave 60 marks per hour, and upwards of 200 per night.
An aardwolf pair's territory may have up to 10 dens, and numerous middens where they dig small holes and bury their feces with sand. Their dens are usually abandoned aardvark, springhare, or porcupine dens, or on occasion they are crevices in rocks. They will also dig their own dens, or enlarge dens started by springhares. They typically will only use one or two dens at a time, rotating through all of their dens every six months. During the summer, they may rest outside their den during the night and sleep underground during the heat of the day.
Aardwolves are not fast runners nor are they particularly adept at fighting off predators. Therefore, when threatened, the aardwolf may attempt to mislead its foe by doubling back on its tracks. If confronted, it may raise its mane in an attempt to appear more menacing. It also emits a foul-smelling liquid from its anal glands.
Feeding
The aardwolf feeds primarily on termites and more specifically on Trinervitermes. This genus of termites has different species throughout the aardwolf's range. In East Africa, they eat Trinervitermes bettonianus, in central Africa, they eat Trinervitermes rhodesiensis, and in southern Africa, they eat T. trinervoides. Their technique consists of licking them off the ground as opposed to the aardvark, which digs into the mound. They locate their food by sound and also from the scent secreted by the soldier termites. An aardwolf may consume up to 250,000 termites per night using its long, broad, sticky tongue.
They do not destroy the termite mound or consume the entire colony, thus ensuring that the termites can rebuild and provide a continuous supply of food. They often memorize the location of such nests and return to them every few months. During certain seasonal events, such as the onset of the rainy season and the cold of midwinter, the primary termites become scarce, so the need for other foods becomes pronounced. During these times, the southern aardwolf will seek out Hodotermes mossambicus, a type of harvester termite active in the afternoon, which explains some of their diurnal behavior in the winter. The eastern aardwolf, during the rainy season, subsists on termites from the genera Odontotermes and Macrotermes. They are also known to feed on other insects and larvae, and, some sources mention, occasionally eggs, small mammals and birds, but these constitute a very small percentage of their total diet. They use their wide tongues to lap surface foraging termites off of the ground and consume large quantities of sand in the process, which aids in digestion in the absence of teeth to break down their food.
Unlike other hyenas, aardwolves do not scavenge or kill larger animals. Contrary to popular myths, aardwolves do not eat carrion, and if they are seen eating while hunched over a dead carcass, they are actually eating larvae and beetles. Also, contrary to some sources, they do not like meat, unless it is finely ground or cooked for them. The adult aardwolf was formerly assumed to forage in small groups, but more recent research has shown that they are primarily solitary foragers, necessary because of the scarcity of their insect prey. Their primary source, Trinervitermes, forages in small but dense patches of . While foraging, the aardwolf can cover about per hour, which translates to per summer night and per winter night.
Breeding
The breeding season varies depending on location, but normally takes place during autumn or spring. In South Africa, breeding occurs in early July. During the breeding season, unpaired male aardwolves search their own territory, as well as others, for a female to mate with. Dominant males also mate opportunistically with the females of less dominant neighboring aardwolves, which can result in conflict between rival males. Dominant males go a step further: as the breeding season approaches, they make increasingly frequent incursions onto weaker males' territories, and as the females come into oestrus they also scent-mark ("paste") inside those territories, sometimes doing so more there than in their own. Females will also, when given the opportunity, mate with the dominant male, which increases the chances of the dominant male guarding "his" cubs with her. Copulation lasts between 1 and 4.5 hours.
Gestation lasts between 89 and 92 days, producing two to five cubs (most often two or three) during the rainy season (October–December), when termites are more active. They are born with their eyes open, but initially are helpless, and weigh around . The first six to eight weeks are spent in the den with their parents. The male may spend up to six hours a night watching over the cubs while the mother is out looking for food. After three months, they begin supervised foraging, and by four months are normally independent, though they often share a den with their mother until the next breeding season. By the time the next set of cubs is born, the older cubs have moved on. Aardwolves generally achieve sexual maturity at one and a half to two years of age.
Conservation
The aardwolf has not seen decreasing numbers and is relatively widespread throughout eastern Africa. They are not common throughout their range, as they maintain a density of no more than 1 per square kilometer, if food is abundant. Because of these factors, the IUCN has rated the aardwolf as least concern. In some areas, they are persecuted because of the mistaken belief that they prey on livestock; however, they are actually beneficial to the farmers because they eat termites that are detrimental. In other areas, the farmers have recognized this, but they are still killed, on occasion, for their fur. Dogs and insecticides are also common killers of the aardwolf.
In captivity
Frankfurt Zoo in Germany was home to the oldest recorded aardwolf in captivity at 18 years and 11 months.
Notes
References
Sources
Further reading
External links
Animal Diversity Web
IUCN Hyaenidae Specialist Group Aardwolf pages on hyaenidae.org
Cam footage from the Namib desert https://m.youtube.com/watch?v=lRevqS6Pxgg
Mammals described in 1783
Carnivorans of Africa
Hyenas
Mammals of Southern Africa
Fauna of East Africa
Myrmecophagous mammals
Taxa named by Anders Sparrman
Nocturnal animals | Aardwolf | [
"Biology"
] | 2,670 | [
"Nocturnal animals",
"Animals"
] |
682 | https://en.wikipedia.org/wiki/Adobe | Adobe ( ; ) is a building material made from earth and organic materials. is Spanish for mudbrick. In some English-speaking regions of Spanish heritage, such as the Southwestern United States, the term is used to refer to any kind of earthen construction, or various architectural styles like Pueblo Revival or Territorial Revival. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials, and is used throughout the world.
Adobe architecture has been dated to before 5,100 BP.
Description
Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions. In some areas a popular size measured weighing about ; in other contexts the size is weighing about . The maximum sizes can reach up to ; above this weight it becomes difficult to move the pieces, and it is preferred to ram the mud in situ, resulting in a different typology known as rammed earth.
Strength
In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake.
Distribution
Buildings made of sun-dried earth are common throughout the world (Middle East, Western Asia, North Africa, West Africa, South America, Southwestern North America, Southwestern and Eastern Europe). Adobe had been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Andes for several thousand years. Puebloan peoples built their adobe structures with handfuls or basketfuls of adobe, until the Spanish introduced them to making bricks. Adobe bricks were used in Spain from the Late Bronze and Iron Ages (eighth century BCE onwards). Its wide use can be attributed to its simplicity of design and manufacture, and economics.
Etymology
The word adobe has existed for around 4,000 years with relatively little change in either pronunciation or meaning. The word can be traced from the Middle Egyptian word ḏbt "mud brick" (with vowels unwritten). Middle Egyptian evolved into Late Egyptian and finally to Coptic, where it appeared as ⲧⲱⲃⲉ tōbə. This was adopted into Arabic as aṭ-ṭawbu or aṭ-ṭūbu, with the definite article al- attached to the root tuba. This was assimilated into the Old Spanish language as adobe, probably via Mozarabic. English borrowed the word from Spanish in the early 18th century, still referring to mudbrick construction.
In more modern English usage, the term adobe has come to include a style of architecture popular in the desert climates of North America, especially in New Mexico, regardless of the construction method.
Composition
An adobe brick is a composite material made of earth mixed with water and an organic material such as straw or dung. The soil composition typically contains sand, silt and clay. Straw is useful in binding the brick together and allowing the brick to dry evenly, thereby preventing cracking due to uneven shrinkage rates through the brick. Dung offers the same advantage. The most desirable soil texture for producing the mud of adobe is 15% clay, 10–30% silt, and 55–75% fine sand. Another source quotes 15–25% clay and the remainder sand and coarser particles up to cobbles , with no deleterious effect. Modern adobe is stabilized with either emulsified asphalt or Portland cement up to 10% by weight.
No more than half the clay content should be expansive clays, with the remainder non-expansive illite or kaolinite. Too much expansive clay results in uneven drying through the brick, resulting in cracking, while too much kaolinite will make a weak brick. Typically the soils of the Southwest United States, where such construction has been widely used, are an adequate composition.
Material properties
Adobe walls are load bearing, i.e. they carry their own weight into the foundation rather than by another structure, hence the adobe must have sufficient compressive strength. In the United States, most building codes call for a minimum compressive strength of for the adobe block. Adobe construction should be designed so as to avoid lateral structural loads that would cause bending loads. The building codes require the building sustain a lateral acceleration earthquake load. Such an acceleration will cause lateral loads on the walls, resulting in shear and bending and inducing tensile stresses. To withstand such loads, the codes typically call for a tensile modulus of rupture strength of at least for the finished block.
In addition to being an inexpensive material with a small resource cost, adobe can serve as a significant heat reservoir due to the thermal properties inherent in the massive walls typical in adobe construction. In climates typified by hot days and cool nights, the high thermal mass of adobe mediates the high and low temperatures of the day, moderating the temperature of the living space. The massive walls require a large and relatively long input of heat from the sun (radiation) and from the surrounding air (convection) before they warm through to the interior. After the sun sets and the temperature drops, the warm wall will continue to transfer heat to the interior for several hours due to the time-lag effect. Thus, a well-planned adobe wall of the appropriate thickness is very effective at controlling inside temperature through the wide daily fluctuations typical of desert climates, a factor which has contributed to its longevity as a building material.
Thermodynamic material properties vary significantly in the literature. Some experiments suggest that the standard consideration of conductivity is not adequate for this material, as its main thermodynamic property is inertia, and conclude that experimental tests should be performed over a longer period of time than usual – preferably with changing thermal jumps. One source gives an effective R-value for a north-facing wall of R0 = 10 hr·ft²·°F/Btu, which corresponds to a thermal conductivity of k = 10 in × (1 ft/12 in)/R0 = 0.33 Btu/(hr·ft·°F) or 0.57 W/(m·K), in agreement with the thermal conductivity reported by another source. To determine the total R-value of a wall, scale R0 by the thickness of the wall in inches. The thermal resistance of adobe is also stated as an R-value for a wall of R0 = 4.1 hr·ft²·°F/Btu. Another source provides the following properties: conductivity 0.30 Btu/(hr·ft·°F) or 0.52 W/(m·K); specific heat capacity 0.24 Btu/(lb·°F) or 1 kJ/(kg·K); and a density of about 106 lb/ft³ (1700 kg/m³), giving a volumetric heat capacity of 25.4 Btu/(ft³·°F) or 1700 kJ/(m³·K). Using the average thermal conductivity, k = 0.32 Btu/(hr·ft·°F) or 0.55 W/(m·K), the thermal diffusivity is calculated to be approximately 0.013 ft²/hr (about 3.2 × 10⁻⁷ m²/s).
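As a quick sanity check on the arithmetic above, here is a minimal Python sketch, using only the rounded property values quoted in this paragraph, that recovers the implied density and the thermal diffusivity in both unit systems:

```python
# Rounded average conductivity quoted above.
k_btu = 0.32         # Btu/(hr*ft*degF)
k_si = 0.55          # W/(m*K)

# Volumetric heat capacity quoted above, and the density it implies.
c_btu = 0.24         # specific heat, Btu/(lb*degF)
vol_heat_btu = 25.4  # Btu/(ft^3*degF)
rho_lb = vol_heat_btu / c_btu        # ~106 lb/ft^3

c_si = 1.0e3         # specific heat, J/(kg*K)
vol_heat_si = 1.7e6  # J/(m^3*K), i.e. 1700 kJ/(m^3*K)
rho_si = vol_heat_si / c_si          # 1700 kg/m^3

# Thermal diffusivity: alpha = k / (volumetric heat capacity).
alpha_imperial = k_btu / vol_heat_btu    # ~0.013 ft^2/hr
alpha_si = k_si / vol_heat_si            # ~3.2e-7 m^2/s

print(f"implied density: {rho_lb:.0f} lb/ft^3, {rho_si:.0f} kg/m^3")
print(f"thermal diffusivity: {alpha_imperial:.4f} ft^2/hr, {alpha_si:.2e} m^2/s")
```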
Uses
Poured and puddled adobe walls
Poured and puddled adobe (puddled clay, piled earth), today called cob, is made by placing soft adobe in layers, rather than by making individual dried bricks or using a form. "Puddle" is a general term for a clay or clay and sand-based material worked into a dense, plastic state. These are the oldest methods of building with adobe in the Americas until holes in the ground were used as forms, and later wooden forms used to make individual bricks were introduced by the Spanish.
Adobe bricks
Bricks made from adobe are usually made by pressing the mud mixture into an open timber frame. In North America, the brick is typically about in size. The mixture is molded into the frame, which is removed after initial setting. After drying for a few hours, the bricks are turned on edge to finish drying. Slow drying in shade reduces cracking.
The same mixture, without straw, is used to make mortar and often plaster on interior and exterior walls. Some cultures used lime-based cement for the plaster to protect against rain damage.
Depending on the form into which the mixture is pressed, adobe can encompass nearly any shape or size, provided drying is even and the mixture includes reinforcement for larger bricks. Reinforcement can include manure, straw, cement, rebar, or wooden posts. Straw, cement, or manure added to a standard adobe mixture can produce a stronger, more crack-resistant brick. A test is done on the soil content first. To do so, a sample of the soil is mixed into a clear container with some water, creating an almost completely saturated liquid. The container is shaken vigorously for one minute. It is then allowed to settle for a day until the soil has settled into layers. Heavier particles settle out first, sand above, silt above that, and very fine clay and organic matter will stay in suspension for days. After the water has cleared, percentages of the various particles can be determined. Fifty to 60 percent sand and 35 to 40 percent clay will yield strong bricks. The Cooperative State Research, Education, and Extension Service at New Mexico State University recommends a mix of not more than clay, not less than sand, and never more than silt.
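The percentages from such a jar test are simply the settled layer thicknesses divided by the total sediment height. A minimal sketch of that arithmetic follows; the layer measurements are hypothetical example numbers, not values from any source.

```python
def soil_fractions(sand_mm, silt_mm, clay_mm):
    """Return (sand, silt, clay) percentages from settled layer thicknesses."""
    total = sand_mm + silt_mm + clay_mm
    return tuple(round(100 * part / total, 1) for part in (sand_mm, silt_mm, clay_mm))

# Hypothetical jar-test readings, in millimetres of settled sediment.
sand_pct, silt_pct, clay_pct = soil_fractions(sand_mm=55, silt_mm=8, clay_mm=37)

# Roughly 50-60% sand and 35-40% clay is quoted above as yielding strong bricks.
suitable = 50 <= sand_pct <= 60 and 35 <= clay_pct <= 40
print(sand_pct, silt_pct, clay_pct, "suitable" if suitable else "adjust the mix")
```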
During the Great Depression, designer and builder Hugh W. Comstock used cheaper materials and made a specialized adobe brick called "Bitudobe." His first adobe house was built in 1936. In 1948, he published the book Post-Adobe; Simplified Adobe Construction Combining A Rugged Timber Frame And Modern Stabilized Adobe, which described his method of construction, including how to make "Bitudobe." In 1938, he served as an adviser to the architects Franklin & Kump Associates, who built the Carmel High School, which used his Post-adobe system.
Adobe wall construction
The ground supporting an adobe structure should be compressed, as the weight of adobe wall is significant and foundation settling may cause cracking of the wall. Footing depth is to be below the ground frost level. The footing and stem wall are commonly thick, respectively. Modern construction codes call for the use of reinforcing steel in the footing and stem wall. Adobe bricks are laid by course. Adobe walls usually never rise above two stories as they are load bearing and adobe has low structural strength. When creating window and door openings, a lintel is placed on top of the opening to support the bricks above. Atop the last courses of brick, bond beams made of heavy wood beams or modern reinforced concrete are laid to provide a horizontal bearing plate for the roof beams and to redistribute lateral earthquake loads to shear walls more able to carry the forces. To protect the interior and exterior adobe walls, finishes such as mud plaster, whitewash or stucco can be applied. These protect the adobe wall from water damage, but need to be reapplied periodically. Alternatively, the walls can be finished with other nontraditional plasters that provide longer protection. Bricks made with stabilized adobe generally do not need protection of plasters.
Adobe roof
The traditional adobe roof has been constructed using a mixture of soil/clay, water, sand and organic materials. The mixture was then formed and pressed into wood forms, producing rows of dried earth bricks that would then be laid across a support structure of wood and plastered into place with more adobe.
Depending on the materials available, a roof may be assembled using wood or metal beams to create a framework to begin layering adobe bricks. Depending on the thickness of the adobe bricks, the framework has been preformed using a steel framing and a layering of a metal fencing or wiring over the framework to allow an even load as masses of adobe are spread across the metal fencing like cob and allowed to air dry accordingly. This method was demonstrated with an adobe blend heavily impregnated with cement to allow even drying and prevent cracking.
The more traditional flat adobe roofs are functional only in dry climates that are not exposed to snow loads. The heaviest wooden beams, called vigas, lie atop the wall. Across the vigas lie smaller members called latillas and upon those brush is then laid. Finally, the adobe layer is applied.
To construct a flat adobe roof, beams of wood were laid to span the building, the ends of which were attached to the tops of the walls. Once the vigas, latillas and brush are laid, adobe bricks are placed. An adobe roof is often laid with bricks slightly larger in width to ensure a greater expanse is covered when placing the bricks onto the roof. Following each individual brick should be a layer of adobe mortar, recommended to be at least thick to make certain there is ample strength between the brick's edges and also to provide a relative moisture barrier during rain.
Roof design evolved around 1850 in the American Southwest. of adobe mud was applied on top of the latillas, then of dry adobe dirt applied to the roof. The dirt was contoured into a low slope to a downspout aka a 'canal'. When moisture was applied to the roof the clay particles expanded to create a waterproof membrane. Once a year it was necessary to pull the weeds from the roof and re-slope the dirt as needed.
Depending on the materials, adobe roofs can be inherently fire-proof. The construction of a chimney can greatly influence the construction of the roof supports, creating an extra need for care in choosing the materials. The builders can make an adobe chimney by stacking simple adobe bricks in a similar fashion as the surrounding walls.
In 1927, the Uniform Building Code (UBC) was adopted in the United States. Local ordinances, referencing the UBC added requirements to building with adobe. These included: restriction of building height of adobe structures to 1-story, requirements for adobe mix (compressive and shear strength) and new requirements which stated that every building shall be designed to withstand seismic activity, specifically lateral forces. By the 1980s however, seismic related changes in the California Building Code effectively ended solid wall adobe construction in California; however Post-and-Beam adobe and veneers are still being used.
Adobe around the world
The largest structure ever made from adobe is the Arg-é Bam built by the Achaemenid Empire. Other large adobe structures are the Huaca del Sol in Peru, with 100 million signed bricks and the ciudellas of Chan Chan and Tambo Colorado, both in Peru.
See also
used adobe walls
(waterproofing plaster)
Taq Kasra (also known as the Ctesiphon Arch) in Iraq is the largest mud brick arch in the world, built beginning in 540 AD
References
External links
Soil-based building materials
Masonry
Adobe buildings and structures
Appropriate technology
Vernacular architecture
Sustainable building
Western (genre) staples and terminology | Adobe | [
"Engineering"
] | 3,030 | [
"Construction",
"Sustainable building",
"Masonry",
"Building engineering"
] |
713 | https://en.wikipedia.org/wiki/Android%20%28robot%29 | An android is a humanoid robot or other artificial being, often made from a flesh-like material. Historically, androids existed only in the domain of science fiction and were frequently seen in film and television, but advances in robot technology have allowed the design of functional and realistic humanoid robots.
Terminology
The Oxford English Dictionary traces the earliest use (as "Androides") to Ephraim Chambers' 1728 Cyclopaedia, in reference to an automaton that St. Albertus Magnus allegedly created. By the late 1700s, "androides", elaborate mechanical devices resembling humans performing human activities, were displayed in exhibit halls.
The term "android" appears in US patents as early as 1863 in reference to miniature human-like toy automatons. The term android was used in a more modern sense by the French author Auguste Villiers de l'Isle-Adam in his work Tomorrow's Eve (1886), featuring an artificial humanoid robot named Hadaly. The term made an impact into English pulp science fiction starting from Jack Williamson's The Cometeers (1936) and the distinction between mechanical robots and fleshy androids was popularized by Edmond Hamilton's Captain Future stories (1940–1944).
Although Karel Čapek's robots in R.U.R. (Rossum's Universal Robots) (1921)—the play that introduced the word robot to the world—were organic artificial humans, the word "robot" has come to primarily refer to mechanical humans, animals, and other beings. The term "android" can mean either one of these, while a cyborg ("cybernetic organism" or "bionic man") would be a creature that is a combination of organic and mechanical parts.
The term "droid", popularized by George Lucas in the original Star Wars film and now used widely within science fiction, originated as an abridgment of "android", but has been used by Lucas and others to mean any robot, including distinctly non-human form machines like R2-D2. The word "android" was used in Star Trek: The Original Series episode "What Are Little Girls Made Of?" The abbreviation "andy", coined as a pejorative by writer Philip K. Dick in his novel Do Androids Dream of Electric Sheep?, has seen some further usage, such as within the TV series Total Recall 2070.
While the term "android" is used in reference to human-looking robots in general (not necessarily male-looking humanoid robots), a robot with a female appearance can also be referred to as a gynoid. Besides one can refer to robots without alluding to their sexual appearance by calling them anthrobots (a portmanteau of anthrōpos and robot; see anthrobotics) or anthropoids (short for anthropoid robots; the term humanoids is not appropriate because it is already commonly used to refer to human-like organic species in the context of science fiction, futurism and speculative astrobiology).
Authors have used the term android in more diverse ways than robot or cyborg. In some fictional works, the difference between a robot and android is only superficial, with androids being made to look like humans on the outside but with robot-like internal mechanics. In other stories, authors have used the word "android" to mean a wholly organic, yet artificial, creation. Other fictional depictions of androids fall somewhere in between.
Eric G. Wilson, who defines an android as a "synthetic human being", distinguishes between three types of android, based on their body's composition:
the mummy type – made of "dead things" or "stiff, inanimate, natural material", such as mummies, puppets, dolls and statues
the golem type – made from flexible, possibly organic material, including golems and homunculi
the automaton type – made from a mix of dead and living parts, including automatons and robots
Although human morphology is not necessarily the ideal form for working robots, the fascination in developing robots that can mimic it can be found historically in the assimilation of two concepts: simulacra (devices that exhibit likeness) and automata (devices that have independence).
Projects
Several projects aiming to create androids that look, and, to a certain degree, speak or act like a human being have been launched or are underway.
Japan
Japanese robotics have been leading the field since the 1970s. Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the first android, a full-scale humanoid intelligent robot. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.
In 1984, WABOT-2 was revealed, and made a number of improvements. It was capable of playing the organ. Wabot-2 had ten fingers and two feet, and was able to read a score of music. It was also able to accompany a person. In 1986, Honda began its humanoid research and development program, to create humanoid robots capable of interacting successfully with humans.
The Intelligent Robotics Lab, directed by Hiroshi Ishiguro at Osaka University, and the Kokoro company demonstrated the Actroid at Expo 2005 in Aichi Prefecture, Japan, and released the Telenoid R1 in 2010. In 2006, Kokoro developed a new DER 2 android. The human body part of DER2 is 165 cm tall and has 47 mobile points. DER2 can not only change its expression but also move its hands and feet and twist its body. The actuator uses the "air servosystem" that Kokoro originally developed; because the actuator is controlled precisely with air pressure via a servosystem, the movement is very fluid and produces very little noise. DER2 achieved a slimmer body than the former version by using a smaller cylinder, giving it better outward proportions; compared to the previous model, it has thinner arms and a wider repertoire of expressions. Once programmed, it is able to choreograph its motions and gestures with its voice.
The Intelligent Mechatronics Lab, directed by Hiroshi Kobayashi at the Tokyo University of Science, has developed an android head called Saya, which was exhibited at Robodex 2002 in Yokohama, Japan. There are several other initiatives around the world involving humanoid research and development at this time, which will hopefully introduce a broader spectrum of realized technology in the near future. Now Saya is working at the Science University of Tokyo as a guide.
Waseda University (Japan) and NTT docomo's manufacturers have succeeded in creating a shape-shifting robot, WD-2, which is capable of changing its face. The creators first decide the positions of the points needed to express the outline, eyes, nose, and so on of a certain person; the robot then expresses that face by moving all points to the decided positions. The first version of the robot was developed in 2003, and a year later a couple of major improvements were made to the design. The robot features an elastic mask made from an average head dummy and uses a driving system with a 3-DOF unit. WD-2 can change its facial features by activating specific facial points on the mask, with each point possessing three degrees of freedom; it has 17 facial points, for a total of 56 degrees of freedom. The mask is fabricated from a highly elastic material called Septom, with bits of steel wool mixed in for added strength. A shaft is driven behind the mask at each desired facial point by a DC motor with a simple pulley and a slide screw. The researchers can also modify the shape of the mask based on actual human faces: to "copy" a face, they need only a 3D scanner to determine the locations of an individual's 17 facial points, which are then driven into position using a laptop and 56 motor control boards. The researchers also mention that the shifting robot can even display an individual's hair style and skin color if a photo of their face is projected onto the 3D mask.
Singapore
Prof Nadia Thalmann, a Nanyang Technological University scientist, directed efforts of the Institute for Media Innovation along with the School of Computer Engineering in the development of a social robot, Nadine. Nadine is powered by software similar to Apple's Siri or Microsoft's Cortana. Nadine may become a personal assistant in offices and homes in future, or she may become a companion for the young and the elderly.
Assoc Prof Gerald Seet from the School of Mechanical & Aerospace Engineering and the BeingThere Centre led a three-year R&D development in tele-presence robotics, creating EDGAR. A remote user can control EDGAR with the user's face and expressions displayed on the robot's face in real time. The robot also mimics their upper body movements.
South Korea
KITECH researched and developed EveR-1, an android interpersonal communications model capable of emulating human emotional expression via facial "musculature" and capable of rudimentary conversation, having a vocabulary of around 400 words. She is tall and weighs , matching the average figure of a Korean woman in her twenties. EveR-1's name derives from the Biblical Eve, plus the letter r for robot. EveR-1's advanced computing processing power enables speech recognition and vocal synthesis, at the same time processing lip synchronization and visual recognition by 90-degree micro-CCD cameras with face recognition technology. An independent microchip inside her artificial brain handles gesture expression, body coordination, and emotion expression. Her whole body is made of highly advanced synthetic jelly silicon and with 60 artificial joints in her face, neck, and lower body; she is able to demonstrate realistic facial expressions and sing while simultaneously dancing. In South Korea, the Ministry of Information and Communication had an ambitious plan to put a robot in every household by 2020. Several robot cities have been planned for the country: the first will be built in 2016 at a cost of 500 billion won (US$440 million), of which 50 billion is direct government investment. The new robot city will feature research and development centers for manufacturers and part suppliers, as well as exhibition halls and a stadium for robot competitions. The country's new Robotics Ethics Charter will establish ground rules and laws for human interaction with robots in the future, setting standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots to prevent human abuse of robots and vice versa.
United States
Walt Disney and a staff of Imagineers created Great Moments with Mr. Lincoln that debuted at the 1964 New York World's Fair.
Dr. William Barry, an Education Futurist and former visiting West Point Professor of Philosophy and Ethical Reasoning at the United States Military Academy, created an AI android character named "Maria Bot". This Interface AI android was named after the infamous fictional robot Maria in the 1927 film Metropolis, as a well-behaved distant relative. Maria Bot is the first AI Android Teaching Assistant at the university level. Maria Bot has appeared as a keynote speaker as a duo with Barry for a TEDx talk in Everett, Washington in February 2020.
Resembling a human from the shoulders up, Maria Bot is a virtual being android that has complex facial expressions and head movement and engages in conversation about a variety of subjects. She uses AI to process and synthesize information to make her own decisions on how to talk and engage. She collects data through conversations, direct data inputs such as books or articles, and through internet sources.
Maria Bot was built by an international high-tech company for Barry to help improve education quality and eliminate education poverty. Maria Bot is designed to create new ways for students to engage and discuss ethical issues raised by the increasing presence of robots and artificial intelligence. Barry also uses Maria Bot to demonstrate that programming a robot with life-affirming, ethical framework makes them more likely to help humans to do the same.
Maria Bot is an ambassador robot for good and ethical AI technology.
Hanson Robotics, Inc., of Texas and KAIST produced an android portrait of Albert Einstein, using Hanson's facial android technology mounted on KAIST's life-size walking bipedal robot body. This Einstein android, also called "Albert Hubo", thus represents the first full-body walking android in history. Hanson Robotics, the FedEx Institute of Technology, and the University of Texas at Arlington also developed the android portrait of sci-fi author Philip K. Dick (creator of Do Androids Dream of Electric Sheep?, the basis for the film Blade Runner), with full conversational capabilities that incorporated thousands of pages of the author's works. In 2005, the PKD android won a first-place artificial intelligence award from AAAI.
Use in fiction
Androids are a staple of science fiction. Isaac Asimov pioneered the fictionalization of the science of robotics and artificial intelligence, notably in his 1950s series I, Robot. One thing common to most fictional androids is that the real-life technological challenges associated with creating thoroughly human-like robots — such as the creation of strong artificial intelligence—are assumed to have been solved. Fictional androids are often depicted as mentally and physically equal or superior to humans—moving, thinking and speaking as fluidly as them.
The tension between the nonhuman substance and the human appearance—or even human ambitions—of androids is the dramatic impetus behind most of their fictional depictions. Some android heroes seek, like Pinocchio, to become human, as in the film Bicentennial Man, or Data in Star Trek: The Next Generation. Others, as in the film Westworld, rebel against abuse by careless humans. Android hunter Deckard in Do Androids Dream of Electric Sheep? and its film adaptation Blade Runner discovers that his targets appear to be, in some ways, more "human" than he is. The sequel Blade Runner 2049 involves android hunter K, himself an android, discovering the same thing. Android stories, therefore, are not essentially stories "about" androids; they are stories about the human condition and what it means to be human.
One aspect of writing about the meaning of humanity is to use discrimination against androids as a mechanism for exploring racism in society, as in Blade Runner. Perhaps the clearest example of this is John Brunner's 1968 novel Into the Slave Nebula, where the blue-skinned android slaves are explicitly shown to be fully human. More recently, the androids Bishop and Annalee Call in the films Aliens and Alien Resurrection are used as vehicles for exploring how humans deal with the presence of an "Other". The 2018 video game Detroit: Become Human also explores how androids are treated as second class citizens in a near future society.
Female androids, or "gynoids", are often seen in science fiction, and can be viewed as a continuation of the long tradition of men attempting to create the stereotypical "perfect woman". Examples include the Greek myth of Pygmalion and the female robot Maria in Fritz Lang's Metropolis. Some gynoids, like Pris in Blade Runner, are designed as sex-objects, with the intent of "pleasing men's violent sexual desires", or as submissive, servile companions, such as in The Stepford Wives. Fiction about gynoids has therefore been described as reinforcing "essentialist ideas of femininity", although others have suggested that the treatment of androids is a way of exploring racism and misogyny in society.
The 2015 Japanese film Sayonara, starring Geminoid F, was promoted as "the first movie to feature an android performing opposite a human actor".
See also
References
Further reading
Kerman, Judith B. (1991). Retrofitting Blade Runner: Issues in Ridley Scott's Blade Runner and Philip K. Dick's Do Androids Dream of Electric Sheep? Bowling Green, OH: Bowling Green State University Popular Press. .
Perkowitz, Sidney (2004). Digital People: From Bionic Humans to Androids. Joseph Henry Press. .
Shelde, Per (1993). Androids, Humanoids, and Other Science Fiction Monsters: Science and Soul in Science Fiction Films. New York: New York University Press. .
Ishiguro, Hiroshi. "Android science." Cognitive Science Society. 2005.
Glaser, Horst Albert and Rossbach, Sabine: The Artificial Human, Frankfurt/M., Bern, New York 2011 "The Artificial Human"
TechCast Article Series, Jason Rupinski and Richard Mix, "Public Attitudes to Androids: Robot Gender, Tasks, & Pricing"
Carpenter, J. (2009). Why send the Terminator to do R2D2s job?: Designing androids as rhetorical phenomena. Proceedings of HCI 2009: Beyond Gray Droids: Domestic Robot Design for the 21st Century. Cambridge, UK. 1 September.
Telotte, J.P. Replications: A Robotic History of the Science Fiction Film. University of Illinois Press, 1995.
External links
Japanese inventions
South Korean inventions
Osaka University research
Science fiction themes
Human–machine interaction
Robots | Android (robot) | [
"Physics",
"Technology",
"Engineering",
"Biology"
] | 3,597 | [
"Machines",
"Behavior",
"Robots",
"Physical systems",
"Android (robot)",
"Human–machine interaction",
"Design",
"Human behavior"
] |
736 | https://en.wikipedia.org/wiki/Albert%20Einstein | Albert Einstein (, ; ; 14 March 1879 – 18 April 1955) was a German-born theoretical physicist who is best known for developing the theory of relativity. Einstein also made important contributions to quantum mechanics. His mass–energy equivalence formula , which arises from special relativity, has been called "the world's most famous equation". He received the 1921 Nobel Prize in Physics for .
Born in the German Empire, Einstein moved to Switzerland in 1895, forsaking his German citizenship the following year. In 1897, at the age of seventeen he enrolled in the mathematics and physics teaching diploma program at the Swiss Federal Polytechnic School in Zurich, graduating in 1900. He acquired Swiss citizenship a year later and afterwards secured a permanent position at the Swiss Patent Office in Bern. In 1905, he submitted a successful PhD dissertation to the University of Zurich. In 1914, he moved to Berlin to join the Prussian Academy of Sciences and the Humboldt University of Berlin, becoming director of the Kaiser Wilhelm Institute for Physics. In 1933, while Einstein was visiting the United States, Adolf Hitler came to power in Germany. Horrified by the Nazi persecution of his fellow Jews, he decided to remain in the US, and was granted American citizenship in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential German nuclear weapons program and recommending that the US begin similar research.
In 1905, he published four groundbreaking papers, sometimes described as his annus mirabilis (miracle year). These papers outlined a theory of the photoelectric effect, explained Brownian motion, introduced his special theory of relativity, and demonstrated that if the special theory is correct, mass and energy are equivalent to each other. In 1915, he proposed a general theory of relativity that extended his system of mechanics to incorporate gravitation. A cosmological paper that he published the following year laid out the implications of general relativity for the modeling of the structure and evolution of the universe as a whole. In 1917, Einstein wrote a paper which laid the foundations for the concepts of both laser and maser, and contained a trove of information that would be beneficial to developments in physics later on, such as quantum electrodynamics and quantum optics. A joint paper in 1935, with physicist Nathan Rosen, introduced the notion of a wormhole.
In the middle part of his career, Einstein made important contributions to statistical mechanics and quantum theory. Especially notable was his work on the quantum physics of radiation, in which light consists of particles, subsequently called photons. With physicist Satyendra Nath Bose, he laid the groundwork for Bose–Einstein statistics. For much of the last phase of his academic life, Einstein worked on two endeavors that ultimately proved unsuccessful. First, he advocated against quantum theory's introduction of fundamental randomness into science's picture of the world, objecting that "God does not play dice". Second, he attempted to devise a unified field theory by generalizing his geometric theory of gravitation to include electromagnetism. As a result, he became increasingly isolated from mainstream modern physics. In 1999, he was named Time's Person of the Century. That same year, a Physics World poll named him the greatest physicist of all time.
Life and career
Childhood, youth and education
Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879. His parents, secular Ashkenazi Jews, were Hermann Einstein, a salesman and engineer, and Pauline Koch. In 1880, the family moved to Munich's borough of Ludwigsvorstadt-Isarvorstadt, where Einstein's father and his uncle Jakob founded Elektrotechnische Fabrik J. Einstein & Cie, a company that manufactured electrical equipment based on direct current. He often related a formative event from his youth, when he was sick in bed and his father brought him a compass. This sparked his lifelong fascination with electromagnetism. He realized that "Something deeply hidden had to be behind things."
Albert attended St. Peter's Catholic elementary school in Munich from the age of five. When he was eight, he was transferred to the Luitpold Gymnasium, where he received advanced primary and then secondary school education.
In 1894, Hermann and Jakob's company tendered for a contract to install electric lighting in Munich, but without success—they lacked the capital that would have been required to update their technology from direct current to the more efficient, alternating current alternative. The failure of their bid forced them to sell their Munich factory and search for new opportunities elsewhere. The Einstein family moved to Italy, first to Milan and a few months later to Pavia, where they settled in Palazzo Cornazzani. Einstein, then fifteen, stayed behind in Munich in order to finish his schooling. His father wanted him to study electrical engineering, but he was a fractious pupil who found the Gymnasium's regimen and teaching methods far from congenial. He later wrote that the school's policy of strict rote learning was harmful to creativity. At the end of December 1894, a letter from a doctor persuaded the Luitpold's authorities to release him from its care, and he joined his family in Pavia. While in Italy as a teenager, he wrote an essay entitled "On the Investigation of the State of the Ether in a Magnetic Field".
Einstein excelled at physics and mathematics from an early age, and soon acquired the mathematical expertise normally only found in a child several years his senior. He began teaching himself algebra, calculus and Euclidean geometry when he was twelve; he made such rapid progress that he discovered an original proof of the Pythagorean theorem before his thirteenth birthday. A family tutor, Max Talmud, said that only a short time after he had given the twelve-year-old Einstein a geometry textbook, the boy had worked through the whole book. Einstein himself recorded that he had "mastered integral and differential calculus" while still just fourteen. His love of algebra and geometry was so great that at twelve, he was already confident that nature could be understood as a "mathematical structure".
At thirteen, when his range of enthusiasms had broadened to include music and philosophy, Talmud introduced Einstein to Kant's Critique of Pure Reason. Kant became his favorite philosopher, according to Talmud.
In 1895, at the age of sixteen, Einstein sat the entrance examination for the Federal Polytechnic School (later the Eidgenössische Technische Hochschule, ETH) in Zurich, Switzerland. He failed to reach the required standard in the general part of the test, but performed with distinction in physics and mathematics. On the advice of the polytechnic's principal, he completed his secondary education at the Argovian cantonal school (a gymnasium) in Aarau, Switzerland, graduating in 1896. While lodging in Aarau with the family of Jost Winteler, he fell in love with Winteler's daughter, Marie. (His sister, Maja, later married Winteler's son Paul.)
In January 1896, with his father's approval, Einstein renounced his citizenship of the German Kingdom of Württemberg in order to avoid conscription into military service. The Matura (graduation for the successful completion of higher secondary schooling), awarded to him in September 1896, acknowledged him to have performed well across most of the curriculum, allotting him a top grade of 6 for history, physics, algebra, geometry, and descriptive geometry. At seventeen, he enrolled in the four-year mathematics and physics teaching diploma program at the Federal Polytechnic School. Marie Winteler, a year older than him, took up a teaching post in Olsberg, Switzerland.
The five other polytechnic school freshmen following the same course as Einstein included just one woman, a twenty-year-old Serbian, Mileva Marić. Over the next few years, the pair spent many hours discussing their shared interests and learning about topics in physics that the polytechnic school's lectures did not cover. In his letters to Marić, Einstein confessed that exploring science with her by his side was much more enjoyable than reading a textbook in solitude. Eventually the two students became not only friends but also lovers.
Historians of physics are divided on the question of the extent to which Marić contributed to the insights of Einstein's annus mirabilis publications. There is at least some evidence that he was influenced by her scientific ideas, but there are scholars who doubt whether her impact on his thought was of any great significance at all.
Marriages, relationships and children
Correspondence between Einstein and Marić, discovered and published in 1987, revealed that in early 1902, while Marić was visiting her parents in Novi Sad, she gave birth to a daughter, Lieserl. When Marić returned to Switzerland it was without the child, whose fate is uncertain. A letter of Einstein's that he wrote in September 1903 suggests that the girl was either given up for adoption or died of scarlet fever in infancy.
Einstein and Marić married in January 1903. In May 1904, their son Hans Albert was born in Bern, Switzerland. Their son Eduard was born in Zurich in July 1910. In letters that Einstein wrote to Marie Winteler in the months before Eduard's arrival, he described his love for his wife as "misguided" and mourned the "missed life" that he imagined he would have enjoyed if he had married Winteler instead: "I think of you in heartfelt love every spare minute and am so unhappy as only a man can be."
In 1912, Einstein entered into a relationship with Elsa Löwenthal, who was both his first cousin on his mother's side and his second cousin on his father's. When Marić learned of his infidelity soon after moving to Berlin with him in April 1914, she returned to Zurich, taking Hans Albert and Eduard with her. Einstein and Marić were granted a divorce on 14 February 1919 on the grounds of having lived apart for five years. As part of the divorce settlement, Einstein agreed that if he were to win a Nobel Prize, he would give the money that he received to Marić; he won the prize two years later.
Einstein married Löwenthal in 1919. In 1923, he began a relationship with a secretary named Betty Neumann, the niece of his close friend Hans Mühsam. Löwenthal nevertheless remained loyal to him, accompanying him when he emigrated to the United States in 1933. In 1935, she was diagnosed with heart and kidney problems. She died in December 1936.
A volume of Einstein's letters released by Hebrew University of Jerusalem in 2006 added some other women with whom he was romantically involved. They included Margarete Lebach (a married Austrian), Estella Katzenellenbogen (the rich owner of a florist business), Toni Mendel (a wealthy Jewish widow) and Ethel Michanowski (a Berlin socialite), with whom he spent time and from whom he accepted gifts while married to Löwenthal. After being widowed, Einstein was briefly in a relationship with Margarita Konenkova, thought by some to be a Russian spy; her husband, the Russian sculptor Sergei Konenkov, created the bronze bust of Einstein at the Institute for Advanced Study at Princeton.
Following an episode of acute mental illness at about the age of twenty, Einstein's son Eduard was diagnosed with schizophrenia. He spent the remainder of his life either in the care of his mother or in temporary confinement in an asylum. After her death, he was committed permanently to Burghölzli, the Psychiatric University Hospital in Zurich.
1902–1909: Assistant at the Swiss Patent Office
Einstein graduated from the Federal Polytechnic School in 1900, duly certified as competent to teach mathematics and physics. His successful acquisition of Swiss citizenship in February 1901 was not followed by the usual sequel of conscription; the Swiss authorities deemed him medically unfit for military service. He found that Swiss schools too appeared to have no use for him, failing to offer him a teaching position despite the almost two years that he spent applying for one. Eventually it was with the help of Marcel Grossmann's father that he secured a post in Bern at the Swiss Patent Office, as an assistant examiner – level III.
Patent applications that landed on Einstein's desk for his evaluation included ideas for a gravel sorter and an electric typewriter. His employers were pleased enough with his work to make his position permanent in 1903, although they did not think that he should be promoted until he had "fully mastered machine technology". It is conceivable that his labors at the patent office had a bearing on his development of his special theory of relativity. He arrived at his revolutionary ideas about space, time and light through thought experiments about the transmission of signals and the synchronization of clocks, matters which also figured in some of the inventions submitted to him for assessment.
In 1902, Einstein and some friends whom he had met in Bern formed a group that held regular meetings to discuss science and philosophy. Their choice of a name for their club, the Olympia Academy, was an ironic comment upon its far from Olympian status. Sometimes they were joined by Marić, who limited her participation in their proceedings to careful listening. The thinkers whose works they reflected upon included Henri Poincaré, Ernst Mach and David Hume, all of whom significantly influenced Einstein's own subsequent ideas and beliefs.
1900–1905: First scientific papers
Einstein's first paper, "Folgerungen aus den Capillaritätserscheinungen" ("Conclusions drawn from the phenomena of capillarity"), in which he proposed a model of intermolecular attraction that he afterwards disavowed as worthless, was published in the journal Annalen der Physik in 1901. His 24-page doctoral dissertation also addressed a topic in molecular physics. Titled "Eine neue Bestimmung der Moleküldimensionen" ("A New Determination of Molecular Dimensions") and dedicated to his friend Marcel Grossmann, it was completed on 30 April 1905 and approved by Professor Alfred Kleiner of the University of Zurich three months later. (Einstein was formally awarded his PhD on 15 January 1906.) Four other pieces of work that Einstein completed in 1905—his famous papers on the photoelectric effect, Brownian motion, his special theory of relativity and the equivalence of mass and energy—have led to the year being celebrated as an annus mirabilis for physics akin to 1666 (the year in which Isaac Newton experienced his greatest epiphanies). The publications deeply impressed Einstein's contemporaries.
1908–1933: Academic career in Europe
Einstein's sabbatical as a civil servant approached its end in 1908, when he secured a junior teaching position at the University of Bern. In 1909, a lecture on relativistic electrodynamics that he gave at the University of Zurich, much admired by Alfred Kleiner, led to Zurich's luring him away from Bern with a newly created associate professorship. Promotion to a full professorship followed in April 1911, when he accepted a chair at the German Charles-Ferdinand University in Prague, a move which required him to become an Austrian citizen of the Austro-Hungarian Empire. His time in Prague saw him producing eleven research papers.
In July 1912, he returned to his alma mater, the ETH Zurich, to take up a chair in theoretical physics. His teaching activities there centred on thermodynamics and analytical mechanics, and his research interests included the molecular theory of heat, continuum mechanics and the development of a relativistic theory of gravitation. In his work on the latter topic, he was assisted by his friend, Marcel Grossmann, whose knowledge of the kind of mathematics required was greater than his own.
In the spring of 1913, two German visitors, Max Planck and Walther Nernst, called upon Einstein in Zurich in the hope of persuading him to relocate to Berlin. They offered him membership of the Prussian Academy of Sciences, the directorship of the planned Kaiser Wilhelm Institute for Physics and a chair at the Humboldt University of Berlin that would allow him to pursue his research supported by a professorial salary but with no teaching duties to burden him. Their invitation was all the more appealing to him because Berlin happened to be the home of his latest girlfriend, Elsa Löwenthal. He duly joined the Academy on 24 July 1913, and moved into an apartment in the Berlin district of Dahlem on 1 April 1914. He was installed in his Humboldt University position shortly thereafter.
The outbreak of the First World War in July 1914 marked the beginning of Einstein's gradual estrangement from the nation of his birth. When the "Manifesto of the Ninety-Three" was published in October 1914—a document signed by a host of prominent German thinkers that justified Germany's belligerence—Einstein was one of the few German intellectuals to distance himself from it and sign the alternative, eirenic "Manifesto to the Europeans" instead. However, this expression of his doubts about German policy did not prevent him from being elected to a two-year term as president of the German Physical Society in 1916. When the Kaiser Wilhelm Institute for Physics opened its doors the following year—its foundation delayed because of the war—Einstein was appointed its first director, just as Planck and Nernst had promised.
Einstein was elected a Foreign Member of the Royal Netherlands Academy of Arts and Sciences in 1920, and a Foreign Member of the Royal Society in 1921. In 1922, he was awarded the 1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect". At this point some physicists still regarded the general theory of relativity skeptically, and the Nobel citation displayed a degree of doubt even about the work on photoelectricity that it acknowledged: it did not assent to Einstein's notion of the particulate nature of light, which only won over the entire scientific community when S. N. Bose derived the Planck spectrum in 1924. That same year, Einstein was elected an International Honorary Member of the American Academy of Arts and Sciences. Britain's closest equivalent of the Nobel award, the Royal Society's Copley Medal, was not hung around Einstein's neck until 1925. He was elected an International Member of the American Philosophical Society in 1930.
Einstein resigned from the Prussian Academy in March 1933. His accomplishments in Berlin had included the completion of the general theory of relativity, proving the Einstein–de Haas effect, contributing to the quantum theory of radiation, and the development of Bose–Einstein statistics.
1919: Putting general relativity to the test
In 1907, Einstein reached a milestone on his long journey from his special theory of relativity to a new idea of gravitation with the formulation of his equivalence principle, which asserts that an observer in an infinitesimally small box falling freely in a gravitational field would be unable to find any evidence that the field exists. In 1911, he used the principle to estimate the amount by which a ray of light from a distant star would be bent by the gravitational pull of the Sun as it passed close to the Sun's photosphere (that is, the Sun's apparent surface). He reworked his calculation in 1913, having now found a way to model gravitation with the Riemann curvature tensor of a non-Euclidean four-dimensional spacetime. By the fall of 1915, his reimagining of the mathematics of gravitation in terms of Riemannian geometry was complete, and he applied his new theory not just to the behavior of the Sun as a gravitational lens but also to another astronomical phenomenon, the precession of the perihelion of Mercury (a slow drift in the point in Mercury's elliptical orbit at which it approaches the Sun most closely). A total eclipse of the Sun that took place on 29 May 1919 provided an opportunity to put his theory of gravitational lensing to the test, and observations performed by Sir Arthur Eddington yielded results that were consistent with his calculations. Eddington's work was reported at length in newspapers around the world. On 7 November 1919, for example, the leading British newspaper, The Times, printed a banner headline that read: "Revolution in Science – New Theory of the Universe – Newtonian Ideas Overthrown".
1921–1923: Coming to terms with fame
With Eddington's eclipse observations widely reported not just in academic journals but by the popular press as well, Einstein became world-famous, a genius who had shattered a paradigm that had been basic to physicists' understanding of the universe since the seventeenth century.
Einstein began his new life as an intellectual icon in America, where he arrived on 2 April 1921. He was welcomed to New York City by Mayor John Francis Hylan, and then spent three weeks giving lectures and attending receptions. He spoke several times at Columbia University and Princeton, and in Washington, he visited the White House with representatives of the National Academy of Sciences. He returned to Europe via London, where he was the guest of the philosopher and statesman Viscount Haldane. He used his time in the British capital to meet several people prominent in British scientific, political or intellectual life, and to deliver a lecture at King's College. In July 1921, he published an essay, "My First Impression of the U.S.A.", in which he sought to sketch the American character, much as had Alexis de Tocqueville in Democracy in America (1835). He wrote of his transatlantic hosts in highly approving terms:
In 1922, Einstein's travels were to the old world rather than the new. He devoted six months to a tour of Asia that saw him speaking in Japan, Singapore and Sri Lanka (then known as Ceylon). After his first public lecture in Tokyo, he met Emperor Yoshihito and his wife at the Imperial Palace, with thousands of spectators thronging the streets in the hope of catching a glimpse of him. (In a letter to his sons, he wrote that Japanese people seemed to him to be generally modest, intelligent and considerate, and to have a true appreciation of art, but the picture of them in his diary was less flattering, and his journal also contains uncomplimentary views of the peoples of China and India.) He was greeted with even greater enthusiasm on the last leg of his tour, in which he spent twelve days in Mandatory Palestine, newly entrusted to British rule by the League of Nations in the aftermath of the First World War. Sir Herbert Samuel, the British High Commissioner, welcomed him with a degree of ceremony normally only accorded to a visiting head of state, including a cannon salute. One reception held in his honor was stormed by people determined to hear him speak: he told them that he was happy that Jews were beginning to be recognized as a force in the world.
Einstein's decision to tour the eastern hemisphere in 1922 meant that he was unable to go to Stockholm in the December of that year to participate in the Nobel prize ceremony. His place at the traditional Nobel banquet was taken by a German diplomat, who gave a speech praising him not only as a physicist but also as a campaigner for peace. A two-week visit to Spain that he undertook in 1923 saw him collecting another award, a membership of the Spanish Academy of Sciences signified by a diploma handed to him by King Alfonso XIII. (His Spanish trip also gave him a chance to meet a fellow Nobel laureate, the neuroanatomist Santiago Ramón y Cajal.)
1922–1932: Serving the League of Nations
From 1922 until 1932, with the exception of a few months in 1923 and 1924, Einstein was a member of the Geneva-based International Committee on Intellectual Cooperation of the League of Nations, a group set up by the League to encourage scientists, artists, scholars, teachers and other people engaged in the life of the mind to work more closely with their counterparts in other countries. He was appointed as a German delegate rather than as a representative of Switzerland because of the machinations of two Catholic activists, Oskar Halecki and Giuseppe Motta. By persuading Secretary General Eric Drummond to deny Einstein the place on the committee reserved for a Swiss thinker, they created an opening for Gonzague de Reynold, who used his League of Nations position as a platform from which to promote traditional Catholic doctrine. Einstein's former physics professor Hendrik Lorentz and the Polish chemist Marie Curie were also members of the committee.
1925: Touring South America
In March and April 1925, Einstein and his wife visited South America, where they spent about a week in Brazil, a week in Uruguay and a month in Argentina. Their tour was suggested by Jorge Duclout (1856–1927) and Mauricio Nirenstein (1877–1935) with the support of several Argentine scholars, including Julio Rey Pastor, Jakob Laub, and Leopoldo Lugones, and was financed primarily by the Council of the University of Buenos Aires and the Asociación Hebraica Argentina (Argentine Hebraic Association) with a smaller contribution from the Argentine-Germanic Cultural Institution.
1930–1931: Touring the US
In December 1930, Einstein began another significant sojourn in the United States, drawn back to the US by the offer of a two-month research fellowship at the California Institute of Technology. Caltech supported him in his wish that he should not be exposed to quite as much attention from the media as he had experienced when visiting the US in 1921, and he therefore declined all the invitations to receive prizes or make speeches that his admirers poured down upon him. But he remained willing to allow his fans at least some of the time with him that they requested.
After arriving in New York City, Einstein was taken to various places and events, including Chinatown, a lunch with the editors of The New York Times, and a performance of Carmen at the Metropolitan Opera, where he was cheered by the audience on his arrival. During the days following, he was given the keys to the city by Mayor Jimmy Walker and met Nicholas Murray Butler, the president of Columbia University, who described Einstein as "the ruling monarch of the mind". Harry Emerson Fosdick, pastor at New York's Riverside Church, gave Einstein a tour of the church and showed him a full-size statue that the church made of Einstein, standing at the entrance. Also during his stay in New York, he joined a crowd of 15,000 people at Madison Square Garden during a Hanukkah celebration.
Einstein next traveled to California, where he met Caltech president and Nobel laureate Robert A. Millikan. His friendship with Millikan was an awkward one, as Millikan favored patriotic militarism, whereas Einstein was a pronounced pacifist. During an address to Caltech's students, Einstein noted that science was often inclined to do more harm than good.
This aversion to war also led Einstein to befriend author Upton Sinclair and film star Charlie Chaplin, both noted for their pacifism. Carl Laemmle, head of Universal Studios, gave Einstein a tour of his studio and introduced him to Chaplin. They had an instant rapport, with Chaplin inviting Einstein and his wife, Elsa, to his home for dinner. Chaplin said Einstein's outward persona, calm and gentle, seemed to conceal a "highly emotional temperament", from which came his "extraordinary intellectual energy".
Chaplin's film City Lights was to premiere a few days later in Hollywood, and Chaplin invited Einstein and Elsa to join him as his special guests; Walter Isaacson, Einstein's biographer, described this as one of the most memorable scenes of Einstein's new life as a celebrity. Chaplin visited Einstein at his home on a later trip to Berlin and recalled his "modest little flat" and the piano at which he had begun writing his theory.
1933: Emigration to the US
In February 1933, while on a visit to the United States, Einstein knew he could not return to Germany with the rise to power of the Nazis under Germany's new chancellor, Adolf Hitler.
While at American universities in early 1933, he undertook his third two-month visiting professorship at the California Institute of Technology in Pasadena. In February and March 1933, the Gestapo repeatedly raided his family's apartment in Berlin. He and his wife Elsa returned to Europe in March, and during the trip, they learned that the German Reichstag had passed the Enabling Act on 23 March, transforming Hitler's government into a de facto legal dictatorship, and that they would not be able to proceed to Berlin. Later on, they heard that their cottage had been raided by the Nazis and Einstein's personal sailboat confiscated. Upon landing in Antwerp, Belgium on 28 March, Einstein immediately went to the German consulate and surrendered his passport, formally renouncing his German citizenship. The Nazis later sold his boat and converted his cottage into a Hitler Youth camp.
Refugee status
In April 1933, Einstein discovered that the new German government had passed laws barring Jews from holding any official positions, including teaching at universities. Historian Gerald Holton describes how thousands of Jewish scientists were suddenly forced to give up their university positions, and how their names were removed from the rolls of the institutions where they were employed.
A month later, Einstein's works were among those targeted by the German Student Union in the Nazi book burnings, with Nazi propaganda minister Joseph Goebbels proclaiming, "Jewish intellectualism is dead." One German magazine included him in a list of enemies of the German regime with the phrase, "not yet hanged", offering a $5,000 bounty on his head. In a subsequent letter to physicist and friend Max Born, who had already emigrated from Germany to England, Einstein wrote of his alarm at what was happening in Germany. After moving to the US, he condemned the book burnings and those who had carried them out.
Einstein was now without a permanent home, unsure where he would live and work, and equally worried about the fate of countless other scientists still in Germany. Aided by the Academic Assistance Council, founded in April 1933 by British Liberal politician William Beveridge to help academics escape Nazi persecution, Einstein was able to leave Germany. He rented a house in De Haan, Belgium, where he lived for a few months. In late July 1933, he visited England for about six weeks at the invitation of the British Member of Parliament Commander Oliver Locker-Lampson, who had become friends with him in the preceding years. Locker-Lampson invited him to stay near his Cromer home in a secluded wooden cabin on Roughton Heath in the Parish of Roughton, Norfolk. To protect Einstein, Locker-Lampson had two bodyguards watch over him; a photo of them carrying shotguns and guarding Einstein was published in the Daily Herald on 24 July 1933.
Locker-Lampson took Einstein to meet Winston Churchill at his home, and later, Austen Chamberlain and former Prime Minister Lloyd George. Einstein asked them to help bring Jewish scientists out of Germany. British historian Martin Gilbert notes that Churchill responded immediately, and sent his friend, physicist Frederick Lindemann, to Germany to seek out Jewish scientists and place them in British universities. Churchill later observed that as a result of Germany having driven the Jews out, they had lowered their "technical standards" and put the Allies' technology ahead of theirs.
Einstein later contacted leaders of other nations, including Turkey's Prime Minister, İsmet İnönü, to whom he wrote in September 1933, requesting placement of unemployed German-Jewish scientists. As a result of Einstein's letter, Jewish invitees to Turkey eventually totaled over "1,000 saved individuals".
Locker-Lampson also submitted a bill to parliament to extend British citizenship to Einstein, during which period Einstein made a number of public appearances describing the crisis brewing in Europe. In one of his speeches, Locker-Lampson denounced Germany's treatment of Jews, while at the same time introducing a bill promoting Jewish citizenship in Palestine, as Jews were being denied citizenship elsewhere. In this speech he described Einstein as a "citizen of the world" who should be offered a temporary shelter in the UK. Both bills failed, however, and Einstein then accepted an earlier offer from the Institute for Advanced Study, in Princeton, New Jersey, US, to become a resident scholar.
Resident scholar at the Institute for Advanced Study
On 3 October 1933, Einstein delivered a speech on the importance of academic freedom before a packed audience at the Royal Albert Hall in London, with The Times reporting he was wildly cheered throughout. Four days later he returned to the US and took up a position at the Institute for Advanced Study, noted for having become a refuge for scientists fleeing Nazi Germany. At the time, most American universities, including Harvard, Princeton and Yale, had minimal or no Jewish faculty or students, as a result of their Jewish quotas, which lasted until the late 1940s.
Einstein was still undecided about his future. He had offers from several European universities, including Christ Church, Oxford, where he stayed for three short periods between May 1931 and June 1933 and was offered a five-year research fellowship (called a "studentship" at Christ Church), but in 1935, he arrived at the decision to remain permanently in the United States and apply for citizenship.
Einstein's affiliation with the Institute for Advanced Study would last until his death in 1955. He was one of the first four scholars selected (along with John von Neumann, Kurt Gödel, and Hermann Weyl) at the new Institute. He soon developed a close friendship with Gödel; the two would take long walks together discussing their work. Bruria Kaufman, his assistant, later became a physicist. During this period, Einstein tried to develop a unified field theory and to refute the accepted interpretation of quantum physics, both unsuccessfully. He lived at his Princeton home from 1935 onwards. The Albert Einstein House was made a National Historic Landmark in 1976.
World War II and the Manhattan Project
In 1939, a group of Hungarian scientists that included émigré physicist Leó Szilárd attempted to alert Washington to ongoing Nazi atomic bomb research. The group's warnings were discounted. Einstein and Szilárd, along with other refugees such as Edward Teller and Eugene Wigner, regarded it as their responsibility to alert Americans to the possibility that German scientists might win the race to build an atomic bomb. To make certain the US was aware of the danger, in July 1939, a few months before the beginning of World War II in Europe, Szilárd and Wigner visited Einstein to explain the possibility of atomic bombs, which Einstein, a pacifist, said he had never considered. He was asked to lend his support by writing a letter, with Szilárd, to President Roosevelt, recommending the US pay attention and engage in its own nuclear weapons research.
The letter is believed to have been a key stimulus for the US adoption of serious investigations into nuclear weapons on the eve of its entry into World War II. In addition to the letter, Einstein used his connections with the Belgian royal family and the Belgian queen mother to get access with a personal envoy to the White House's Oval Office. Some say that as a result of Einstein's letter and his meetings with Roosevelt, the US entered the "race" to develop the bomb, drawing on its "immense material, financial, and scientific resources" to initiate the Manhattan Project.
For Einstein, war was a disease, and he called for resistance to it. By signing the letter to Roosevelt, some argue, he went against his pacifist principles. In 1954, a year before his death, Einstein told his old friend Linus Pauling that signing the letter had been the one great mistake of his life. In 1955, Einstein and ten other intellectuals and scientists, including British philosopher Bertrand Russell, signed a manifesto highlighting the danger of nuclear weapons. In 1960 Einstein was included posthumously as a charter member of the World Academy of Art and Science (WAAS), an organization founded by distinguished scientists and intellectuals who committed themselves to the responsible and ethical advances of science, particularly in light of the development of nuclear weapons.
US citizenship
Einstein became an American citizen in 1940. Not long after settling into his career at the Institute for Advanced Study in Princeton, New Jersey, he expressed his appreciation of the meritocracy in American culture compared to Europe. He recognized the "right of individuals to say and think what they pleased" without social barriers. As a result, individuals were encouraged, he said, to be more creative, a trait he valued from his early education.
Einstein joined the National Association for the Advancement of Colored People (NAACP) in Princeton, where he campaigned for the civil rights of African Americans. He considered racism America's "worst disease". As part of his involvement, he corresponded with civil rights activist W. E. B. Du Bois and was prepared to testify on his behalf during his trial as an alleged foreign agent in 1951. When Einstein offered to be a character witness for Du Bois, the judge decided to drop the case.
In 1946, Einstein visited Lincoln University in Pennsylvania, a historically black college, where he was awarded an honorary degree. Lincoln was the first university in the United States to grant college degrees to African Americans; alumni include Langston Hughes and Thurgood Marshall. Einstein gave a speech about racism in America, adding that he did not intend to be quiet about it. A resident of Princeton recalled that Einstein had once paid the college tuition for a black student.
Personal views
Political views
In 1918, Einstein was one of the signatories of the founding proclamation of the German Democratic Party, a liberal party. Later in his life, Einstein's political views favored socialism and were critical of capitalism, as he detailed in essays such as "Why Socialism?". His opinions on the Bolsheviks also changed with time. In 1925, he criticized them for not having a "well-regulated system of government" and called their rule a "regime of terror and a tragedy in human history". He later adopted a more moderate view, criticizing their methods but praising them, which is shown by his 1929 remark on Vladimir Lenin:
Einstein offered and was called on to give judgments and opinions on matters often unrelated to theoretical physics or mathematics. He strongly advocated the idea of a democratic global government that would check the power of nation-states in the framework of a world federation. The FBI created a secret dossier on Einstein in 1932; by the time of his death, it was 1,427 pages long.
Einstein was deeply impressed by Mahatma Gandhi, with whom he corresponded and whom he regarded as a role model for later generations. The initial connection was established on 27 September 1931, when Wilfrid Israel took his Indian guest V. A. Sundaram to meet his friend Einstein at his summer home in the town of Caputh. Sundaram was Gandhi's disciple and special envoy, whom Wilfrid Israel had met while visiting India and the Indian leader's home in 1925. During the visit, Einstein wrote a short letter to Gandhi that was delivered to him through his envoy, and Gandhi responded quickly with his own letter. Although in the end Einstein and Gandhi were unable to meet as they had hoped, the direct connection between them was established through Wilfrid Israel.
Relationship with Zionism
Einstein was a figurehead leader in the establishment of the Hebrew University of Jerusalem, which opened in 1925. Earlier, in 1921, he was asked by the biochemist and president of the World Zionist Organization, Chaim Weizmann, to help raise funds for the planned university. He made suggestions for the creation of an Institute of Agriculture, a Chemical Institute and an Institute of Microbiology in order to fight the various ongoing epidemics such as malaria, which he called an "evil" that was undermining a third of the country's development. He also promoted the establishment of an Oriental Studies Institute, to include language courses given in both Hebrew and Arabic.
Einstein was not a nationalist and opposed the creation of an independent Jewish state. He felt that the waves of arriving Jews of the Aliyah could live alongside existing Arabs in Palestine. The state of Israel was established without his help in 1948; Einstein was limited to a marginal role in the Zionist movement. Upon the death of Israeli president Weizmann in November 1952, Prime Minister David Ben-Gurion offered Einstein the largely ceremonial position of President of Israel at the urging of Ezriel Carlebach. The offer was presented by Israel's ambassador in Washington, Abba Eban. Einstein wrote that he was "deeply moved", but "at once saddened and ashamed" that he could not accept it.
Religious and philosophical views
Per Lee Smolin, Einstein expounded his spiritual outlook in a wide array of writings and interviews. He said he had sympathy for the impersonal pantheistic God of Baruch Spinoza's philosophy. He did not believe in a personal god who concerns himself with fates and actions of human beings, a view which he described as naïve. He clarified, however, that he was not an atheist, preferring to call himself an agnostic or a deeply religious nonbeliever.
Einstein was primarily affiliated with non-religious humanist and Ethical Culture groups in both the UK and US. He served on the advisory board of the First Humanist Society of New York, and was an honorary associate of the Rationalist Association, which publishes New Humanist in Britain. For the 75th anniversary of the New York Society for Ethical Culture, he stated that the idea of Ethical Culture embodied his personal conception of what is most valuable and enduring in religious idealism. He observed,
In a German-language letter to philosopher Eric Gutkind, dated 3 January 1954, Einstein wrote:
Einstein had been sympathetic toward vegetarianism for a long time. In a letter in 1930 to Hermann Huth, vice-president of the German Vegetarian Federation (Deutsche Vegetarier-Bund), he wrote:
He became a vegetarian himself only during the last part of his life. In March 1954 he wrote in a letter:
Love of music
Einstein developed an appreciation for music at an early age. In his late journals he wrote:
His mother played the piano reasonably well and wanted her son to learn the violin, not only to instill in him a love of music but also to help him assimilate into German culture. According to conductor Leon Botstein, Einstein began playing when he was 5. However, he did not enjoy it at that age.
When he turned 13, he discovered Mozart's violin sonatas, whereupon he became enamored of Mozart's compositions and studied music more willingly. Einstein taught himself to play without "ever practicing systematically". At the age of 17, he was heard by a school examiner in Aarau while playing Beethoven's violin sonatas. The examiner stated afterward that his playing was remarkable and revealed great insight. What struck the examiner, writes Botstein, was that Einstein displayed a deep love of the music.
Music took on a pivotal and permanent role in Einstein's life from that period on. Although the idea of becoming a professional musician himself was not on his mind at any time, among those with whom Einstein played chamber music were a few professionals, including Kurt Appelbaum, and he performed for private audiences and friends. Chamber music had also become a regular part of his social life while living in Bern, Zurich, and Berlin, where he played with Max Planck and his son, among others. He is sometimes erroneously credited as the editor of the 1937 edition of the Köchel catalog of Mozart's work; that edition was prepared by Alfred Einstein, who may have been a distant relation.
In 1931, while engaged in research at the California Institute of Technology, he visited the Zoellner family conservatory in Los Angeles, where he played some of Beethoven and Mozart's works with members of the Zoellner Quartet. Near the end of his life, when the young Juilliard Quartet visited him in Princeton, he played his violin with them, and the quartet was impressed by his level of coordination and intonation.
Death
On 17 April 1955, Einstein experienced internal bleeding caused by the rupture of an abdominal aortic aneurysm, which had previously been reinforced surgically by Rudolph Nissen in 1948. He took the draft of a speech he was preparing for a television appearance commemorating the state of Israel's seventh anniversary with him to the hospital, but he did not live to complete it.
Einstein refused surgery, saying that he wanted to go when he wanted to and that it was tasteless to prolong life artificially. He died in the Princeton Hospital early the next morning at the age of 76, having continued to work until near the end.
During the autopsy, the pathologist Thomas Stoltz Harvey removed Einstein's brain for preservation without the permission of his family, in the hope that the neuroscience of the future would be able to discover what made Einstein so intelligent. Einstein's remains were cremated in Trenton, New Jersey, and his ashes were scattered at an undisclosed location.
In a memorial lecture delivered on 13 December 1965 at UNESCO headquarters, nuclear physicist J. Robert Oppenheimer summarized his impression of Einstein as a person:
Einstein bequeathed his personal archives, library, and intellectual assets to the Hebrew University of Jerusalem in Israel.
Scientific career
Throughout his life, Einstein published hundreds of books and articles. He published more than 300 scientific papers and 150 non-scientific ones. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents. In addition to the work he did by himself, he also collaborated with other scientists on additional projects, including the Bose–Einstein statistics, the Einstein refrigerator and others.
Statistical mechanics
Thermodynamic fluctuations and statistical physics
Einstein's first paper submitted in 1900 to Annalen der Physik was on capillary attraction. It was published in 1901 with the title "Folgerungen aus den Capillaritätserscheinungen", which translates as "Conclusions from the capillarity phenomena". Two papers he published in 1902–1903 (thermodynamics) attempted to interpret atomic phenomena from a statistical point of view. These papers were the foundation for the 1905 paper on Brownian motion, which showed that Brownian movement can be construed as firm evidence that molecules exist. His research in 1903 and 1904 was mainly concerned with the effect of finite atomic size on diffusion phenomena.
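The quantitative core of the 1905 Brownian-motion result can be stated in one relation, given here in its standard modern form rather than in the notation of the paper: the mean squared displacement of a suspended particle grows linearly in time, with a diffusion coefficient fixed by temperature, fluid viscosity and particle size,
\[ \langle x^{2} \rangle = 2Dt, \qquad D = \frac{k_{\mathrm{B}}T}{6\pi\eta r}, \]
where k_B is the Boltzmann constant, T the absolute temperature, η the viscosity and r the particle radius. Measuring ⟨x²⟩ under a microscope therefore fixes the Boltzmann constant, and with it Avogadro's number, which is how Brownian motion became countable evidence for molecules.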
Theory of critical opalescence
Einstein returned to the problem of thermodynamic fluctuations, giving a treatment of the density variations in a fluid at its critical point. Ordinarily the density fluctuations are controlled by the second derivative of the free energy with respect to the density. At the critical point, this derivative is zero, leading to large fluctuations. The effect of density fluctuations is that light of all wavelengths is scattered, making the fluid look milky white. Einstein relates this to Rayleigh scattering, which is what happens when the fluctuation size is much smaller than the wavelength, and which explains why the sky is blue. Einstein quantitatively derived critical opalescence from a treatment of density fluctuations, and demonstrated how both the effect and Rayleigh scattering originate from the atomistic constitution of matter.
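For orientation, the Rayleigh regime referred to here can be summarized by the standard textbook scaling (not Einstein's own notation): the intensity scattered by fluctuations much smaller than the wavelength varies as
\[ I_{\text{scattered}} \propto \frac{1}{\lambda^{4}}, \]
so blue light is scattered far more strongly than red. Near the critical point the fluctuations grow comparable to the wavelength of light, this simple scaling gives way to strong scattering at all wavelengths, and the fluid turns opalescent.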
1905 – Annus Mirabilis papers
The Annus Mirabilis papers are four articles pertaining to the photoelectric effect (which gave rise to quantum theory), Brownian motion, the special theory of relativity, and E = mc² that Einstein published in the Annalen der Physik scientific journal in 1905. These four works contributed substantially to the foundation of modern physics and changed views on space, time, and matter. The four papers are:
Special relativity
Einstein's "" ("On the Electrodynamics of Moving Bodies") was received on 30 June 1905 and published 26 September of that same year. It reconciled conflicts between Maxwell's equations (the laws of electricity and magnetism) and the laws of Newtonian mechanics by introducing changes to the laws of mechanics. Observationally, the effects of these changes are most apparent at high speeds (where objects are moving at speeds close to the speed of light). The theory developed in this paper later became known as Einstein's special theory of relativity.
This paper predicted that, when measured in the frame of a relatively moving observer, a clock carried by a moving body would appear to slow down, and the body itself would contract in its direction of motion. This paper also argued that the idea of a luminiferous aether—one of the leading theoretical entities in physics at the time—was superfluous.
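In modern textbook notation (not that of the 1905 paper), both effects are governed by the Lorentz factor:
\[ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \Delta t = \gamma\,\Delta t_{0}, \qquad L = \frac{L_{0}}{\gamma}, \]
where Δt₀ and L₀ are the time interval and length measured in the body's own rest frame. At everyday speeds γ is indistinguishable from 1, which is why the effects escaped earlier notice.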
In his paper on mass–energy equivalence, Einstein produced E = mc² as a consequence of his special relativity equations. Einstein's 1905 work on relativity remained controversial for many years, but was accepted by leading physicists, starting with Max Planck.
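As a rough worked number (an illustration in modern units, not a figure from the 1905 paper), a single kilogram of mass corresponds to a rest energy of
\[ E = mc^{2} = (1\,\mathrm{kg}) \times (2.998\times 10^{8}\,\mathrm{m\,s^{-1}})^{2} \approx 9\times 10^{16}\,\mathrm{J}, \]
which is why even the minute mass changes in nuclear reactions release enormous amounts of energy.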
Einstein originally framed special relativity in terms of kinematics (the study of moving bodies). In 1908, Hermann Minkowski reinterpreted special relativity in geometric terms as a theory of spacetime. Einstein adopted Minkowski's formalism in his 1915 general theory of relativity.
General relativity
General relativity and the equivalence principle
General relativity (GR) is a theory of gravitation that was developed by Einstein between 1907 and 1915. According to it, the observed gravitational attraction between masses results from the warping of spacetime by those masses. General relativity has developed into an essential tool in modern astrophysics; it provides the foundation for the current understanding of black holes, regions of space where gravitational attraction is so strong that not even light can escape.
As Einstein later said, the reason for the development of general relativity was that the preference of inertial motions within special relativity was unsatisfactory, while a theory which from the outset prefers no state of motion (even accelerated ones) should appear more satisfactory. Consequently, in 1907 he published an article on acceleration under special relativity. In that article titled "On the Relativity Principle and the Conclusions Drawn from It", he argued that free fall is really inertial motion, and that for a free-falling observer the rules of special relativity must apply. This argument is called the equivalence principle. In the same article, Einstein also predicted the phenomena of gravitational time dilation, gravitational redshift and gravitational lensing.
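In the weak-field limit, the 1907 predictions take a simple form (a standard modern statement, given here for orientation): between two points separated by height h in a uniform gravitational field g, clock rates and light frequencies differ by a fraction of order
\[ \frac{\Delta\nu}{\nu} \approx \frac{gh}{c^{2}}, \]
so a clock higher in the field runs very slightly faster, an effect now routinely measured with atomic clocks and corrected for in satellite navigation.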
In 1911, Einstein published another article "On the Influence of Gravitation on the Propagation of Light" expanding on the 1907 article, in which he estimated the amount of deflection of light by massive bodies. Thus, the theoretical prediction of general relativity could for the first time be tested experimentally.
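The numbers involved (standard values, stated here for context rather than quoted from the 1911 paper): the equivalence-principle calculation gives a deflection at the Sun's limb of roughly half the value later obtained from the completed theory,
\[ \delta = \frac{4GM_{\odot}}{c^{2}R_{\odot}} \approx 1.75'', \]
and it was this larger, fully general-relativistic value that the 1919 eclipse measurements supported.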
Gravitational waves
In 1916, Einstein predicted gravitational waves, ripples in the curvature of spacetime which propagate as waves, traveling outward from the source, transporting energy as gravitational radiation. The existence of gravitational waves is possible under general relativity due to its Lorentz invariance which brings the concept of a finite speed of propagation of the physical interactions of gravity with it. By contrast, gravitational waves cannot exist in the Newtonian theory of gravitation, which postulates that the physical interactions of gravity propagate at infinite speed.
The first, indirect, detection of gravitational waves came in the 1970s through observation of a pair of closely orbiting neutron stars, PSR B1913+16. The explanation for the decay in their orbital period was that they were emitting gravitational waves. Einstein's prediction was confirmed on 11 February 2016, when researchers at LIGO published the first observation of gravitational waves, detected on Earth on 14 September 2015, nearly one hundred years after the prediction.
Hole argument and Entwurf theory
While developing general relativity, Einstein became confused about the gauge invariance in the theory. He formulated an argument that led him to conclude that a general relativistic field theory is impossible. He gave up looking for fully generally covariant tensor equations and searched for equations that would be invariant under general linear transformations only.
In June 1913, the Entwurf ('draft') theory was the result of these investigations. As its name suggests, it was a sketch of a theory, less elegant and more difficult than general relativity, with the equations of motion supplemented by additional gauge fixing conditions. After more than two years of intensive work, Einstein realized that the hole argument was mistaken and abandoned the theory in November 1915.
Physical cosmology
In 1917, Einstein applied the general theory of relativity to the structure of the universe as a whole. He discovered that the general field equations predicted a universe that was dynamic, either contracting or expanding. As observational evidence for a dynamic universe was lacking at the time, Einstein introduced a new term, the cosmological constant, into the field equations, in order to allow the theory to predict a static universe. The modified field equations predicted a static universe of closed curvature, in accordance with Einstein's understanding of Mach's principle in these years. This model became known as the Einstein World or Einstein's static universe.
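In modern notation (a standard presentation rather than Einstein's 1917 form), the modified field equations read
\[ R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}, \]
where the term Λg_{μν} acts as a repulsion that can balance the gravitational attraction of matter and hold the model universe static.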
Following the discovery of the recession of the galaxies by Edwin Hubble in 1929, Einstein abandoned his static model of the universe, and proposed two dynamic models of the cosmos, the Friedmann–Einstein universe of 1931 and the Einstein–de Sitter universe of 1932. In each of these models, Einstein discarded the cosmological constant, claiming that it was "in any case theoretically unsatisfactory".
In many Einstein biographies, it is claimed that Einstein referred to the cosmological constant in later years as his "biggest blunder", based on a letter George Gamow claimed to have received from him. The astrophysicist Mario Livio has cast doubt on this claim.
In late 2013, a team led by the Irish physicist Cormac O'Raifeartaigh discovered evidence that, shortly after learning of Hubble's observations of the recession of the galaxies, Einstein considered a steady-state model of the universe. In a hitherto overlooked manuscript, apparently written in early 1931, Einstein explored a model of the expanding universe in which the density of matter remains constant due to a continuous creation of matter, a process that he associated with the cosmological constant. As he stated in the paper,
It thus appears that Einstein considered a steady-state model of the expanding universe many years before Hoyle, Bondi and Gold. However, Einstein's steady-state model contained a fundamental flaw and he quickly abandoned the idea.
Energy momentum pseudotensor
General relativity includes a dynamical spacetime, so it is difficult to see how to identify the conserved energy and momentum. Noether's theorem allows these quantities to be determined from a Lagrangian with translation invariance, but general covariance makes translation invariance into something of a gauge symmetry. The energy and momentum derived within general relativity by Noether's prescriptions do not make a real tensor for this reason.
Einstein argued that this is true for a fundamental reason: the gravitational field could be made to vanish by a choice of coordinates. He maintained that the non-covariant energy momentum pseudotensor was, in fact, the best description of the energy momentum distribution in a gravitational field. While the use of non-covariant objects like pseudotensors was criticized by Erwin Schrödinger and others, Einstein's approach has been echoed by physicists including Lev Landau and Evgeny Lifshitz.
Wormholes
In 1935, Einstein collaborated with Nathan Rosen to produce a model of a wormhole, often called Einstein–Rosen bridges. His motivation was to model elementary particles with charge as a solution of gravitational field equations, in line with the program outlined in the paper "Do Gravitational Fields play an Important Role in the Constitution of the Elementary Particles?". These solutions cut and pasted Schwarzschild black holes to make a bridge between two patches. Because these solutions included spacetime curvature without the presence of a physical body, Einstein and Rosen suggested that they could provide the beginnings of a theory that avoided the notion of point particles. However, it was later found that Einstein–Rosen bridges are not stable.
Einstein–Cartan theory
In order to incorporate spinning point particles into general relativity, the affine connection needed to be generalized to include an antisymmetric part, called the torsion. This modification was made by Einstein and Cartan in the 1920s.
Equations of motion
In general relativity, gravitational force is reimagined as curvature of spacetime. A curved path like an orbit is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "spacetime tells matter how to move; matter tells spacetime how to curve". The Einstein field equations cover the latter aspect of the theory, relating the curvature of spacetime to the distribution of matter and energy. The geodesic equation covers the former aspect, stating that freely falling bodies follow lines that are as straight as possible in a curved spacetime. Einstein regarded this as an "independent fundamental assumption" that had to be postulated in addition to the field equations in order to complete the theory. Believing this to be a shortcoming in how general relativity was originally presented, he wished to derive it from the field equations themselves. Since the equations of general relativity are non-linear, a lump of energy made out of pure gravitational fields, like a black hole, would move on a trajectory which is determined by the Einstein field equations themselves, not by a new law. Accordingly, Einstein proposed that the field equations would determine the path of a singular solution, like a black hole, to be a geodesic. Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from applying the field equations to the motion of a gravitational singularity, but this claim remains disputed.
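In standard modern notation (given for orientation, not as Einstein's original presentation), the two ingredients discussed here are
\[ G_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu} \qquad \text{and} \qquad \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\ \alpha\beta}\,\frac{dx^{\alpha}}{d\tau}\frac{dx^{\beta}}{d\tau} = 0, \]
the field equations relating the curvature term G_{μν} to the matter and energy content T_{μν}, and the geodesic equation describing free fall along the straightest available paths of the curved spacetime.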
Old quantum theory
Photons and energy quanta
In a 1905 paper, Einstein postulated that light itself consists of localized particles (quanta). Einstein's light quanta were rejected by nearly all physicists, including Max Planck and Niels Bohr. This idea only became universally accepted in 1919, with Robert Millikan's detailed experiments on the photoelectric effect, and with the measurement of Compton scattering.
Einstein concluded that each wave of frequency f is associated with a collection of photons with energy hf each, where h is the Planck constant. He did not say much more, because he was not sure how the particles were related to the wave. But he did suggest that this idea would explain certain experimental results, notably the photoelectric effect. Light quanta were dubbed photons by Gilbert N. Lewis in 1926.
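The relation at issue can be written in its familiar textbook form (symbols as usually defined today, not necessarily those of the 1905 paper): each quantum carries energy E = hf, and an electron ejected from a metal with work function φ has maximum kinetic energy
\[ K_{\max} = hf - \phi, \]
so below the threshold frequency f = φ/h no electrons are emitted however intense the light, a fact the wave picture alone could not explain and the one Millikan's measurements confirmed.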
Quantized atomic vibrations
In 1907, Einstein proposed a model of matter where each atom in a lattice structure is an independent harmonic oscillator. In the Einstein model, each atom oscillates independently—a series of equally spaced quantized states for each oscillator. Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem in classical mechanics. Peter Debye refined this model.
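Assigning every oscillator the same frequency ν leads to the heat capacity of the Einstein model (the standard form of the result, included here as an illustration):
\[ C_{V} = 3Nk_{\mathrm{B}}\left(\frac{\theta_{E}}{T}\right)^{2} \frac{e^{\theta_{E}/T}}{\left(e^{\theta_{E}/T}-1\right)^{2}}, \qquad \theta_{E} = \frac{h\nu}{k_{\mathrm{B}}}, \]
which recovers the classical Dulong–Petit value 3Nk_B at high temperature but falls toward zero as T → 0, the low-temperature behavior that classical theory could not reproduce.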
Bose–Einstein statistics
In 1924, Einstein received a description of a statistical model from Indian physicist Satyendra Nath Bose, based on a counting method that assumed that light could be understood as a gas of indistinguishable particles. Einstein noted that Bose's statistics applied to some atoms as well as to the proposed light particles, and submitted his translation of Bose's paper to the Zeitschrift für Physik. Einstein also published his own articles describing the model and its implications, among them the prediction of the Bose–Einstein condensate, a phenomenon in which a large fraction of the particles collects in the lowest quantum state at very low temperatures. It was not until 1995 that the first such condensate was produced experimentally by Eric Allin Cornell and Carl Wieman using ultra-cooling equipment built at the NIST–JILA laboratory at the University of Colorado at Boulder. Bose–Einstein statistics are now used to describe the behaviors of any assembly of bosons. Einstein's sketches for this project may be seen in the Einstein Archive in the library of Leiden University.
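The distribution that now carries their names gives the mean number of bosons occupying a single-particle state of energy ε (standard modern form):
\[ \langle n(\varepsilon) \rangle = \frac{1}{e^{(\varepsilon-\mu)/k_{\mathrm{B}}T} - 1}, \]
where μ is the chemical potential; as the temperature falls and μ approaches the lowest energy level, the occupation of that level grows without bound, which marks the onset of Bose–Einstein condensation.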
Wave–particle duality
Although the patent office promoted Einstein to Technical Examiner Second Class in 1906, he had not given up on academia. In 1908, he became a Privatdozent at the University of Bern. In "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" ("The Development of our Views on the Composition and Essence of Radiation"), on the quantization of light, and in an earlier 1909 paper, Einstein showed that Max Planck's energy quanta must have well-defined momenta and act in some respects as independent, point-like particles. This paper introduced the photon concept and inspired the notion of wave–particle duality in quantum mechanics. Einstein saw this wave–particle duality in radiation as concrete evidence for his conviction that physics needed a new, unified foundation.
Zero-point energy
In a series of works completed from 1911 to 1913, Planck reformulated his 1900 quantum theory and introduced the idea of zero-point energy in his "second quantum theory". Soon, this idea attracted the attention of Einstein and his assistant Otto Stern. Assuming the energy of rotating diatomic molecules contains zero-point energy, they then compared the theoretical specific heat of hydrogen gas with the experimental data. The numbers matched nicely. However, after publishing the findings, they promptly withdrew their support, because they no longer had confidence in the correctness of the idea of zero-point energy.
Stimulated emission
In 1917, at the height of his work on relativity, Einstein published an article in Physikalische Zeitschrift that proposed the possibility of stimulated emission, the physical process that makes possible the maser and the laser.
This article showed that the statistics of absorption and emission of light would only be consistent with Planck's distribution law if the emission of light into a mode with n photons would be enhanced statistically compared to the emission of light into an empty mode. This paper was enormously influential in the later development of quantum mechanics, because it was the first paper to show that the statistics of atomic transitions had simple laws.
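For a pair of non-degenerate levels, consistency with the Planck law fixes the relation between the coefficients for spontaneous and stimulated emission (the standard Einstein-coefficient relations, stated here in modern form):
\[ A_{21} = \frac{8\pi h\nu^{3}}{c^{3}}\,B_{21}, \qquad B_{12} = B_{21}, \]
so absorption and stimulated emission are equally probable per unit spectral energy density, and it is the stimulated term that allows a population-inverted medium to amplify light, the principle behind the maser and the laser.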
Matter waves
Einstein discovered Louis de Broglie's work and supported his ideas, which were received skeptically at first. In another major paper from this era, Einstein observed that de Broglie waves could explain the quantization rules of Bohr and Sommerfeld. This paper would inspire Schrödinger's work of 1926.
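De Broglie's central relation, which Einstein championed, assigns a wavelength to any particle of momentum p (standard form):
\[ \lambda = \frac{h}{p}, \]
and requiring a whole number of such wavelengths to fit around an electron's orbit reproduces the Bohr–Sommerfeld quantization conditions mentioned above.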
Quantum mechanics
Einstein's objections to quantum mechanics
Einstein played a major role in developing quantum theory, beginning with his 1905 paper on the photoelectric effect. However, he became displeased with modern quantum mechanics as it had evolved after 1925, despite its acceptance by other physicists. He was skeptical that the randomness of quantum mechanics was fundamental rather than the result of determinism, stating that God "is not playing at dice". Until the end of his life, he continued to maintain that quantum mechanics was incomplete.
Bohr versus Einstein
The Bohr–Einstein debates were a series of public disputes about quantum mechanics between Einstein and Niels Bohr, who were two of its founders. Their debates are remembered because of their importance to the philosophy of science and went on to influence later interpretations of quantum mechanics.
Einstein–Podolsky–Rosen paradox
Einstein never fully accepted quantum mechanics. While he recognized that it made correct predictions, he believed a more fundamental description of nature must be possible. Over the years he presented multiple arguments to this effect, but the one he preferred most dated to a debate with Bohr in 1930. Einstein suggested a thought experiment in which two objects are allowed to interact and then moved apart a great distance from each other. The quantum-mechanical description of the two objects is a mathematical entity known as a wavefunction. If the wavefunction that describes the two objects before their interaction is given, then the Schrödinger equation provides the wavefunction that describes them after their interaction. But because of what would later be called quantum entanglement, measuring one object would lead to an instantaneous change of the wavefunction describing the other object, no matter how far away it is. Moreover, the choice of which measurement to perform upon the first object would affect what wavefunction could result for the second object. Einstein reasoned that no influence could propagate from the first object to the second instantaneously. Indeed, he argued, physics depends on being able to tell one thing apart from another, and such instantaneous influences would call that into question. Because the true "physical condition" of the second object could not be immediately altered by an action done to the first, Einstein concluded, the wavefunction could not be that true physical condition, only an incomplete description of it.
A more famous version of this argument came in 1935, when Einstein published a paper with Boris Podolsky and Nathan Rosen that laid out what would become known as the EPR paradox. In this thought experiment, two particles interact in such a way that the wavefunction describing them is entangled. Then, no matter how far the two particles were separated, a precise position measurement on one particle would imply the ability to predict, perfectly, the result of measuring the position of the other particle. Likewise, a precise momentum measurement of one particle would result in an equally precise prediction of the momentum of the other particle, without needing to disturb the other particle in any way. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", which holds that if the value of a physical quantity can be predicted with certainty without disturbing a system, then there is an element of physical reality corresponding to that quantity. From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality.
In 1964, John Stewart Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes of the two measurements are correlated. This constraint would later be called a Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely they become separated. Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require this kind of nonlocality, the EPR paradox is resolved in a way that Einstein would have found unwelcome.
Despite this, and although Einstein personally found the argument in the EPR paper overly complicated, that paper became among the most influential papers published in Physical Review. It is considered a centerpiece of the development of quantum information theory.
Unified field theory
Encouraged by his success with general relativity, Einstein sought an even more ambitious geometrical theory that would treat gravitation and electromagnetism as aspects of a single entity. In 1950, he described his unified field theory in a Scientific American article titled "On the Generalized Theory of Gravitation". His attempt to find the most fundamental laws of nature won him praise but not success: a particularly conspicuous blemish of his model was that it did not accommodate the strong and weak nuclear forces, neither of which was well understood until many years after his death. Although most researchers now believe that Einstein's approach to unifying physics was mistaken, his goal of a theory of everything is one to which his successors still aspire.
Other investigations
Einstein conducted other investigations that were unsuccessful and abandoned. These pertain to force, superconductivity, and other research.
Collaboration with other scientists
In addition to longtime collaborators Leopold Infeld, Nathan Rosen, Peter Bergmann and others, Einstein also had some one-shot collaborations with various scientists.
Einstein–de Haas experiment
In 1908, Owen Willans Richardson predicted that a change in the magnetic moment of a free body will cause this body to rotate. This effect is a consequence of the conservation of angular momentum and is strong enough to be observable in ferromagnetic materials. Einstein and Wander Johannes de Haas published two papers in 1915 claiming the first experimental observation of the effect. Measurements of this kind demonstrate that the phenomenon of magnetization is caused by the alignment (polarization) of the angular momenta of the electrons in the material along the axis of magnetization. These measurements also allow the separation of the two contributions to the magnetization: that associated with the spin and that associated with the orbital motion of the electrons. The Einstein–de Haas experiment is the only experiment conceived, realized, and published by Albert Einstein himself.
A complete original version of the Einstein–de Haas experimental equipment was donated by Geertruida de Haas-Lorentz, wife of de Haas and daughter of Lorentz, to the Ampère Museum in Lyon, France, in 1961. It was later lost among the museum's holdings, rediscovered in 2023, and is now on display.
Einstein as an inventor
In 1926, Einstein and his former student Leó Szilárd co-invented (and in 1930, patented) the Einstein refrigerator. This absorption refrigerator was revolutionary at the time for having no moving parts and using only heat as an input. On 11 November 1930, a patent for the refrigerator was awarded to Einstein and Szilárd. Their invention was not immediately put into commercial production, but the most promising of their patents were acquired by the Swedish company Electrolux.
Einstein also invented an electromagnetic pump, a sound reproduction device, and several other household devices.
Legacy
Non-scientific
While traveling, Einstein wrote daily to his wife Elsa and adopted stepdaughters Margot and Ilse. The letters were included in the papers bequeathed to the Hebrew University of Jerusalem. Margot Einstein permitted the personal letters to be made available to the public, but requested that it not be done until twenty years after her death (she died in 1986). Barbara Wolff, of the Hebrew University's Albert Einstein Archives, told the BBC that there are about 3,500 pages of private correspondence written between 1912 and 1955.
Einstein's right of publicity was litigated in 2015 in a federal district court in California. Although the court initially held that the right had expired, that ruling was immediately appealed, and the decision was later vacated in its entirety. The underlying claims between the parties in that lawsuit were ultimately settled. The right is enforceable, and the Hebrew University of Jerusalem is the exclusive representative of that right. Corbis, successor to The Roger Richman Agency, licenses the use of his name and associated imagery, as agent for the university.
Mount Einstein in the Chugach Mountains of Alaska was named in 1955. Mount Einstein in New Zealand's Paparoa Range was named after him in 1970 by the Department of Scientific and Industrial Research.
In 1999, Einstein was named Time's Person of the Century.
Scientific
In 1999, a survey of the top 100 physicists voted for Einstein as the "greatest physicist ever", while a parallel survey of rank-and-file physicists gave the top spot to Isaac Newton, with Einstein second.
Physicist Lev Landau ranked physicists from 0 to 5 on a logarithmic scale of productivity and genius, with Newton and Einstein belonging in a "super league", with Newton receiving the highest ranking of 0, followed by Einstein with 0.5, while fathers of quantum mechanics such as Werner Heisenberg and Paul Dirac were ranked 1, with Landau himself a 2.
Physicist Eugene Wigner noted that while John von Neumann had the quickest and most acute mind he had ever known, Einstein's understanding was deeper than von Neumann's.
The year 2005 was designated the "World Year of Physics", also known as "Einstein Year", in recognition of Einstein's "miracle year" of 1905.
In popular culture
Einstein became one of the most famous scientific celebrities after the confirmation of his general theory of relativity in 1919. Although most of the public had little understanding of his work, he was widely recognized and admired. In the period before World War II, The New Yorker published a vignette in their "The Talk of the Town" feature saying that Einstein was so well known in America that he would be stopped on the street by people wanting him to explain "that theory". Eventually he came to cope with unwanted enquirers by pretending to be someone else:
Einstein has been the subject of or inspiration for many novels, films, plays, and works of music. He is a favorite model for depictions of absent-minded professors; his expressive face and distinctive hairstyle have been widely copied and exaggerated. Time magazine's Frederic Golden wrote that Einstein was "a cartoonist's dream come true". His intellectual achievements and originality made Einstein broadly synonymous with genius.
Many popular quotations are often misattributed to him.
Awards and honors
Einstein received numerous awards and honors, and in 1922 he was awarded the 1921 Nobel Prize in Physics. None of the nominations in 1921 met the criteria set by Alfred Nobel, so the 1921 prize was carried forward and awarded to Einstein in 1922.
Einsteinium, a synthetic chemical element, was named in his honor in 1955, a few months after his death.
Publications
Scientific
First of a series of papers on this topic.
A reprint of this book was published by Edition Erbrich in 1982.
Further information about the volumes published so far can be found on the webpages of the Einstein Papers Project and on the Princeton University Press Einstein Page.
Others
Einstein, Albert (September 1960). Foreword to Gandhi Wields the Weapon of Moral Power: Three Case Histories. Introduction by Bharatan Kumarappa. Ahmedabad: Navajivan Publishing House. pp. v–vi. Foreword originally written in April 1953.
The chasing a light beam thought experiment is described on pages 48–51.
See also
Bern Historical Museum (Einstein Museum)
Einstein notation
Frist Campus Center at Princeton University room 302 is associated with Einstein. The center was once the Palmer Physical Laboratory.
Heinrich Burkhardt
Heinrich Zangger
History of gravitational theory
List of coupled cousins
List of German inventors and discoverers
List of Jewish Nobel laureates
List of peace activists
Relativity priority dispute
Sticky bead argument
Notes
References
Works cited
Further reading
External links
Einstein's Personal Correspondence: Religion, Politics, The Holocaust, and Philosophy Shapell Manuscript Foundation
Federal Bureau of Investigation file on Albert Einstein
Einstein and his love of music, Physics World
including the Nobel Lecture 11 July 1923 Fundamental ideas and problems of the theory of relativity
Albert Einstein Archives Online (80,000+ Documents) (MSNBC, 19 March 2012)
Einstein's declaration of intention for American citizenship on the World Digital Library
Albert Einstein Collection at Brandeis University
The Collected Papers of Albert Einstein "Digital Einstein" at Princeton University
Home page of Albert Einstein at The Institute for Advanced Study
Albert – The Digital Repository of the IAS, which contains many digitized original documents and photographs
1879 births
1955 deaths
19th-century German Jews
20th-century American engineers
20th-century American inventors
20th-century American male writers
20th-century American non-fiction writers
20th-century American physicists
20th-century Swiss inventors
Academic staff of Charles University
Academic staff of ETH Zurich
Academic staff of the University of Bern
Academic staff of the University of Zurich
American agnostics
American Ashkenazi Jews
American democratic socialists
American humanists
American letter writers
American male non-fiction writers
American Nobel laureates
American pacifists
American relativity theorists
American science writers
American Zionists
Anti-nationalists
Deaths from abdominal aortic aneurysm
Denaturalized citizens of Germany
Albert
ETH Zurich alumni
European democratic socialists
German agnostics
German Ashkenazi Jews
German emigrants to Switzerland
German humanists
German male non-fiction writers
German Nobel laureates
German relativity theorists
German Zionists
Institute for Advanced Study faculty
Jewish agnostics
Jewish American non-fiction writers
Jewish American physicists
Jewish German physicists
Jewish emigrants from Nazi Germany to the United States
Jewish Nobel laureates
Jewish scientists
Jewish socialists
Labor Zionists
Max Planck Institute directors
Members of the American Philosophical Society
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Naturalised citizens of Austria
Naturalised citizens of Switzerland
Naturalized citizens of the United States
Nobel laureates in Physics
Pantheists
Patent examiners
People from Ulm
People who lost German citizenship
People with multiple citizenship
Philosophers of mathematics
Philosophers of science
Philosophy of science
Quantum physicists
Recipients of Franklin Medal
Scientists from Munich
Stateless people
Swiss agnostics
Swiss Ashkenazi Jews
Swiss cosmologists
Swiss emigrants to the United States
Swiss Nobel laureates
Swiss physicists
University of Zurich alumni
Winners of the Max Planck Medal
Württemberger emigrants to the United States | Albert Einstein | [
"Physics",
"Mathematics"
] | 15,570 | [
"Philosophers of mathematics",
"Quantum physicists",
"Quantum mechanics"
] |
772 | https://en.wikipedia.org/wiki/Ampere | The ampere ( , ; symbol: A), often shortened to amp, is the unit of electric current in the International System of Units (SI). One ampere is equal to 1 coulomb (C) moving past a point per second. It is named after French mathematician and physicist André-Marie Ampère (1775–1836), considered the father of electromagnetism along with Danish physicist Hans Christian Ørsted.
As of the 2019 revision of the SI, the ampere is defined by fixing the elementary charge to be exactly 1.602176634 × 10⁻¹⁹ C, which means an ampere is an electric current equivalent to 10¹⁹ elementary charges moving every 1.602176634 seconds, or about 6.24 × 10¹⁸ elementary charges moving in a second. Prior to the redefinition, the ampere was defined as the current passing through two parallel wires 1 metre apart that produces a magnetic force of 2 × 10⁻⁷ newtons per metre.
The earlier CGS system has two units of current, one structured similarly to the SI's and the other using Coulomb's law as a fundamental relationship, with the CGS unit of charge defined by measuring the force between two charged metal plates. The CGS unit of current is then defined as one unit of charge per second.
History
The ampere is named for French physicist and mathematician André-Marie Ampère (1775–1836), who studied electromagnetism and laid the foundation of electrodynamics. In recognition of Ampère's contributions to the creation of modern electrical science, an international convention, signed at the 1881 International Exposition of Electricity, established the ampere as a standard unit of electrical measurement for electric current.
The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized.
The "international ampere" was an early realization of the ampere, defined as the current that would deposit of silver per second from a silver nitrate solution. Later, more accurate measurements revealed that this current is .
Since power is defined as the product of current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship P = IV, and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance.
Former definition in the SI
Until 2019, the SI defined the ampere as follows:
The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newtons per metre of length.
Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere.
The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s.
In general, charge Q was determined by a steady current I flowing for a time t as Q = It.
This definition of the ampere was most accurately realised using a Kibble balance, but in practice the unit was maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two could be tied to physical phenomena that are relatively easy to reproduce, the Josephson effect and the quantum Hall effect, respectively.
Techniques to establish the realisation of an ampere had a relative uncertainty of approximately a few parts in 10⁷, and involved realisations of the watt, the ohm and the volt.
Present definition
The 2019 revision of the SI defined the ampere by taking the fixed numerical value of the elementary charge e to be 1.602176634 × 10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom.
The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s.
In general, charge Q is determined by a steady current I flowing for a time t as Q = It.
Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A") and the charge accumulated (or passed through a circuit) over a period of time is expressed in coulombs (as in "the battery charge is "). The relation of the ampere (C/s) to the coulomb is the same as that of the watt (J/s) to the joule.
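These relations make unit conversions mechanical. A minimal Python sketch, assuming the exact 2019 value of the elementary charge; the function names are illustrative only:

# Elementary charge in coulombs, exact by definition since the 2019 SI revision.
E_CHARGE = 1.602176634e-19

def charge_from_current(current_amperes, time_seconds):
    # Q = I * t: coulombs transferred by a steady current over a time interval.
    return current_amperes * time_seconds

def elementary_charges_per_second(current_amperes):
    # How many elementary charges pass a point each second at the given current.
    return current_amperes / E_CHARGE

# Example: a 1.2 A charging current for 60 s transfers 72 C of charge,
# and 1 A corresponds to roughly 6.24e18 elementary charges per second.
print(charge_from_current(1.2, 60))
print(elementary_charges_per_second(1.0))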
Units derived from the ampere
The International System of Units (SI) is based on seven SI base units (the second, metre, kilogram, kelvin, ampere, mole, and candela) representing seven fundamental types of physical quantity, or "dimensions" (time, length, mass, temperature, electric current, amount of substance, and luminous intensity, respectively), with all other SI units being defined using these. These SI derived units can either be given special names, e.g. watt, volt, lux, etc., or defined in terms of others, e.g. metre per second. The units with special names derived from the ampere are:
There are also some SI units that are frequently used in the context of electrical engineering and electrical appliances, but are defined independently of the ampere, notably the hertz, joule, watt, candela, lumen, and lux.
SI prefixes
Like other SI units, the ampere can be modified by adding a prefix that multiplies it by a power of 10.
See also
References
External links
The NIST Reference on Constants, Units, and Uncertainty
NIST Definition of ampere and μ0
SI base units
Units of electric current | Ampere | [
"Mathematics"
] | 1,291 | [
"Quantity",
"Units of electric current",
"Units of measurement"
] |
775 | https://en.wikipedia.org/wiki/Algorithm | In mathematics and computer science, an algorithm () is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).
In contrast, a heuristic is an approach to solving problems that do not have well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.
As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
Etymology
Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of said al-Khwarizmi texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath. Hereby, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi". Around 1230, the English word algorism is attested and then by Chaucer in 1391, English adopted the French term. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus.
Definition
One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure or cook-book recipe. In general, a program is an algorithm only if it stops eventually, even though infinite loops may sometimes prove desirable. A stricter definition takes an algorithm to be an explicit set of instructions for determining an output that can be followed by a computing machine, or by a human who can carry out only specific elementary operations on symbols.
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.
History
Ancient algorithms
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes in Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD).
The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to around 2500 BC describes the earliest division algorithm. During the Hammurabi dynasty, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus (c. 1550 BC). Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Examples of ancient Indian mathematics include the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.
The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
Computers
Weight-driven clocks
Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanism producing the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although a full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".
Electromechanical relay
Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape was in use, as were Hollerith cards (c. 1890). Then came the teleprinter with its punched-paper use of Baudot code on tape.
Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".
Formalization
In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
Representations
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form, but are also used to define or document algorithms.
Turing machines
There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description. A high-level description describes qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data in order to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.
Flowchart representation
The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.
Algorithmic analysis
It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1); otherwise O(n) is required.
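The summation algorithm described above is short enough to state directly; a minimal Python sketch that keeps only the running total (the current position is implicit in the loop), so the extra space used is constant:

def sum_list(numbers):
    # O(n) time: each of the n elements is visited exactly once.
    # O(1) extra space: only the running total is stored.
    total = 0
    for value in numbers:
        total += value
    return total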
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
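The difference in cost is visible when the two searches are written side by side; an illustrative Python sketch over a sorted list:

def sequential_search(sorted_list, target):
    # O(n): may have to inspect every element.
    for index, value in enumerate(sorted_list):
        if value == target:
            return index
    return -1

def binary_search(sorted_list, target):
    # O(log n): halves the remaining range on every step.
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1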
Formal versus empirical
The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.
Execution efficiency
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Design
Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operation research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g., an algorithm's run-time growth as the size of its input increases.
Structured programming
Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.
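The canonical structures need no GOTO at all; a short Python sketch of SEQUENCE, IF-THEN-ELSE and WHILE-DO (DO-WHILE has no native Python form and is emulated with a loop that exits on its condition):

def count_above(values, threshold):
    count = 0                        # SEQUENCE: statements run one after another
    index = 0
    while index < len(values):       # WHILE-DO: condition tested before each pass
        if values[index] > threshold:   # IF-THEN-ELSE: two-way branch
            count += 1
        index += 1
    return count

def prompt_once_or_more(get_input, is_valid):
    while True:                      # DO-WHILE emulation: body runs at least once
        value = get_input()
        if is_valid(value):
            return value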
Legal status
By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).
Classification
By implementation
Recursion
A recursive algorithm invokes itself repeatedly until meeting a termination condition, and is a common functional programming method. Iterative algorithms use repetitions such as loops or data structures like stacks to solve problems. Problems may be suited for one implementation or the other. The Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
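A sketch of both flavors for the Tower of Hanoi, listing the moves: a direct recursive version and an equivalent iterative version that replaces the call stack with an explicit stack (the standard mechanical transformation):

def hanoi_recursive(n, source, target, spare, moves):
    # Move n disks from source to target, using spare as scratch space.
    if n == 0:
        return
    hanoi_recursive(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi_recursive(n - 1, spare, target, source, moves)

def hanoi_iterative(n, source, target, spare):
    # Same algorithm; an explicit stack of pending frames replaces recursion.
    moves = []
    stack = [("solve", n, source, target, spare)]
    while stack:
        frame = stack.pop()
        if frame[0] == "move":
            moves.append((frame[1], frame[2]))
            continue
        _, count, src, dst, tmp = frame
        if count == 0:
            continue
        # Pushed in reverse so the frames run in the recursive order:
        # solve(n-1, src->tmp), move src->dst, solve(n-1, tmp->dst).
        stack.append(("solve", count - 1, tmp, dst, src))
        stack.append(("move", src, dst))
        stack.append(("solve", count - 1, src, tmp, dst))
    return moves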
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decision at every step; whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. For example, the Knapsack problem, where there is a set of items and the goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider weights of items as well as their value.
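A hedged illustration of trading exactness for speed on this problem: a simple greedy pass that takes items in order of value per unit of weight (weights assumed positive). This is a heuristic sketch only; without the usual refinement of also comparing against the single most valuable item, it carries no approximation guarantee.

def greedy_knapsack(items, capacity):
    # items: list of (weight, value) pairs; capacity: the fixed weight limit X.
    chosen, total_value, remaining = [], 0, capacity
    for weight, value in sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True):
        if weight <= remaining:        # take the densest items that still fit
            chosen.append((weight, value))
            remaining -= weight
            total_value += value
    return total_value, chosen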
Quantum algorithm
Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum or use some essential feature of Quantum computing such as quantum superposition or quantum entanglement.
By design paradigm
Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are:
Brute-force or exhaustive search
Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords.
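A brute-force sketch in that spirit, systematically testing every candidate until one succeeds; the check function and the four-digit PIN setting are illustrative assumptions:

from itertools import product

def crack_pin(is_correct, digits="0123456789", length=4):
    # Exhaustive search: up to len(digits) ** length attempts.
    for attempt in product(digits, repeat=length):
        candidate = "".join(attempt)
        if is_correct(candidate):
            return candidate
    return None

# Usage: crack_pin(lambda pin: pin == "4096") returns "4096" after trying candidates in order.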
Divide and conquer
A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sorting is an example of divide and conquer, where an unordered list can be divided into segments containing one item and sorting of entire list can be obtained by merging the segments. A simpler variant of divide and conquer is called a decrease-and-conquer algorithm, which solves one smaller instance of itself, and uses the solution to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems and so the conquer stage is more complex than decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm.
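A compact merge sort sketch showing the divide (split into halves) and conquer (merge sorted halves) steps described above:

def merge_sort(items):
    if len(items) <= 1:                 # lists of length 0 or 1 are already sorted
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0             # conquer: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged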
Search and enumeration
Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
Randomized algorithm
Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithm for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms:
Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP.
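An illustrative sketch of each class: a Monte Carlo estimate of π, whose fixed-time answer is only probably close to the truth, and a Las Vegas randomized selection, whose answer is always correct but whose running time depends on the random pivot choices:

import random

def monte_carlo_pi(samples=100_000):
    # Monte Carlo: bounded running time, probabilistically accurate result.
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

def las_vegas_select(items, k):
    # Las Vegas: always returns the k-th smallest element (0-based);
    # only the running time is left to chance.
    items = list(items)
    while True:
        pivot = random.choice(items)
        lows = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        if k < len(lows):
            items = lows
        elif k < len(lows) + len(equal):
            return pivot
        else:
            k -= len(lows) + len(equal)
            items = [x for x in items if x > pivot]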
Reduction of complexity
This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
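The median example reads almost directly as code: sort first (the expensive step), then take the middle (the cheap step). A minimal sketch:

def median_by_sorting(numbers):
    # Transform "find the median" into the better-understood problem of sorting.
    ordered = sorted(numbers)                     # O(n log n), dominates the cost
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]                       # odd count: the middle element
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of the middle pair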
Back tracking
In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
Optimization problems
For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
Linear programming
When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
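In practice such problems are handed to a solver. A sketch using SciPy's linprog, assuming SciPy is installed; the objective and constraints are invented purely for illustration, and linprog minimizes, so the objective is negated to maximize:

from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
result = linprog(c=[-3, -2],
                 A_ub=[[1, 1], [1, 3]],
                 b_ub=[4, 6],
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal point and maximum objective value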
Dynamic programming
When a problem shows optimal substructure, meaning the optimal solution can be constructed from optimal solutions to subproblems, and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest paths to the goal from all adjacent vertices. Dynamic programming and memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial.
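A sketch of the Floyd–Warshall recurrence, in which dist[i][j] is repeatedly improved by allowing one more vertex k to serve as an intermediate stop:

def floyd_warshall(weights):
    # weights: n-by-n matrix with edge weights, float('inf') where no edge exists,
    # and 0 on the diagonal. Returns all-pairs shortest-path distances.
    n = len(weights)
    dist = [row[:] for row in weights]      # copy so the input is left untouched
    for k in range(n):                      # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist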
The greedy method
Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems they always find the optimal solution but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimal spanning trees of graphs without negative cycles. Huffman Tree, Kruskal, Prim, and Sollin are greedy algorithms that can solve this optimization problem.
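A sketch of Kruskal's greedy strategy: repeatedly take the cheapest remaining edge that does not close a cycle, with cycle detection done by a small union–find structure:

def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v) tuples, vertices numbered 0..num_vertices-1.
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for weight, u, v in sorted(edges):      # greedy: cheapest edges first
        root_u, root_v = find(u), find(v)
        if root_u != root_v:                # skip edges that would create a cycle
            parent[root_u] = root_v
            tree.append((u, v, weight))
    return tree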
The heuristic method
In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
Examples
One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as:
High-level description:
If a set of numbers is empty, then there is no highest number.
Assume the first number in the set is the largest.
For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest.
When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set.
(Quasi-)formal description:
Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
if item > largest, then
largest ← item
return largest
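The same pseudocode translates nearly line for line into runnable Python:

def largest_number(numbers):
    # Returns None for an empty list, mirroring "return null" above.
    if len(numbers) == 0:
        return None
    largest = numbers[0]
    for item in numbers:
        if item > largest:
            largest = item
    return largest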
See also
Abstract machine
ALGOL
Algorithm aversion
Algorithm engineering
Algorithm characterizations
Algorithmic bias
Algorithmic composition
Algorithmic entities
Algorithmic synthesis
Algorithmic technique
Algorithmic topology
Computational mathematics
Garbage in, garbage out
Introduction to Algorithms (textbook)
Government by algorithm
List of algorithms
List of algorithm general topics
Medium is the message
Regulation of algorithms
Theory of computation
Computability theory
Computational complexity theory
Notes
Bibliography
Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw–Hill Book Company, New York.
Includes a bibliography of 56 references.
: cf. Chapter 3 Turing machines where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes.
Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name.
Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.
Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pp. 77–111. Includes bibliography of 33 sources.
, 3rd edition 1976[?], (pbk.)
Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result).
Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis"(Kleene 1952:317) (i.e., the Church thesis).
Kosovsky, N.K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981
A.A. Markov (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algerifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS .]
Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
Reprinted in The Undecidable, pp. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis.
Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable)
Cf. in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
. Corrections, ibid, vol. 43(1937) pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College Cambridge UK.
Reprinted in The Undecidable, pp. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton.
United States Patent and Trademark Office (2006), 2106.02 **>Mathematical Algorithms: 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006
Zaslavsky, C. (1970). Mathematics of the Yoruba People and of Their Neighbors in Southern Nigeria. The Two-Year College Mathematics Journal, 1(2), 76–99. https://doi.org/10.2307/3027363
Further reading
Jon Kleinberg, Éva Tardos (2006): Algorithm Design, Pearson/Addison-Wesley, ISBN 978-0-32129535-4
Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms . Stanford, California: Center for the Study of Language and Information.
Knuth, Donald E. (2010). Selected Papers on Design of Algorithms . Stanford, California: Center for the Study of Language and Information.
External links
Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology
Algorithm repositories
The Stony Brook Algorithm Repository – State University of New York at Stony Brook
Collected Algorithms of the ACM – Associations for Computing Machinery
The Stanford GraphBase – Stanford University
Articles with example pseudocode
Mathematical logic
Theoretical computer science | Algorithm | [
"Mathematics"
] | 6,223 | [
"Theoretical computer science",
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
782 | https://en.wikipedia.org/wiki/Mouthwash | Mouthwash, mouth rinse, oral rinse, or mouth bath is a liquid which is held in the mouth passively or swirled around the mouth by contraction of the perioral muscles and/or movement of the head, and may be gargled, where the head is tilted back and the liquid bubbled at the back of the mouth.
Usually mouthwashes are antiseptic solutions intended to reduce the microbial load in the mouth, although other mouthwashes might be given for other reasons such as for their analgesic, anti-inflammatory or anti-fungal action. Additionally, some rinses act as saliva substitutes to neutralize acid and keep the mouth moist in xerostomia (dry mouth). Cosmetic mouthrinses temporarily control or reduce bad breath and leave the mouth with a pleasant taste.
Rinsing with water or mouthwash after brushing with a fluoride toothpaste can reduce the availability of salivary fluoride. This can lower the anti-cavity re-mineralization and antibacterial effects of fluoride. Fluoridated mouthwash may mitigate this effect or in high concentrations increase available fluoride, but is not as cost-effective as leaving the fluoride toothpaste on the teeth after brushing. A group of experts discussing post brushing rinsing in 2012 found that although there was clear guidance given in many public health advice publications to "spit, avoid rinsing with water/excessive rinsing with water" they believed there was a limited evidence base for best practice.
Use
Common use involves rinsing the mouth with about of mouthwash. The wash is typically swished or gargled for about half a minute and then spat out. Most companies suggest not drinking water immediately after using mouthwash. In some brands, the expectorate is stained, so that one can see the bacteria and debris.
Mouthwash should not be used immediately after brushing the teeth so as not to wash away the beneficial fluoride residue left from the toothpaste. Similarly, the mouth should not be rinsed out with water after brushing. Patients were told to "spit don't rinse" after toothbrushing as part of a National Health Service campaign in the UK. A fluoride mouthrinse can be used at a different time of the day to brushing.
Gargling is where the head is tilted back, allowing the mouthwash to sit in the back of the mouth while exhaling, causing the liquid to bubble. Gargling is practiced in Japan for perceived prevention of viral infection. One commonly used way is with infusions or tea. In some cultures, gargling is usually done in private, typically in a bathroom at a sink so the liquid can be rinsed away.
Dangerous misuse
Serious harm and even death can quickly result from ingestion, due to the high alcohol content and other ingredients harmful if swallowed that are present in some brands of mouthwash. Zero percent alcohol mouthwashes do exist, as well as many other formulations for different needs (covered in the above sections).
These risks may be higher in toddlers and young children if they are allowed to use toothpaste and/or mouthwash unsupervised, where they may swallow it. Misuse in this way can be avoided with parental supervision and by using child-safe formulations or a children's brand of mouthwash.
Surrogate alcohol use such as ingestion of mouthwash is a common cause of death among homeless people during winter months, because a person can feel warmer after drinking it.
Effects
The most-commonly-used mouthwashes are commercial antiseptics, which are used at home as part of an oral hygiene routine. Mouthwashes combine ingredients to treat a variety of oral conditions. Variations are common, and mouthwash has no standard formulation, so its use and recommendation involves concerns about patient safety. Some manufacturers of mouthwash state that their antiseptic and antiplaque mouthwashes kill the bacterial plaque that causes cavities, gingivitis, and bad breath. It is, however, generally agreed that the use of mouthwash does not eliminate the need for both brushing and flossing. The American Dental Association asserts that regular brushing and proper flossing are enough in most cases, in addition to regular dental check-ups, although they approve many mouthwashes.
For many patients, however, the mechanical methods could be tedious and time-consuming, and, additionally, some local conditions may render them especially difficult. Chemotherapeutic agents, including mouthwashes, could have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor.
Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, sensation of a dry mouth, etc. Alcohol-containing mouthwashes may make dry mouth and halitosis worse, as they dry out the mouth. Soreness, ulceration and redness may sometimes occur (e.g., aphthous stomatitis or allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients, such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. saltwater), or foregoing mouthwash entirely.
Prescription mouthwashes are used prior to and after oral surgery procedures, such as tooth extraction, or to treat the pain associated with mucositis caused by radiation therapy or chemotherapy. They are also prescribed for aphthous ulcers, other oral ulcers, and other mouth pain. "Magic mouthwashes" are prescription mouthwashes compounded in a pharmacy from a list of ingredients specified by a doctor. Despite a lack of evidence that prescription mouthwashes are more effective in decreasing the pain of oral lesions, many patients and prescribers continue to use them. There has been only one controlled study to evaluate the efficacy of magic mouthwash; it shows no difference in efficacy between the most common magic-mouthwash formulation, on the one hand, and commercial mouthwashes (such as chlorhexidine) or a saline/baking soda solution, on the other. Current guidelines suggest that saline solution is just as effective as magic mouthwash in pain relief and in shortening the healing time of oral mucositis from cancer therapies.
History
The first known references to mouth rinsing are in Ayurveda, for the treatment of gingivitis. Later, in the Greek and Roman periods, mouth rinsing following mechanical cleansing became common among the upper classes, and Hippocrates recommended a mixture of salt, alum, and vinegar. The Jewish Talmud, dating back about 1,800 years, suggests a cure for gum ailments containing "dough water" and olive oil. The ancient Chinese also gargled salt water, tea and wine as a form of mouthwash after meals, due to the antiseptic properties of those liquids.
Before Europeans came to the Americas, Native North American and Mesoamerican cultures used mouthwashes, often made from plants such as Coptis trifolia. Peoples of the Americas used salt water mouthwashes for sore throats, and other mouthwashes for problems such as teething and mouth ulcers.
Anton van Leeuwenhoek, the famous 17th-century microscopist, discovered living organisms (living, because they were mobile) in deposits on the teeth (what we now call dental plaque). He also found organisms in water from the canal next to his home in Delft. He experimented with samples by adding vinegar or brandy and found that this resulted in the immediate immobilization or killing of the organisms suspended in water. He then rinsed his own mouth, and that of another person, with a mouthwash containing vinegar or brandy and found that living organisms remained in the dental plaque. He concluded—correctly—that the mouthwash either did not reach the plaque organisms or was not present long enough to kill them.
In 1892, German Richard Seifert invented mouthwash product Odol, which was produced by company founder Karl August Lingner (1861–1916) in Dresden.
That remained the state of affairs until the late 1960s when Harald Loe (at the time a professor at the Royal Dental College in Aarhus, Denmark) demonstrated that a chlorhexidine compound could prevent the build-up of dental plaque. The reason for chlorhexidine's effectiveness is that it strongly adheres to surfaces in the mouth and thus remains present in effective concentrations for many hours.
Since then, commercial interest in mouthwashes has been intense, and several newer products claim effectiveness in reducing the build-up of dental plaque and the associated severity of gingivitis, in addition to fighting bad breath. Many of these solutions aim to control the volatile sulfur compound–creating anaerobic bacteria that live in the mouth and excrete substances that lead to bad breath and unpleasant mouth taste. For example, the number of mouthwash variants in the United States of America has grown from 15 (1970) to 66 (1998) to 113 (2012).
Research
Research in the field of microbiotas shows that only a limited set of microbes cause tooth decay, with most of the bacteria in the human mouth being harmless. Focused attention on cavity-causing bacteria such as Streptococcus mutans has led to research into new mouthwash treatments that prevent these bacteria from initially growing. While current mouthwash treatments must be used with a degree of frequency to prevent these bacteria from regrowing, future treatments could provide a viable long-term solution.
A clinical trial and laboratory studies have shown that alcohol-containing mouthwash could reduce the growth of Neisseria gonorrhoeae in the pharynx. However, subsequent trials have found that there was no difference in gonorrhoea cases among men using daily mouthwash compared to those who did not use mouthwash for 12 weeks.
Ingredients
Alcohol
Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol, which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide "bite". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing, although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or, indeed, be the sole cause of halitosis in other individuals. Alcohol in mouthwashes may act as a carcinogen (cancer-inducing agent) in some cases. Many newer brands of mouthwash are alcohol-free, not just in response to consumer concerns about oral cancer, but also to cater for religious groups who abstain from alcohol consumption.
Benzydamine (analgesic)
In painful oral conditions such as aphthous stomatitis, analgesic mouthrinses (e.g. benzydamine mouthwash, or "Difflam") are sometimes used to ease pain, commonly used before meals to reduce discomfort while eating.
Benzoic acid
Benzoic acid acts as a buffer.
Betamethasone
Betamethasone is sometimes used as an anti-inflammatory, corticosteroid mouthwash. It may be used for severe inflammatory conditions of the oral mucosa such as the severe forms of aphthous stomatitis.
Cetylpyridinium chloride (antiseptic, antimalodor)
Cetylpyridinium chloride-containing mouthwash (e.g. 0.05%) is used in some specialized mouthwashes for halitosis. Cetylpyridinium chloride mouthwash has less anti-plaque effect than chlorhexidine and may cause staining of teeth, or sometimes an oral burning sensation or ulceration.
Chlorhexidine digluconate and hexetidine (antiseptic)
Chlorhexidine digluconate is a chemical antiseptic and is used in a 0.05–0.2% solution as a mouthwash. There is no evidence to support that higher concentrations are more effective in controlling dental plaque and gingivitis. A randomized clinical trial conducted at Rabat University in Morocco found better results in plaque inhibition when chlorhexidine with an alcohol base 0.12% was used, compared to an alcohol-free 0.1% chlorhexidine mouthrinse.
Chlorhexidine has good substantivity (the ability of a mouthwash to bind to hard and soft tissues in the mouth). It has anti-plaque action, and also some anti-fungal action. It is especially effective against Gram-negative rods. The proportion of Gram-negative rods increases as gingivitis develops, so it is also used to reduce gingivitis. It is sometimes used as an adjunct to prevent dental caries and to treat periodontal disease, although it does not penetrate into periodontal pockets well. Chlorhexidine mouthwash alone is unable to prevent plaque, so it is not a substitute for regular toothbrushing and flossing. Instead, chlorhexidine mouthwash is more effective when used as an adjunctive treatment with toothbrushing and flossing. In the short term, if toothbrushing is impossible due to pain, as may occur in primary herpetic gingivostomatitis, chlorhexidine mouthwash is used as a temporary substitute for other oral hygiene measures. It is not suited for use in acute necrotizing ulcerative gingivitis, however. Rinsing with chlorhexidine mouthwash before and after a tooth extraction may reduce the risk of a dry socket. Other uses of chlorhexidine mouthwash include prevention of oral candidiasis in immunocompromised persons, treatment of denture-related stomatitis, mucosal ulceration/erosions and oral mucosal lesions, general burning sensation and many other uses.
Chlorhexidine mouthwash is known to have minor adverse effects. Chlorhexidine binds to tannins, meaning that prolonged use in persons who consume coffee, tea or red wine is associated with extrinsic staining (i.e. removable staining) of teeth. A systematic review of commercial chlorhexidine products with anti-discoloration systems (ADSs) found that the ADSs were able to reduce tooth staining without affecting the beneficial effects of chlorhexidine. Chlorhexidine mouthwash can also cause taste disturbance or alteration. Chlorhexidine is rarely associated with other issues like overgrowth of enterobacteria in persons with leukemia, desquamation, irritation, and stomatitis of oral mucosa, salivary gland pain and swelling, and hypersensitivity reactions including anaphylaxis.
Hexetidine also has anti-plaque, analgesic, astringent and anti-malodor properties, but is considered an inferior alternative to chlorhexidine.
Chlorine dioxide
In dilute concentrations, chlorine dioxide is an ingredient that acts as an antiseptic agent in some mouthwashes.
Edible oils
In traditional Ayurvedic medicine, the use of oil mouthwashes is called "Kavala" ("oil swishing") or "Gandusha", and this practice has more recently been re-marketed by the complementary and alternative medicine industry as "oil pulling". Its promoters claim it works by "pulling out" "toxins", which are known as ama in Ayurvedic medicine, and thereby reducing inflammation. Ayurvedic literature claims that oil pulling is capable of improving oral and systemic health, including a benefit in conditions such as headaches, migraines, diabetes mellitus, asthma, and acne, as well as whitening teeth.
Oil pulling has received little study and there is little evidence to support claims made by the technique's advocates. When compared with chlorhexidine in one small study, it was found to be less effective at reducing oral bacterial load, and the other health claims of oil pulling have failed scientific verification or have not been investigated. There is a report of lipid pneumonia caused by accidental inhalation of the oil during oil pulling.
The mouth is rinsed with approximately one tablespoon of oil for 10–20 minutes, and the oil is then spat out. Sesame oil, coconut oil and ghee are traditionally used, but newer oils such as sunflower oil are also used.
Essential oils
Essential oil constituents with some antibacterial properties include phenolic compounds and monoterpenes such as eucalyptol, eugenol, hinokitiol, menthol, phenol, and thymol.
Essential oils are oils which have been extracted from plants. Mouthwashes based on essential oils could be more effective than traditional mouthcare as anti-gingival treatments. They have been found effective in reducing halitosis, and are being used in several commercial mouthwashes.
Fluoride (anticavity)
Anti-cavity mouthwashes contain fluoride compounds (such as sodium fluoride, stannous fluoride, or sodium monofluorophosphate) to protect against tooth decay. Fluoride-containing mouthwashes are used to prevent dental caries in individuals considered at higher risk of tooth decay, whether due to xerostomia related to salivary dysfunction or side effects of medication, to not drinking fluoridated water, or to being physically unable to care for their oral needs (brushing and flossing), and as treatment for those with dentinal hypersensitivity or gingival recession/root exposure.
Flavoring agents and xylitol
Flavoring agents include sweeteners such as sorbitol, sucralose, sodium saccharin, and xylitol, which stimulate salivary function through their sweetness and taste and help restore the mouth to a neutral level of acidity.
Xylitol rinses double as a bacterial inhibitor and have been used as a substitute for alcohol to avoid the dry mouth associated with alcohol.
Hydrogen peroxide
Hydrogen peroxide can be used as an oxidizing mouthwash (e.g. Peroxyl, 1.5%). It kills anaerobic bacteria and also has a mechanical cleansing action when it froths on contact with debris in the mouth. It is often used in the short term to treat acute necrotising ulcerative gingivitis. Side effects can occur with prolonged use, including hypertrophy of the lingual papillae.
Lactoperoxidase (saliva substitute)
Enzymes and non-enzymatic proteins, such as lactoperoxidase, lysozyme, and lactoferrin, have been used in mouthwashes (e.g., Biotene) to reduce levels of oral bacteria, and, hence, of the acids produced by these bacteria.
Lidocaine/xylocaine
Oral lidocaine is useful for the treatment of mucositis symptoms (inflammation of mucous membranes) induced by radiation or chemotherapy. There is evidence, from testing in patients with oral mucositis who underwent a bone marrow transplant, that lidocaine anesthetic mouthwash can be systemically absorbed.
Methyl salicylate
Methyl salicylate functions as an antiseptic, anti-inflammatory, and analgesic agent, a flavoring, and a fragrance. Methyl salicylate has some anti-plaque action, but less than chlorhexidine. Methyl salicylate does not stain teeth.
Nystatin
Nystatin suspension is an antifungal ingredient used for the treatment of oral candidiasis.
Potassium oxalate
A randomized clinical trial found promising results in controlling and reducing dentine hypersensitivity when potassium oxalate mouthwash was used in conjunction with toothbrushing.
Povidone/iodine (PVP-I)
A 2005 study found that gargling three times a day with simple water or with a povidone-iodine solution was effective in preventing upper respiratory infection and decreasing the severity of symptoms if contracted. Other sources attribute the benefit to a simple placebo effect.
PVP-I in general covers "a wider virucidal spectrum, covering both enveloped and nonenveloped viruses, than the other commercially available antiseptics", which also includes the novel SARS-CoV-2 virus.
Sanguinarine
Sanguinarine-containing mouthwashes are marketed as anti-plaque and anti-malodor treatments. Sanguinarine is a toxic alkaloid herbal extract, obtained from plants such as Sanguinaria canadensis (bloodroot), Argemone mexicana (Mexican prickly poppy), and others. However, its use is strongly associated with the development of leukoplakia (a white patch in the mouth), usually in the buccal sulcus. This type of leukoplakia has been termed "sanguinaria-associated keratosis", and more than 80% of people with leukoplakia in the vestibule of the mouth have used this substance. Upon stopping contact with the causative substance, the lesions may persist for years. Although this type of leukoplakia may show dysplasia, the potential for malignant transformation is unknown. Ironically, elements within the complementary and alternative medicine industry promote the use of sanguinaria as a therapy for cancer.
Sodium bicarbonate (baking soda)
Sodium bicarbonate is sometimes combined with salt to make a simple homemade mouthwash, indicated for any of the reasons that a saltwater mouthwash might be used. Pre-mixed mouthwashes of 1% sodium bicarbonate and 1.5% sodium chloride in aqueous solution are marketed, although pharmacists will easily be able to produce such a formulation from the base ingredients when required. Sodium bicarbonate mouthwash is sometimes used to remove viscous saliva and to aid visualization of the oral tissues during examination of the mouth.
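The compounding arithmetic behind such a formulation is simple percentage-by-volume math. The following is a minimal sketch, assuming the stated 1% and 1.5% figures are weight/volume (1% = 1 g per 100 ml); the function name and the 500 ml batch size are illustrative, not taken from any pharmacopoeia.

# Minimal sketch: grams of solute needed for a given batch of mouthwash,
# assuming the percentages are weight/volume (1% = 1 g per 100 ml).
def grams_needed(percent_w_v, batch_volume_ml):
    return percent_w_v * batch_volume_ml / 100.0

batch_ml = 500  # illustrative batch size
print(grams_needed(1.0, batch_ml))   # 5.0 g sodium bicarbonate
print(grams_needed(1.5, batch_ml))   # 7.5 g sodium chloride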
Sodium chloride (salt)
Saline has a mechanical cleansing action and an antiseptic action, as it is a hypertonic solution in relation to bacteria, which undergo lysis. The heat of the solution produces a therapeutic increase in blood flow (hyperemia) to the surgical site, promoting healing. Hot saltwater mouthwashes also encourage the draining of pus from dental abscesses. In contrast, if heat is applied on the side of the face (e.g., hot water bottle) rather than inside the mouth, it may cause a dental abscess to drain extra-orally, which is later associated with an area of fibrosis on the face.
Saltwater mouthwashes are also routinely used after oral surgery, to keep food debris out of healing wounds and to prevent infection. Some oral surgeons consider saltwater mouthwashes the mainstay of wound cleanliness after surgery. In dental extractions, hot saltwater mouthbaths should start about 24 hours after a dental extraction. The term mouth bath implies that the liquid is passively held in the mouth, rather than vigorously swilled around (which could dislodge a blood clot). Once the blood clot has stabilized, the mouthwash can be used more vigorously. These mouthwashes tend to be advised for use about 6 times per day, especially after meals (to remove food from the socket).
Sodium lauryl sulfate (foaming agent)
Sodium lauryl sulfate (SLS) is used as a foaming agent in many oral hygiene products, including many mouthwashes. It may be advisable to use mouthwash at least an hour after brushing with a toothpaste that contains SLS, since the anionic compounds in the SLS toothpaste can deactivate cationic agents present in the mouthwash.
Sucralfate
Sucralfate is a mucosal coating agent, composed of an aluminum salt of sulfated sucrose. It is not recommended for use in the prevention of oral mucositis in head and neck cancer patients receiving radiotherapy or chemoradiation, due to a lack of efficacy found in a well-designed, randomized controlled trial.
Tetracycline (antibiotic)
Tetracycline is an antibiotic which may sometimes be used as a mouthwash in adults (it causes red staining of teeth in children). It is sometimes used for herpetiform ulceration (an uncommon type of aphthous stomatitis), but prolonged use may lead to oral candidiasis, as the fungal population of the mouth overgrows in the absence of enough competing bacteria. Similarly, minocycline mouthwashes of 0.5% concentration can relieve symptoms of recurrent aphthous stomatitis. Erythromycin is similar.
Tranexamic acid
A 4.8% tranexamic acid solution is sometimes used as an antifibrinolytic mouthwash to prevent bleeding during and after oral surgery in persons with coagulopathies (clotting disorders) or who are taking anticoagulants (blood thinners such as warfarin).
Triclosan
Triclosan is a non-ionic chlorinated bisphenol antiseptic found in some mouthwashes. When used in mouthwash (e.g. 0.03%), there is moderate substantivity, broad-spectrum antibacterial action, some antifungal action, and a significant anti-plaque effect, especially when combined with a copolymer or zinc citrate. Triclosan does not cause staining of the teeth. The safety of triclosan has been questioned.
Zinc
Astringents like zinc chloride provide a pleasant-tasting sensation and shrink tissues. Zinc, when used in combination with other antiseptic agents, can limit the buildup of tartar.
See also
Sodium fluoride/malic acid
Virucide
References
External links
Article on Bad-Breath Prevention Products – from MSNBC
Mayo Clinic Q&A on Magic Mouthwash for chemotherapy sores
American Dental Association article on mouthwash
Dentifrices
Oral hygiene
Drug delivery devices
Dosage forms | Mouthwash | [
"Chemistry"
] | 5,581 | [
"Pharmacology",
"Drug delivery devices"
] |
791 | https://en.wikipedia.org/wiki/Asteroid | An asteroid is a minor planet—an object that is neither a true planet nor an identified comet— that orbits within the inner Solar System. They are rocky, metallic, or icy bodies with no atmosphere, classified as C-type (carbonaceous), M-type (metallic), or S-type (silicaceous). The size and shape of asteroids vary significantly, ranging from small rubble piles under a kilometer across and larger than meteoroids, to Ceres, a dwarf planet almost 1000 km in diameter. A body is classified as a comet, not an asteroid, if it shows a coma (tail) when warmed by solar radiation, although recent observations suggest a continuum between these types of bodies.
Of the roughly one million known asteroids, the greatest number are located between the orbits of Mars and Jupiter, approximately 2 to 4 AU from the Sun, in a region known as the main asteroid belt. The total mass of all the asteroids combined is only 3% that of Earth's Moon. The majority of main belt asteroids follow slightly elliptical, stable orbits, revolving in the same direction as the Earth and taking from three to six years to complete a full circuit of the Sun.
Asteroids have historically been observed from Earth. The first close-up observation of an asteroid was made by the Galileo spacecraft. Several dedicated missions to asteroids were subsequently launched by NASA and JAXA, with plans for other missions in progress. NASA's NEAR Shoemaker studied Eros, and Dawn observed Vesta and Ceres. JAXA's missions Hayabusa and Hayabusa2 studied and returned samples of Itokawa and Ryugu, respectively. OSIRIS-REx studied Bennu, collecting a sample in 2020 which was delivered back to Earth in 2023. NASA's Lucy, launched in 2021, is tasked with studying ten different asteroids, two from the main belt and eight Jupiter trojans. Psyche, launched October 2023, aims to study the metallic asteroid Psyche.
Near-Earth asteroids have the potential for catastrophic consequences if they strike Earth, with a notable example being the Chicxulub impact, widely thought to have induced the Cretaceous–Paleogene mass extinction. As an experiment to meet this danger, in September 2022 the Double Asteroid Redirection Test spacecraft successfully altered the orbit of the non-threatening asteroid Dimorphos by crashing into it.
Terminology
In 2006, the International Astronomical Union (IAU) introduced the currently preferred broad term small Solar System body, defined as an object in the Solar System that is neither a planet, a dwarf planet, nor a natural satellite; this includes asteroids, comets, and more recently discovered classes. According to IAU, "the term 'minor planet' may still be used, but generally, 'Small Solar System Body' will be preferred."
Historically, the first discovered asteroid, Ceres, was at first considered a new planet. It was followed by the discovery of other similar bodies, which with the equipment of the time appeared to be points of light like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term asteroid, coined in Greek as ἀστεροειδής, or asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the 19th century, the terms asteroid and planet (not always qualified as "minor") were still used interchangeably.
Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. The term asteroid, never officially defined, can be informally used to mean "an irregularly shaped rocky body orbiting the Sun that does not qualify as a planet or a dwarf planet under the IAU definitions". The main difference between an asteroid and a comet is that a comet shows a coma (tail) due to sublimation of its near-surface ices by solar radiation. A few objects were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; highly eccentric asteroids are probably dormant or extinct comets.
The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term asteroid to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.
For almost two centuries after the discovery of Ceres in 1801, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few, such as 944 Hidalgo, ventured farther for part of their orbit. Starting in 1977 with 2060 Chiron, astronomers discovered small bodies that permanently resided further out than Jupiter, now called centaurs. In 1992, 15760 Albion was discovered, the first object beyond the orbit of Neptune (other than Pluto); soon large numbers of similar objects were observed, now called trans-Neptunian objects. Further out are Kuiper-belt objects, scattered-disc objects, and the much more distant Oort cloud, hypothesized to be the main reservoir of dormant comets. They inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies exhibit little cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets.
The Kuiper-belt bodies are called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.
In 2006, the IAU created the class of dwarf planets for the largest minor planets—those massive enough to have become ellipsoidal under their own gravity. Only the largest object in the asteroid belt has been placed in this category: Ceres, at about 940 km across.
History of observations
Despite their large numbers, asteroids are a relatively recent discovery, with the first one—Ceres—only being identified in 1801. Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye in dark skies when it is favorably positioned. Rarely, small asteroids passing close to Earth may be briefly visible to the naked eye. The Minor Planet Center had data on 1,199,224 minor planets in the inner and outer Solar System, of which about 614,690 had enough information to be given numbered designations.
Discovery of Ceres
In 1772, German astronomer Johann Elert Bode, citing Johann Daniel Titius, published a numerical procession known as the Titius–Bode law (now discredited). Except for an unexplained gap between Mars and Jupiter, Bode's formula seemed to predict the orbits of the known planets. He wrote the following explanation for the existence of a "missing planet":
This latter point seems in particular to follow from the astonishing relation which the known six planets observe in their distances from the Sun. Let the distance from the Sun to Saturn be taken as 100, then Mercury is separated by 4 such parts from the Sun. Venus is 4 + 3 = 7. The Earth 4 + 6 = 10. Mars 4 + 12 = 16. Now comes a gap in this so orderly progression. After Mars there follows a space of 4 + 24 = 28 parts, in which no planet has yet been seen. Can one believe that the Founder of the universe had left this space empty? Certainly not. From here we come to the distance of Jupiter by 4 + 48 = 52 parts, and finally to that of Saturn by 4 + 96 = 100 parts.
Bode's formula predicted another planet would be found with an orbital radius near 2.8 astronomical units (AU), or 420 million km, from the Sun. The Titius–Bode law got a boost with William Herschel's discovery of Uranus near the predicted distance for a planet beyond Saturn. In 1800, a group headed by Franz Xaver von Zach, editor of the German astronomical journal Monatliche Correspondenz (Monthly Correspondence), sent requests to 24 experienced astronomers (whom he dubbed the "celestial police"), asking that they combine their efforts and begin a methodical search for the expected planet. Although they did not discover Ceres, they later found the asteroids 2 Pallas, 3 Juno and 4 Vesta.
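The arithmetic in Bode's quotation can be restated compactly: each planet's distance in his "parts" is 4 + 3 × 2^n, and 10 parts correspond to 1 AU. The sketch below reproduces the progression as an illustration of the (now-discredited) rule; the helper name is chosen here and is not a historical formula implementation.

# Titius-Bode progression as quoted above: distance = (4 + 3 * 2**n) parts,
# with Mercury taken as the bare 4, and 10 parts = 1 AU.
def titius_bode_au(n):
    return (4 if n is None else 4 + 3 * 2 ** n) / 10.0

for name, n in [("Mercury", None), ("Venus", 0), ("Earth", 1), ("Mars", 2),
                ("(gap)", 3), ("Jupiter", 4), ("Saturn", 5), ("Uranus", 6)]:
    print(f"{name:8s} {titius_bode_au(n):5.1f} AU")
# The n = 3 slot gives 2.8 AU -- the "missing planet" distance where Ceres was later found.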
One of the astronomers selected for the search was Giuseppe Piazzi, a Catholic priest at the Academy of Palermo, Sicily. Before receiving his invitation to join the group, Piazzi discovered Ceres on 1 January 1801. He was searching for "the 87th [star] of the Catalogue of the Zodiacal stars of Mr la Caille", but found that "it was preceded by another". Instead of a star, Piazzi had found a moving star-like object, which he first thought was a comet:
The light was a little faint, and of the colour of Jupiter, but similar to many others which generally are reckoned of the eighth magnitude. Therefore I had no doubt of its being any other than a fixed star. [...] The evening of the third, my suspicion was converted into certainty, being assured it was not a fixed star. Nevertheless before I made it known, I waited till the evening of the fourth, when I had the satisfaction to see it had moved at the same rate as on the preceding days.
Piazzi observed Ceres a total of 24 times, the final time on 11 February 1801, when illness interrupted his work. He announced his discovery on 24 January 1801 in letters to only two fellow astronomers, his compatriot Barnaba Oriani of Milan and Bode in Berlin. He reported it as a comet but "since its movement is so slow and rather uniform, it has occurred to me several times that it might be something better than a comet". In April, Piazzi sent his complete observations to Oriani, Bode, and French astronomer Jérôme Lalande. The information was published in the September 1801 issue of the Monatliche Correspondenz.
By this time, the apparent position of Ceres had changed (mostly due to Earth's motion around the Sun), and was too close to the Sun's glare for other astronomers to confirm Piazzi's observations. Toward the end of the year, Ceres should have been visible again, but after such a long time it was difficult to predict its exact position. To recover Ceres, mathematician Carl Friedrich Gauss, then 24 years old, developed an efficient method of orbit determination. In a few weeks, he predicted the path of Ceres and sent his results to von Zach. On 31 December 1801, von Zach and fellow celestial policeman Heinrich W. M. Olbers found Ceres near the predicted position and thus recovered it. At 2.8 AU from the Sun, Ceres appeared to fit the Titius–Bode law almost perfectly; however, Neptune, once discovered in 1846, was 8 AU closer than predicted, leading most astronomers to conclude that the law was a coincidence. Piazzi named the newly discovered object Ceres Ferdinandea, "in honor of the patron goddess of Sicily and of King Ferdinand of Bourbon".
Further search
Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered by von Zach's group over the next few years, with Vesta found in 1807. No new asteroids were discovered until 1845. Amateur astronomer Karl Ludwig Hencke started his searches for new asteroids in 1830, and fifteen years later, while looking for Vesta, he found the asteroid later named 5 Astraea. It was the first new asteroid discovery in 38 years. Carl Friedrich Gauss was given the honor of naming the asteroid. After this, other astronomers joined in; 15 asteroids were found by the end of 1851. In 1868, when James Craig Watson discovered the 100th asteroid, the French Academy of Sciences engraved the faces of Karl Theodor Robert Luther, John Russell Hind, and Hermann Goldschmidt, the three most successful asteroid-hunters at that time, on a commemorative medallion marking the event.
In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, some calling them "vermin of the skies", a phrase variously attributed to Eduard Suess and Edmund Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named.
19th and 20th centuries
In the past, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. A body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations.
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number. The last step is sending the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union.
Naming
By 1851, the Royal Astronomical Society decided that asteroids were being discovered at such a rapid rate that a different system was needed to categorize or name asteroids. In 1852, when de Gasparis discovered the twentieth asteroid, Benjamin Valz gave it a name and a number designating its rank among asteroid discoveries, 20 Massalia. Sometimes asteroids were discovered and not seen again. So, starting in 1892, new asteroids were listed by the year and a capital letter indicating the order in which the asteroid's orbit was calculated and registered within that specific year. For example, the first two asteroids discovered in 1892 were labeled 1892A and 1892B. However, there were not enough letters in the alphabet for all of the asteroids discovered in 1893, so 1893Z was followed by 1893AA. A number of variations of these methods were tried, including designations that included year plus a Greek letter in 1914. A simple chronological numbering system was established in 1925.
Currently all newly discovered asteroids receive a provisional designation consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. 433 Eros). The formal naming convention uses parentheses around the number—e.g. (433) Eros—but dropping the parentheses is quite common. Informally, it is also common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. In addition, names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union.
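As a rough illustration of the provisional-designation scheme just described, the sketch below builds a designation from a discovery date and the sequence of the discovery within its half-month. It is a simplification (the letter "I" is skipped, and historical pre-1925 and survey-specific designations are ignored); the helper name is invented for this example.

# Half-month letters A-Y (skipping I): "A" = Jan 1-15, "B" = Jan 16-31, ...
HALF_MONTH = "ABCDEFGHJKLMNOPQRSTUVWXY"
# Order-within-half-month letters (skipping I); a cycle count is appended
# once the 25 letters are exhausted.
ORDER = "ABCDEFGHJKLMNOPQRSTUVWXYZ"

def provisional_designation(year, month, day, sequence):
    """sequence = 1 for the first discovery announced in that half-month."""
    half = (month - 1) * 2 + (1 if day > 15 else 0)
    cycle, index = divmod(sequence - 1, len(ORDER))
    return f"{year} {HALF_MONTH[half]}{ORDER[index]}{str(cycle) if cycle else ''}"

print(provisional_designation(2024, 1, 3, 1))    # "2024 AA"
print(provisional_designation(2024, 9, 20, 28))  # "2024 SC1"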
Symbols
The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1852 there were two dozen asteroid symbols, which often occurred in multiple variants.
In 1851, after the fifteenth asteroid, Eunomia, had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid. The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years. 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides.
Formation
Many asteroids are the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. It is thought that planetesimals in the asteroid belt evolved much like the rest of the objects in the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately 120 km in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres.
Distribution within the Solar System
Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:
Asteroid belt
The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is estimated to contain between 1.1 and 1.9 million asteroids larger than 1 km in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter.
Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that reaching an asteroid without aiming carefully would be improbable. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has between 700,000 and 1.7 million asteroids with a diameter of 1 km or more. The absolute magnitudes of most of the known asteroids are between 11 and 19, with the median at about 16.
The total mass of the asteroid belt is estimated to be just 3% of the mass of the Moon; the mass of the Kuiper Belt and Scattered Disk is over 100 times as large. The four largest objects, Ceres, Vesta, Pallas, and Hygiea, account for maybe 62% of the belt's total mass, with 39% accounted for by Ceres alone.
Trojans
Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two stable Lagrangian points, L4 and L5, which lie 60° ahead of and behind the larger body.
In the Solar System, most known trojans share the orbit of Jupiter. They are divided into the Greek camp at L4 (ahead of Jupiter) and the Trojan camp at L5 (trailing Jupiter). More than a million Jupiter trojans larger than one kilometer are thought to exist, of which more than 7,000 are currently catalogued. In other planetary orbits only nine Mars trojans, 28 Neptune trojans, two Uranus trojans, and two Earth trojans have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn and Uranus probably do not have any primordial trojans.
Near-Earth asteroids
Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. A total of 28,772 near-Earth asteroids were known; 878 have a diameter of one kilometer or larger.
A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter.
Many asteroids have natural satellites (minor-planet moons). There were 85 NEAs known to have at least one moon, including three known to have two moons. The asteroid 3122 Florence, one of the largest potentially hazardous asteroids, has two moons, which were discovered by radar imaging during the asteroid's 2017 approach to Earth.
Near-Earth asteroids are divided into groups based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q):
The Atiras or Apoheles have orbits strictly inside Earth's orbit: an Atira asteroid's aphelion distance (Q) is smaller than Earth's perihelion distance (0.983 AU). That is, Q < 0.983 AU, which implies that the asteroid's semi-major axis is also less than 0.983 AU.
The Atens have a semi-major axis of less than 1 AU and cross Earth's orbit. Mathematically, a < 1.0 AU and Q > 0.983 AU. (0.983 AU is Earth's perihelion distance.)
The Apollos have a semi-major axis of more than 1 AU and cross Earth's orbit. Mathematically, a > 1.0 AU and q < 1.017 AU. (1.017 AU is Earth's aphelion distance.)
The Amors have orbits strictly outside Earth's orbit: an Amor asteroid's perihelion distance (q) is greater than Earth's aphelion distance (1.017 AU). Amor asteroids are also near-Earth objects, so q < 1.3 AU. In summary, 1.017 AU < q < 1.3 AU. (This implies that the asteroid's semi-major axis (a) is also larger than 1.017 AU.) Some Amor asteroid orbits cross the orbit of Mars. A short sketch applying these definitions appears below.
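The sketch below applies the definitions above to a given orbit; the function name is invented for illustration, and the 1.3 AU perihelion cutoff is the conventional near-Earth-object limit assumed here.

# a = semi-major axis, q = perihelion distance, Q = aphelion distance (all in AU).
EARTH_PERIHELION = 0.983
EARTH_APHELION = 1.017
NEO_PERIHELION_LIMIT = 1.3

def nea_group(a, q, Q):
    if q >= NEO_PERIHELION_LIMIT:
        return "not a near-Earth asteroid"
    if Q < EARTH_PERIHELION:
        return "Atira"   # entirely inside Earth's orbit
    if a < 1.0:
        return "Aten"    # a < 1 AU, crosses Earth's orbit
    if q < EARTH_APHELION:
        return "Apollo"  # a > 1 AU, crosses Earth's orbit
    return "Amor"        # entirely outside Earth's orbit

print(nea_group(a=0.92, q=0.50, Q=1.34))  # Aten
print(nea_group(a=1.46, q=1.13, Q=1.78))  # Amor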
Martian moons
It is unclear whether the Martian moons Phobos and Deimos are captured asteroids or were formed due to an impact event on Mars. Phobos and Deimos both have much in common with carbonaceous C-type asteroids, with spectra, albedo, and density very similar to those of C- or D-type asteroids. Based on their similarity, one hypothesis is that both moons may be captured main-belt asteroids. Both moons have very circular orbits which lie almost exactly in Mars's equatorial plane, and hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit, and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, although it is not clear whether sufficient time was available for this to occur for Deimos. Capture also requires dissipation of energy. The current Martian atmosphere is too thin to capture a Phobos-sized object by atmospheric braking. Geoffrey A. Landis has pointed out that the capture could have occurred if the original body was a binary asteroid that separated under tidal forces.
Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars.
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal. The high porosity of the interior of Phobos (based on the density of 1.88 g/cm3, voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates, which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's moon.
Characteristics
Size distribution
Asteroids vary greatly in size, from almost 1,000 km for the largest down to rocks just 1 meter across, below which an object is classified as a meteoroid. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies.
The dwarf planet Ceres is by far the largest asteroid, with a diameter of about 940 km. The next largest are 4 Vesta and 2 Pallas, both with diameters of just over 500 km. Vesta is the brightest of the four main-belt asteroids that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis.
The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated to be about 3.25% of the mass of the Moon. Of this, Ceres comprises about 40% of the total. Adding in the next three most massive objects, Vesta (11%), Pallas (8.5%), and Hygiea (3–4%), brings this figure up to a bit over 60%, whereas the next seven most-massive asteroids bring the total up to 70%. The number of asteroids increases rapidly as their individual masses decrease.
The number of asteroids decreases markedly with increasing size. Although the size distribution generally follows a power law, there are 'bumps' at certain diameters where more asteroids than expected from such a curve are found. Most asteroids larger than approximately 120 km in diameter are primordial (surviving from the accretion epoch), whereas most smaller asteroids are products of fragmentation of primordial asteroids. The primordial population of the main belt was probably 200 times what it is today.
Largest asteroids
The three largest objects in the asteroid belt, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets, and are atypical compared to the majority of irregularly shaped asteroids. The fourth-largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. The four largest asteroids constitute half the mass of the asteroid belt.
Ceres is the only asteroid that appears to have a plastic shape under its own gravity and hence the only one that is a dwarf planet. It has a much higher absolute magnitude than the other asteroids, of around 3.32, and may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle and a core. No meteorites from Ceres have been found on Earth.
Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly of basaltic rock with minerals such as olivine. Aside from the large crater at its southern pole, Rheasilvia, Vesta also has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth.
Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at high angles to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids.
Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family and recoalesced after losing a bit less than 2% of its mass. Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018 revealed that Hygiea has a nearly spherical shape, which is consistent with its being in hydrostatic equilibrium, with its formerly having been in hydrostatic equilibrium, or with its having been disrupted and recoalesced.
Internal differentiation of large asteroids is possibly related to their lack of natural satellites, as satellites of main belt asteroids are mostly believed to form from collisional disruption, creating a rubble pile structure.
Rotation
Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. Very few asteroids with a diameter larger than 100 meters have a rotation period less than 2.2 hours. For asteroids rotating faster than approximately this rate, the inertial force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids.
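The roughly 2.2-hour limit corresponds to the spin rate at which, for a strengthless body, centrifugal acceleration at the equator equals self-gravity. Below is a minimal sketch of that estimate, assuming a typical bulk density of about 2 g/cm3 (the density value is an assumption for illustration, not a figure from the text).

import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_spin_period_hours(density_kg_m3):
    # For a strengthless, self-gravitating sphere, loose material at the equator
    # is shed when omega^2 * R = G * M / R^2 with M = (4/3) * pi * R^3 * rho,
    # giving P = sqrt(3 * pi / (G * rho)), independent of the body's size.
    return math.sqrt(3 * math.pi / (G * density_kg_m3)) / 3600

print(critical_spin_period_hours(2000))  # ~2.3 hours for rho ~ 2 g/cm^3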
Color
Asteroids become darker and redder with age due to space weathering. However evidence suggests most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurement for determining the age of asteroids.
Surface features
Except for the "big four" (Ceres, Pallas, Vesta, and Hygiea), asteroids are likely to be broadly similar in appearance, if irregular in shape. 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius. Earth-based observations of 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida, that have been observed up close, also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid.
The Dawn spacecraft revealed that Ceres has a heavily cratered surface, but with fewer large craters than expected. Models based on the formation of the current asteroid belt had suggested Ceres should possess 10 to 15 very large craters; the largest confirmed crater on Ceres is the Kerwan Basin. The most likely reason for this is viscous relaxation of the crust slowly flattening out larger impacts.
Composition
Asteroids are classified by their characteristic emission spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These describe carbonaceous (carbon-rich), metallic, and silicaceous (stony) compositions, respectively. The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle; Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. Thought to be the largest undifferentiated asteroid, 10 Hygiea seems to have a uniformly primitive composition of carbonaceous chondrite, but it may actually be a differentiated asteroid that was globally disrupted by an impact and then reassembled. Other asteroids appear to be the remnant cores or mantles of proto-planets, high in rock and metal. Most small asteroids are believed to be piles of rubble held together loosely by gravity, although the largest are probably solid. Some asteroids have moons or are co-orbiting binaries: rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or possibly a planet.
In the main asteroid belt, there appear to be two primary populations of asteroid: a dark, volatile-rich population, consisting of the C-type and P-type asteroids, with albedos less than 0.10 and relatively low densities, and a dense, volatile-poor population, consisting of the S-type and M-type asteroids, with albedos over 0.15 and densities greater than 2.7 g/cm3. Within these populations, larger asteroids are denser, presumably due to compression. There appears to be minimal macro-porosity (interstitial vacuum) in the score of most massive asteroids.
Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km) and 87 Sylvia (384×262×232 km). Few asteroids are larger than 87 Sylvia, and none of them have moons. The fact that such large asteroids as Sylvia may be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly.
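The reason a moon allows an accurate density estimate is Kepler's third law: the moon's orbital period and semi-major axis give the primary's mass, and an independent size estimate gives its volume. The sketch below illustrates the calculation; the input values are made up for illustration and are not measurements of any particular asteroid.

import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bulk_density(moon_semi_major_axis_m, moon_period_s, asteroid_radius_m):
    # Kepler's third law: M = 4 * pi^2 * a^3 / (G * P^2), assuming the moon's
    # mass is negligible; the volume assumes a spherical primary.
    mass = 4 * math.pi ** 2 * moon_semi_major_axis_m ** 3 / (G * moon_period_s ** 2)
    volume = (4 / 3) * math.pi * asteroid_radius_m ** 3
    return mass / volume  # kg/m^3

# Illustrative only: a 100 km-radius primary with a moon at 600 km orbiting
# every 2 days implies a bulk density of roughly 1000 kg/m^3 (1 g/cm^3).
print(bulk_density(600e3, 2 * 86400, 100e3))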
Water
Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. In 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer is sublimating, it may be getting replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. The presence of ice on 24 Themis makes the initial theory plausible.
In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids."
Findings have shown that solar winds can react with the oxygen in the upper layer of asteroids and create water. It has been estimated that "every cubic metre of irradiated rock could contain up to 20 litres"; the study was conducted using atom probe tomography, and the numbers are given for the S-type asteroid Itokawa.
Acfer 049, a meteorite discovered in Algeria in 1990, was shown in 2019 to have an ultraporous lithology (UPL): a porous texture that could have formed through the removal of ice that once filled the pores, suggesting that UPLs "represent fossils of primordial ice".
Organic compounds
Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth (an event called "panspermia"). In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space.
In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some ingredients essential to life, supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly also the notion of panspermia.
Classification
Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.
Orbital classification
Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor.
About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet .
Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or another planet. Examples are 3753 Cruithne and . The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus. Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites.
Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with the outer planets as well.
Spectral classification
In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Chapman, Morrison, and Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied.
The two most widely used taxonomies are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen and was based on data collected from an eight-color asteroid survey performed in the 1980s. This survey resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists of mostly metallic asteroids, such as the M-type. There are also several smaller classes.
The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.
Problems
Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of the same (or similar) materials.
Active asteroids
Active asteroids are objects that have asteroid-like orbits but show comet-like visual characteristics. That is, they show comae, tails, or other visual evidence of mass-loss (like a comet), but their orbit remains within Jupiter's orbit (like an asteroid). These bodies were originally designated main-belt comets (MBCs) in 2006 by astronomers David Jewitt and Henry Hsieh, but this name implies they are necessarily icy in composition like a comet and that they only exist within the main-belt, whereas the growing population of active asteroids shows that this is not always the case.
The first active asteroid discovered is 7968 Elst–Pizarro. It was discovered (as an asteroid) in 1979, but in 1996 Eric Elst and Guido Pizarro found that it had a tail, and it was given the cometary designation 133P/Elst–Pizarro. Another notable object is 311P/PanSTARRS: observations made by the Hubble Space Telescope revealed that it had six comet-like tails. The tails are suspected to be streams of material ejected as the rubble-pile asteroid spins fast enough to shed material from its surface.
By smashing into the asteroid Dimorphos, NASA's Double Asteroid Redirection Test spacecraft made it an active asteroid. Scientists had proposed that some active asteroids are the result of impact events, but no one had ever observed the activation of an asteroid. The DART mission activated Dimorphos under precisely known and carefully observed impact conditions, enabling the detailed study of the formation of an active asteroid for the first time. Observations show that Dimorphos lost approximately 1 million kilograms after the collision. The impact produced a dust plume that temporarily brightened the Didymos system and developed a long dust tail that persisted for several months.
Observation and exploration
Until the age of space travel, objects in the asteroid belt could only be observed with large telescopes, their shapes and terrain remaining a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can only resolve a small amount of detail on the surfaces of the largest asteroids. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (variation in brightness during rotation) and their spectral properties. Sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. Spacecraft flybys can provide much more data than any ground or space-based observations; sample-return missions give insights into regolith composition.
Ground-based observations
As asteroids are rather small and faint objects, the data that can be obtained from ground-based observations (GBO) are limited. By means of ground-based optical telescopes, the visual magnitude can be obtained; when converted into the absolute magnitude, it gives a rough estimate of the asteroid's size (a simple version of this estimate is sketched after the quotation below). Light-curve measurements can also be made by GBO; when collected over a long period of time, they allow an estimate of the rotational period, the pole orientation (sometimes), and a rough estimate of the asteroid's shape. Spectral data (both visible-light and near-infrared spectroscopy) give information about the object's composition, which is used to classify the observed asteroids. Such observations are limited in that they provide information about only a thin surface layer (up to several micrometers deep). As planetologist Patrick Michel writes:
Mid- to thermal-infrared observations, along with polarimetry measurements, are probably the only data that give some indication of actual physical properties. Measuring the heat flux of an asteroid at a single wavelength gives an estimate of the dimensions of the object; these measurements have lower uncertainty than measurements of the reflected sunlight in the visible-light spectral region. If the two measurements can be combined, both the effective diameter and the geometric albedo—the latter being a measure of the brightness at zero phase angle, that is, when illumination comes from directly behind the observer—can be derived. In addition, thermal measurements at two or more wavelengths, plus the brightness in the visible-light region, give information on the thermal properties. The thermal inertia, which is a measure of how fast a material heats up or cools off, of most observed asteroids is lower than the bare-rock reference value but greater than that of the lunar regolith; this observation indicates the presence of an insulating layer of granular material on their surface. Moreover, there seems to be a trend, perhaps related to the gravitational environment, that smaller objects (with lower gravity) have a small regolith layer consisting of coarse grains, while larger objects have a thicker regolith layer consisting of fine grains. However, the detailed properties of this regolith layer are poorly known from remote observations. Moreover, the relation between thermal inertia and surface roughness is not straightforward, so one needs to interpret the thermal inertia with caution.
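A rough sketch of the size estimate mentioned above: a widely used empirical relation ties an asteroid's effective diameter to its absolute magnitude H and geometric albedo p_V. The H value and albedos below are illustrative assumptions, not data for a particular object; they show how the same brightness implies very different sizes for dark and bright surfaces.

```python
import math

def diameter_km(abs_magnitude_H, geometric_albedo):
    """Commonly used rough relation between an asteroid's absolute magnitude H,
    its geometric (visual) albedo p_V, and its effective diameter in km:
        D = 1329 / sqrt(p_V) * 10**(-H / 5)
    """
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude_H / 5.0)

# Illustrative, assumed values: the same H gives very different sizes
# for a dark C-type surface and a brighter S-type surface.
H = 15.0
for albedo in (0.06, 0.20):
    print(f"p_V = {albedo:.2f} -> D ~ {diameter_km(H, albedo):.1f} km")
```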
Near-Earth asteroids that come into close vicinity of the planet can be studied in more detail with radar; it provides information about the surface of the asteroid (for example, it can show the presence of craters and boulders). Such observations have been conducted by the Arecibo Observatory in Puerto Rico (305-meter dish) and the Goldstone Observatory in California (70-meter dish). Radar observations can also be used for accurate determination of the orbital and rotational dynamics of observed objects.
Space-based observations
Both space-based and ground-based observatories have conducted asteroid search programs; the space-based searches are expected to detect more objects because there is no atmosphere to interfere and because they can observe larger portions of the sky. NEOWISE observed more than 100,000 asteroids of the main belt, while the Spitzer Space Telescope observed more than 700 near-Earth asteroids. These observations determined rough sizes of the majority of observed objects, but provided limited detail about surface properties (such as regolith depth and composition, angle of repose, cohesion, and porosity).
The Hubble Space Telescope has also studied asteroids, for example tracking colliding asteroids in the main belt, observing the break-up of an asteroid, observing an active asteroid with six comet-like tails, and observing asteroids chosen as targets of dedicated missions.
Space probe missions
According to Patrick Michel:
The internal structure of asteroids is inferred only from indirect evidence: bulk densities measured by spacecraft, the orbits of natural satellites in the case of asteroid binaries, and the drift of an asteroid's orbit due to the Yarkovsky thermal effect. A spacecraft near an asteroid is perturbed enough by the asteroid's gravity to allow an estimate of the asteroid's mass. The volume is then estimated using a model of the asteroid's shape. Mass and volume allow the derivation of the bulk density, whose uncertainty is usually dominated by the errors made on the volume estimate. The internal porosity of asteroids can be inferred by comparing their bulk density with that of their assumed meteorite analogues; dark asteroids seem to be more porous (>40%) than bright ones. The nature of this porosity is unclear.
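A minimal sketch of the porosity inference described above: compare the asteroid's bulk density with the grain density of an assumed meteorite analogue. The density values below are illustrative assumptions rather than measurements of real bodies.

```python
def macroporosity(bulk_density, analog_density):
    """Fraction of the volume that is empty space, inferred by comparing the
    asteroid's bulk density with the grain density of its assumed meteorite
    analogue (both in the same units, e.g. g/cm^3)."""
    return 1.0 - bulk_density / analog_density

# Illustrative, assumed densities in g/cm^3:
# a dark asteroid compared with a carbonaceous-chondrite-like analogue, and
# a bright asteroid compared with an ordinary-chondrite-like analogue.
print(f"dark:   {macroporosity(1.3, 2.7):.0%} porous")
print(f"bright: {macroporosity(2.7, 3.4):.0%} porous")
```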
Dedicated missions
The first asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the Galileo probe en route to Jupiter. Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by Deep Space 1 in 1999), 5535 Annefrank (by Stardust in 2002), 2867 Šteins and 21 Lutetia (by the Rosetta probe in 2008), and 4179 Toutatis (visited by China's lunar orbiter Chang'e 2 during a close flyby in 2012).
The first dedicated asteroid probe was NASA's NEAR Shoemaker, which photographed 253 Mathilde in 1997, before entering into orbit around 433 Eros, finally landing on its surface in 2001. It was the first spacecraft to successfully orbit and land on an asteroid. From September to November 2005, the Japanese Hayabusa probe studied 25143 Itokawa in detail and returned samples of its surface to Earth on 13 June 2010, the first asteroid sample-return mission. In 2007, NASA launched the Dawn spacecraft, which orbited 4 Vesta for a year, and observed the dwarf planet Ceres for three years.
Hayabusa2, a probe launched by JAXA in 2014, orbited its target asteroid 162173 Ryugu for more than a year and took samples that were delivered to Earth in 2020. The spacecraft is now on an extended mission and is expected to arrive at a new target in 2031.
NASA launched OSIRIS-REx in 2016, a sample-return mission to the asteroid 101955 Bennu. In 2021, the probe departed the asteroid with a sample from its surface, which was delivered to Earth in September 2023. The spacecraft continues its extended mission, designated OSIRIS-APEX, to explore the near-Earth asteroid Apophis in 2029.
In 2021, NASA launched the Double Asteroid Redirection Test (DART), a mission to test technology for defending Earth against potentially hazardous objects. DART deliberately crashed into Dimorphos, the minor-planet moon of the double asteroid Didymos, in September 2022 to assess the potential of a spacecraft impact to deflect an asteroid from a collision course with Earth. In October, NASA declared DART a success, confirming it had shortened Dimorphos' orbital period around Didymos by about 32 minutes.
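As a back-of-the-envelope check on what a roughly 32-minute period change implies, the sketch below uses the first-order relation between an along-track velocity change and the period of a near-circular orbit, dP/P ≈ 3 dv/v. The pre-impact period and orbital separation used here are assumed round figures for the Didymos system, not values given in this article.

```python
import math

# Assumed round figures (not from this article): pre-impact orbital period of
# about 11.92 hours and an orbital radius of about 1.19 km for Dimorphos.
period = 11.92 * 3600        # pre-impact orbital period, s
separation = 1.19e3          # orbital radius of Dimorphos, m
delta_period = -32.0 * 60    # observed change in period, s (about -32 minutes)

orbital_speed = 2 * math.pi * separation / period       # m/s
# First-order relation for a near-circular orbit: dP/P ~ 3 * dv/v,
# so the implied change in Dimorphos' orbital speed is:
delta_v = orbital_speed * delta_period / (3 * period)   # negative = retrograde

print(f"orbital speed ~ {orbital_speed * 100:.1f} cm/s, "
      f"implied delta-v ~ {delta_v * 1000:.1f} mm/s")
```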
NASA's Lucy, launched in 2021, is a multiple-asteroid flyby probe focused on flying by seven Jupiter trojans of varying types. Although it will not reach its first main target, 3548 Eurybates, until 2027, it has already flown by the main-belt asteroid 152830 Dinkinesh and is set to fly by another, 52246 Donaldjohanson, in 2025.
Planned missions
NASA's Psyche, launched in October 2023, is intended to study the large metallic asteroid of the same name, and is on track to arrive there in 2029.
ESA's Hera, launched in October 2024, is intended to study the results of the DART impact. It is expected to measure the size and morphology of the crater and the momentum transferred by the impact, to determine the efficiency of the deflection produced by DART.
JAXA's DESTINY+ is a mission for a flyby of the Geminids meteor shower parent body 3200 Phaethon, as well as various minor bodies. Its launch is planned for 2024.
CNSA's Tianwen-2 is planned to launch in 2025. If all goes as planned, it will use solar electric propulsion to explore the co-orbital near-Earth asteroid 469219 Kamoʻoalewa and the active asteroid 311P/PanSTARRS. The spacecraft is tasked with collecting samples of the regolith of Kamoʻoalewa.
Asteroid mining
The concept of asteroid mining was proposed in the 1970s. Matt Anderson defines successful asteroid mining as "the development of a mining program that is both financially self-sustaining and profitable to its investors". It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth, or materials for constructing space habitats. Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction.
As resource depletion on Earth becomes more real, the idea of extracting valuable elements from asteroids and returning these to Earth for profit, or using space-based resources to build solar-power satellites and space habitats, becomes more attractive. Hypothetically, water processed from ice could refuel orbiting propellant depots.
From the astrobiological perspective, asteroid prospecting could provide scientific data for the search for extraterrestrial intelligence (SETI). Some astrophysicists have suggested that if advanced extraterrestrial civilizations employed asteroid mining long ago, the hallmarks of these activities might be detectable.
Threats to Earth
There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth. The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens.
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact.
Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across.
All of these considerations helped spur the launch of highly efficient surveys, consisting of charge-coupled device (CCD) cameras and computers directly connected to telescopes. It has been estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter have been discovered. The LINEAR system alone has discovered 147,132 asteroids. Among the surveys, 19,266 near-Earth asteroids have been discovered, including almost 900 with diameters greater than one kilometer.
In June 2018, the National Science and Technology Council warned that the United States is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
Asteroid deflection strategies
Various collision avoidance techniques have different trade-offs with respect to metrics such as overall performance, cost, failure risks, operations, and technology readiness. There are various methods for changing the course of an asteroid/comet. These can be differentiated by various types of attributes such as the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (interception, rendezvous, or remote station).
Strategies fall into two basic sets: fragmentation and delay. Fragmentation concentrates on rendering the impactor harmless by fragmenting it and scattering the fragments so that they miss the Earth or are small enough to burn up in the atmosphere. Delay exploits the fact that both the Earth and the impactor are in orbit. An impact occurs when both reach the same point in space at the same time, or more correctly when some point on Earth's surface intersects the impactor's orbit when the impactor arrives. Since the Earth is approximately 12,750 km in diameter and moves at approximately 30 km per second in its orbit, it travels a distance of one planetary diameter in about 425 seconds, or slightly over seven minutes. Delaying or advancing the impactor's arrival by times of this magnitude can, depending on the exact geometry of the impact, cause it to miss the Earth.
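The crossing-time figure quoted above follows directly from the stated numbers:

```python
earth_diameter_km = 12_750   # approximate diameter of Earth
orbital_speed_km_s = 30      # approximate orbital speed of Earth

# Time for Earth to travel one of its own diameters along its orbit
t_seconds = earth_diameter_km / orbital_speed_km_s
print(f"{t_seconds:.0f} s, or about {t_seconds / 60:.1f} minutes")  # ~425 s
```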
"Project Icarus" was one of the first projects designed in 1967 as a contingency plan in case of collision with 1566 Icarus. The plan relied on the new Saturn V rocket, which did not make its first flight until after the report had been completed. Six Saturn V rockets would be used, each launched at variable intervals from months to hours away from impact. Each rocket was to be fitted with a single 100-megaton nuclear warhead as well as a modified Apollo Service Module and uncrewed Apollo Command Module for guidance to the target. The warheads would be detonated 30 meters from the surface, deflecting or partially destroying the asteroid. Depending on the subsequent impacts on the course or the destruction of the asteroid, later missions would be modified or cancelled as needed. The "last-ditch" launch of the sixth rocket would be 18 hours prior to impact.
Fiction
Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
See also
Exoasteroid
List of minor planets
List of exceptional asteroids
List of asteroid close approaches to Earth
Lost minor planet
Meanings of minor-planet names
Notes
References
Further reading
External links
NASA Asteroid and Comet Watch site
Minor planets
Solar System | Asteroid | ["Astronomy"] | 11,989 | ["Outer space", "Solar System"] |
798 | https://en.wikipedia.org/wiki/Aries%20%28constellation%29 | Aries is one of the constellations of the zodiac. It is located in the Northern celestial hemisphere between Pisces to the west and Taurus to the east. The name Aries is Latin for ram. Its old astronomical symbol is (♈︎). It is one of the 48 constellations described by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is a mid-sized constellation ranking 39th in overall size, with an area of 441 square degrees (1.1% of the celestial sphere).
Aries has represented a ram since late Babylonian times. Before that, the stars of Aries formed a farmhand. Different cultures have incorporated the stars of Aries into different constellations including twin inspectors in China and a porpoise in the Marshall Islands. Aries is a relatively dim constellation, possessing only four bright stars: Hamal (Alpha Arietis, second magnitude), Sheratan (Beta Arietis, third magnitude), Mesarthim (Gamma Arietis, fourth magnitude), and 41 Arietis (also fourth magnitude). The few deep-sky objects within the constellation are quite faint and include several pairs of interacting galaxies. Several meteor showers appear to radiate from Aries, including the Daytime Arietids and the Epsilon Arietids.
History and mythology
Aries is now recognized as an official constellation, albeit as a specific region of the sky, by the International Astronomical Union. It was originally defined in ancient texts as a specific pattern of stars, and has remained a constellation since ancient times; it now includes the ancient pattern and the surrounding stars. In the description of the Babylonian zodiac given in the clay tablets known as the MUL.APIN, the constellation, now known as Aries, was the final station along the ecliptic. The MUL.APIN was a comprehensive table of the risings and settings of stars, which likely served as an agricultural calendar. Modern-day Aries was known as "The Agrarian Worker" or "The Hired Man". Although likely compiled in the 12th or 11th century BC, the MUL.APIN reflects a tradition that marks the Pleiades as the vernal equinox, which was the case with some precision at the beginning of the Middle Bronze Age. The earliest identifiable reference to Aries as a distinct constellation comes from the boundary stones that date from 1350 to 1000 BC. On several boundary stones, a zodiacal ram figure is distinct from the other characters. The shift in identification from the constellation as the Agrarian Worker to the Ram likely occurred in later Babylonian tradition because of its growing association with Dumuzi the Shepherd. By the time the MUL.APIN was created—in 1000 BC—modern Aries was identified with both Dumuzi's ram and a hired labourer. The exact timing of this shift is difficult to determine due to the lack of images of Aries or other ram figures.
In ancient Egyptian astronomy, Aries was associated with the god Amun-Ra, who was depicted as a man with a ram's head and represented fertility and creativity. Because it was the location of the vernal equinox, it was called the "Indicator of the Reborn Sun". During the times of the year when Aries was prominent, priests would process statues of Amon-Ra to temples, a practice that was modified by Persian astronomers centuries later. Aries acquired the title of "Lord of the Head" in Egypt, referring to its symbolic and mythological importance.
Aries was not fully accepted as a constellation until classical times. In Hellenistic astrology, the constellation of Aries is associated with the golden ram of Greek mythology that rescued Phrixus and Helle on orders from Hermes, taking Phrixus to the land of Colchis. Phrixus and Helle were the son and daughter of King Athamas and his first wife Nephele. The king's second wife, Ino, was jealous and wished to kill his children. To accomplish this, she induced famine in Boeotia, then falsified a message from the Oracle of Delphi that said Phrixus must be sacrificed to end the famine. Athamas was about to sacrifice his son atop Mount Laphystium when Aries, sent by Nephele, arrived. Helle fell off of Aries's back in flight and drowned in the Dardanelles, also called the Hellespont in her honour.
Historically, Aries has been depicted as a crouched, wingless ram with its head turned towards Taurus. Ptolemy asserted in his Almagest that Hipparchus depicted Alpha Arietis as the ram's muzzle, though Ptolemy did not include it in his constellation figure. Instead, it was listed as an "unformed star", and denoted as "the star over the head". John Flamsteed, in his Atlas Coelestis, followed Ptolemy's description by mapping it above the figure's head. Flamsteed followed the general convention of maps by depicting Aries lying down. Astrologically, Aries has been associated with the head and its humors. It was strongly associated with Mars, both the planet and the god. It was considered to govern Western Europe and Syria and to indicate a strong temper in a person.
The First Point of Aries, the location of the vernal equinox, is named for the constellation. This is because the Sun crossed the celestial equator from south to north in Aries more than two millennia ago. Hipparchus defined it in 130 BC as a point south of Gamma Arietis. Because of the precession of the equinoxes, the First Point of Aries has since moved into Pisces and will move into Aquarius by around 2600 AD. The Sun now appears in Aries from late April through mid-May, though the constellation is still associated with the beginning of spring.
Medieval Muslim astronomers depicted Aries in various ways. Astronomers like al-Sufi saw the constellation as a ram, modelled on the precedent of Ptolemy. However, some Islamic celestial globes depicted Aries as a nondescript four-legged animal with what may be antlers instead of horns. Some early Bedouin observers saw a ram elsewhere in the sky; this constellation featured the Pleiades as the ram's tail. The generally accepted Arabic formation of Aries consisted of thirteen stars in a figure along with five "unformed" stars, four of which were over the animal's hindquarters and one of which was the disputed star over Aries's head. Al-Sufi's depiction differed from both other Arab astronomers' and Flamsteed's, in that his Aries was running and looking behind itself.
The obsolete constellations Apes, Vespa, Lilium, and Musca Borealis all centred on the same four stars, now known as 33, 35, 39, and 41 Arietis. In 1612, Petrus Plancius introduced Apes, a constellation representing a bee. In 1624, the same stars were used by Jakob Bartsch for Vespa, representing a wasp. In 1679, Augustin Royer used these stars for his constellation Lilium, representing the fleur-de-lis. None of these constellations became widely accepted. Johann Hevelius renamed the constellation "Musca" in 1690 in his Firmamentum Sobiescianum. To differentiate it from Musca, the southern fly, it was later renamed Musca Borealis but it did not gain acceptance and its stars were ultimately officially reabsorbed into Aries.
In 1922, the International Astronomical Union defined its recommended three-letter abbreviation, "Ari". The official boundaries of Aries were defined in 1930 by Eugène Delporte as a polygon of 12 segments. Its right ascension is between 1h 46.4m and 3h 29.4m and its declination is between 10.36° and 31.22° in the equatorial coordinate system.
In non-Western astronomy
In traditional Chinese astronomy, stars from Aries were used in several constellations. The brightest stars—Alpha, Beta, and Gamma Arietis—formed a constellation called "Lou", variously translated as "bond" or "lasso", and also "sickle", which was associated with the ritual sacrifice of cattle. This name was shared by the 16th lunar mansion, the location of the full moon closest to the autumnal equinox. This constellation has also been associated with harvest-time as it could represent a woman carrying a basket of food on her head. 35, 39, and 41 Arietis were part of a constellation called Wei (胃), which represented a fat abdomen and was the namesake of the 17th lunar mansion, which represented granaries. Delta and Zeta Arietis were a part of the constellation Tianyin (天陰), thought to represent the Emperor's hunting partner. Zuogeng (左更), a constellation depicting a marsh and pond inspector, was composed of Mu, Nu, Omicron, Pi, and Sigma Arietis. He was accompanied by Yeou-kang, a constellation depicting an official in charge of pasture distribution.
In a similar system to the Chinese, the first lunar mansion in Hindu astronomy was called "Aswini", after the traditional names for Beta and Gamma Arietis, the Aswins. Because the Hindu new year began with the vernal equinox, the Rig Veda contains over 50 new-year's related hymns to the twins, making them some of the most prominent characters in the work. Aries itself was known as "Aja" and "Mesha". In Hebrew astronomy Aries was named "Taleh"; it signified either Simeon or Gad, and generally symbolizes the "Lamb of the World". The neighboring Syrians named the constellation "Amru", and the bordering Turks named it "Kuzi". Half a world away, in the Marshall Islands, several stars from Aries were incorporated into a constellation depicting a porpoise, along with stars from Cassiopeia, Andromeda, and Triangulum. Alpha, Beta, and Gamma Arietis formed the head of the porpoise, while stars from Andromeda formed the body and the bright stars of Cassiopeia formed the tail. Other Polynesian peoples recognized Aries as a constellation. The Marquesas islanders called it Na-pai-ka; the Māori constellation Pipiri may correspond to modern Aries as well. In indigenous Peruvian astronomy, a constellation with most of the same stars as Aries existed. It was called the "Market Moon" and the "Kneeling Terrace", as a reminder of when to hold the annual harvest festival, Ayri Huay.
Features
Stars
Bright stars
Aries has three prominent stars forming an asterism, designated Alpha, Beta, and Gamma Arietis by Johann Bayer. Alpha (Hamal) and Beta (Sheratan) are commonly used for navigation. There is also one other star above the fourth magnitude, 41 Arietis (Bharani). α Arietis, called Hamal, is the brightest star in Aries. Its traditional name is derived from the Arabic word for "lamb" or "head of the ram" (ras al-hamal), which references Aries's mythological background. With a spectral class of K2 and a luminosity class of III, it is an orange giant with an apparent visual magnitude of 2.00, lying 66 light-years from Earth. Hamal's absolute magnitude is −0.1.
β Arietis, also known as Sheratan, is a blue-white star with an apparent visual magnitude of 2.64. Its traditional name is derived from "sharatayn", the Arabic word for "the two signs", referring to both Beta and Gamma Arietis in their position as heralds of the vernal equinox. The two stars were known to the Bedouin as "qarna al-hamal", "horns of the ram". It is 59 light-years from Earth. Its absolute magnitude is 2.1. It is a spectroscopic binary star, one in which the companion star is only known through analysis of the spectra. The spectral class of the primary is A5. Hermann Carl Vogel determined that Sheratan was a spectroscopic binary in 1903; its orbit was determined by Hans Ludendorff in 1907. It has since been studied for its eccentric orbit.
γ Arietis, with a common name of Mesarthim, is a binary star with two white-hued components, located in a rich field of magnitude 8–12 stars. Its traditional name has conflicting derivations. It may be derived from a corruption of "al-sharatan", the Arabic word meaning "pair" or a word for "fat ram". However, it may also come from the Sanskrit for "first star of Aries" or the Hebrew for "ministerial servants", both of which are unusual languages of origin for star names. Along with Beta Arietis, it was known to the Bedouin as "qarna al-hamal". The primary is of magnitude 4.59 and the secondary is of magnitude 4.68. The system is 164 light-years from Earth. The two components are separated by 7.8 arcseconds, and the system as a whole has an apparent magnitude of 3.9. The primary is an A-type star with an absolute magnitude of 0.2 and the secondary is a B9-type star with an absolute magnitude of 0.4. The angle between the two components is 1°. Mesarthim was discovered to be a double star by Robert Hooke in 1664, one of the earliest such telescopic discoveries. The primary, γ1 Arietis, is an Alpha² Canum Venaticorum variable star that has a range of 0.02 magnitudes and a period of 2.607 days. It is unusual because of its strong silicon emission lines.
The constellation is home to several double stars, including Epsilon, Lambda, and Pi Arietis. ε Arietis is a binary star with two white components. The primary is of magnitude 5.2 and the secondary is of magnitude 5.5. The system is 290 light-years from Earth. Its overall magnitude is 4.63, and the primary has an absolute magnitude of 1.4. Its spectral class is A2. The two components are separated by 1.5 arcseconds. λ Arietis is a wide double star with a white-hued primary and a yellow-hued secondary. The primary is of magnitude 4.8 and the secondary is of magnitude 7.3. The primary is 129 light-years from Earth. It has an absolute magnitude of 1.7 and a spectral class of F0. The two components are separated by 36 arcseconds at an angle of 50°; the two stars are located 0.5° east of 7 Arietis. π Arietis is a close binary star with a blue-white primary and a white secondary. The primary is of magnitude 5.3 and the secondary is of magnitude 8.5. The primary is 776 light-years from Earth. The primary itself is a wide double star with a separation of 25.2 arcseconds; the tertiary has a magnitude of 10.8. The primary and secondary are separated by 3.2 arcseconds.
Most of the other stars in Aries visible to the naked eye have magnitudes between 3 and 5. δ Ari, called Boteïn, is a star of magnitude 4.35, 170 light-years away. It has an absolute magnitude of −0.1 and a spectral class of K2. ζ Arietis is a star of magnitude 4.89, 263 light-years away. Its spectral class is A0 and its absolute magnitude is 0.0. 14 Arietis is a star of magnitude 4.98, 288 light-years away. Its spectral class is F2 and its absolute magnitude is 0.6. 39 Arietis (Lilii Borea) is a similar star of magnitude 4.51, 172 light-years away. Its spectral class is K1 and its absolute magnitude is 0.0. 35 Arietis is a dim star of magnitude 4.55, 343 light-years away. Its spectral class is B3 and its absolute magnitude is −1.7. 41 Arietis, known both as c Arietis and Nair al Butain, is a brighter star of magnitude 3.63, 165 light-years away. Its spectral class is B8 and its absolute magnitude is −0.2. 53 Arietis is a runaway star of magnitude 6.09, 815 light-years away. Its spectral class is B2. It was likely ejected from the Orion Nebula approximately five million years ago, possibly due to supernovae. Finally, Teegarden's Star is the closest star to Earth in Aries. It is a red dwarf of magnitude 15.14 and spectral class M6.5V. With a proper motion of 5.1 arcseconds per year, it is the 24th closest star to Earth overall.
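The absolute magnitudes quoted for these stars relate to their apparent magnitudes and distances through the distance modulus, and to luminosity through the Sun's absolute magnitude. The sketch below shows the standard formulas using Hamal's quoted apparent magnitude and distance; because the quoted figures may be bolometric or rest on different distance estimates, the result need not match the quoted value exactly.

```python
import math

LY_PER_PARSEC = 3.2616  # light-years per parsec
M_V_SUN = 4.83          # visual absolute magnitude of the Sun

def absolute_magnitude(apparent_mag, distance_ly):
    """Distance modulus: M = m - 5 * log10(d / 10 pc)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10.0)

def luminosity_suns(abs_mag):
    """Visual luminosity relative to the Sun, from absolute magnitudes."""
    return 10 ** (0.4 * (M_V_SUN - abs_mag))

# Hamal's apparent magnitude and distance as quoted above:
M = absolute_magnitude(2.00, 66)
print(f"M_V ~ {M:.2f}, L ~ {luminosity_suns(M):.0f} L_sun")
```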
Variable stars
Aries has its share of variable stars, including R and U Arietis, Mira-type variable stars, and T Arietis, a semi-regular variable star. R Arietis is a Mira variable star that ranges in magnitude from a minimum of 13.7 to a maximum of 7.4 with a period of 186.8 days. It is 4,080 light-years away. U Arietis is another Mira variable star that ranges in magnitude from a minimum of 15.2 to a maximum of 7.2 with a period of 371.1 days. T Arietis is a semiregular variable star that ranges in magnitude from a minimum of 11.3 to a maximum of 7.5 with a period of 317 days. It is 1,630 light-years away. One particularly interesting variable in Aries is SX Arietis, a rotating variable star considered to be the prototype of its class, helium variable stars. SX Arietis stars have very prominent emission lines of Helium I and Silicon III. They are normally main-sequence B0p–B9p stars, and their variations are not usually visible to the naked eye. Therefore, they are observed photometrically, usually having periods that fit in the course of one night. Similar to the Alpha² Canum Venaticorum variables, SX Arietis stars have periodic changes in their light and magnetic field, which correspond to the periodic rotation; they differ from those variables in their higher temperature. There are between 39 and 49 SX Arietis variable stars currently known; ten are noted as being "uncertain" in the General Catalog of Variable Stars.
Deep sky objects
NGC 772 is a spiral galaxy with an integrated magnitude of 10.3, located southeast of β Arietis and 15 arcminutes west of 15 Arietis. It is a relatively bright galaxy and shows obvious nebulosity and ellipticity in an amateur telescope. It is 7.2 by 4.2 arcminutes, meaning that its surface brightness, magnitude 13.6, is significantly lower than its integrated magnitude. NGC 772 is a class SA(s)b galaxy, which means that it is an unbarred spiral galaxy without a ring that possesses a somewhat prominent bulge and spiral arms that are wound somewhat tightly. The main arm, on the northwest side of the galaxy, is home to many star forming regions; this is due to previous gravitational interactions with other galaxies. NGC 772 has a small companion galaxy, NGC 770, that is about 113,000 light-years away from the larger galaxy. The two galaxies together are also classified as Arp 78 in the Arp peculiar galaxy catalog. NGC 772 has a diameter of 240,000 light-years and the system is 114 million light-years from Earth. Another spiral galaxy in Aries is NGC 673, a face-on class SAB(s)c galaxy. It is a weakly barred spiral galaxy with loosely wound arms. It has no ring and a faint bulge and is 2.5 by 1.9 arcminutes. It has two primary arms with fragments located farther from the core. 171,000 light-years in diameter, NGC 673 is 235 million light-years from Earth.
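The surface-brightness comparison for NGC 772 can be reproduced approximately by spreading the integrated magnitude over the galaxy's apparent elliptical outline; the exact value depends on the adopted isophotal area, so the result below only roughly matches the quoted figure.

```python
import math

def mean_surface_brightness(integrated_mag, major_arcmin, minor_arcmin):
    """Approximate mean surface brightness (magnitudes per square arcminute),
    treating the galaxy's outline as an ellipse with the given full axes."""
    area = math.pi * major_arcmin * minor_arcmin / 4.0  # arcmin^2
    return integrated_mag + 2.5 * math.log10(area)

# Figures quoted above for NGC 772: integrated magnitude 10.3, 7.2' x 4.2'
print(f"{mean_surface_brightness(10.3, 7.2, 4.2):.1f} mag per square arcminute")
```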
NGC 678 and NGC 680 are a pair of galaxies in Aries that are only about 200,000 light-years apart. Part of the NGC 691 group of galaxies, both are at a distance of approximately 130 million light-years. NGC 678 is an edge-on spiral galaxy that is 4.5 by 0.8 arcminutes. NGC 680, an elliptical galaxy with an asymmetrical boundary, is the brighter of the two at magnitude 12.9; NGC 678 has a magnitude of 13.35. Both galaxies have bright cores, but NGC 678 is the larger galaxy at a diameter of 171,000 light-years; NGC 680 has a diameter of 72,000 light-years. NGC 678 is further distinguished by its prominent dust lane. NGC 691 itself is a spiral galaxy slightly inclined to our line of sight. It has multiple spiral arms and a bright core. Because it is so diffuse, it has a low surface brightness. It has a diameter of 126,000 light-years and is 124 million light-years away. NGC 877 is the brightest member of an 8-galaxy group that also includes NGC 870, NGC 871, and NGC 876, with a magnitude of 12.53. It is 2.4 by 1.8 arcminutes and is 178 million light-years away with a diameter of 124,000 light-years. Its companion is NGC 876, which is about 103,000 light-years from the core of NGC 877. They are interacting gravitationally, as they are connected by a faint stream of gas and dust. Arp 276 is a different pair of interacting galaxies in Aries, consisting of NGC 935 and IC 1801.
NGC 821 is an E6 elliptical galaxy. It is unusual because it has hints of an early spiral structure, which is normally only found in lenticular and spiral galaxies. NGC 821 is 2.6 by 2.0 arcminutes and has a visual magnitude of 11.3. Its diameter is 61,000 light-years and it is 80 million light-years away. Another unusual galaxy in Aries is Segue 2, a dwarf and satellite galaxy of the Milky Way, recently discovered to be a potential relic of the epoch of reionization.
Meteor showers
Aries is home to several meteor showers. The Daytime Arietid meteor shower is one of the strongest meteor showers that occurs during the day, lasting from 22 May to 2 July. It is an annual shower associated with the Marsden group of comets that peaks on 7 June with a maximum zenithal hourly rate of 54 meteors. Its parent body may be the asteroid Icarus. The meteors are sometimes visible before dawn, because the radiant is 32 degrees away from the Sun. They usually appear at a rate of 1–2 per hour as "earthgrazers", meteors that last several seconds and often begin at the horizon. Because most of the Daytime Arietids are not visible to the naked eye, they are observed in the radio spectrum. This is possible because of the ionized gas they leave in their wake. Other meteor showers radiate from Aries during the day; these include the Daytime Epsilon Arietids and the Northern and Southern Daytime May Arietids. The Jodrell Bank Observatory discovered the Daytime Arietids in 1947 when James Hey and G. S. Stewart adapted the World War II-era radar systems for meteor observations.
The Delta Arietids are another meteor shower radiating from Aries. Peaking on 9 December with a low peak rate, the shower lasts from 8 December to 14 January, with the highest rates visible from 8 to 14 December. The average Delta Arietid meteor is very slow. However, this shower sometimes produces bright fireballs. This meteor shower has northern and southern components, both of which are likely associated with 1990 HA, a near-Earth asteroid.
The Autumn Arietids also radiate from Aries. The shower lasts from 7 September to 27 October and peaks on 9 October. Its peak rate is low. The Epsilon Arietids appear from 12 to 23 October. Other meteor showers radiating from Aries include the October Delta Arietids, Daytime Epsilon Arietids, Daytime May Arietids, Sigma Arietids, Nu Arietids, and Beta Arietids. The Sigma Arietids, a class IV meteor shower, are visible from 12 to 19 October, with a maximum zenithal hourly rate of less than two meteors per hour on 19 October.
Planetary systems
Aries contains several stars with extrasolar planets. HIP 14810, a G5 type star, is orbited by three giant planets (those more than ten times the mass of Earth). HD 12661, like HIP 14810, is a G-type main sequence star, slightly larger than the Sun, with two orbiting planets. One planet is 2.3 times the mass of Jupiter, and the other is 1.57 times the mass of Jupiter. HD 20367 is a G0 type star, approximately the size of the Sun, with one orbiting planet. The planet, discovered in 2002, has a mass 1.07 times that of Jupiter and orbits every 500 days. In 2019, scientists conducting the CARMENES survey at the Calar Alto Observatory announced evidence of two Earth-mass exoplanets orbiting Teegarden's star, located in Aries, within its habitable zone. The star is a small red dwarf with only around a tenth of the mass and radius of the Sun. It has a large radial velocity.
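For planets such as HD 20367 b, whose orbital period is quoted above, Kepler's third law in solar units gives the approximate orbital distance. The stellar mass of one solar mass used below is an assumption for illustration (the text only says the star is approximately the size of the Sun).

```python
def semi_major_axis_au(period_days, stellar_mass_suns):
    """Kepler's third law in solar units: a^3 = M * P^2,
    with a in AU, P in years, and M in solar masses (planet mass neglected)."""
    period_years = period_days / 365.25
    return (stellar_mass_suns * period_years ** 2) ** (1.0 / 3.0)

# HD 20367 b: roughly 500-day period around an assumed 1-solar-mass star
print(f"a ~ {semi_major_axis_au(500, 1.0):.2f} AU")
```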
See also
Aries (Chinese astronomy)
References
Explanatory notes
Citations
Bibliography
Online sources
SIMBAD
External links
The Deep Photographic Guide to the Constellations: Aries
The clickable Aries
Star Tales – Aries
Warburg Institute Iconographic Database (medieval and early modern images of Aries)
Constellations
Constellations listed by Ptolemy
Northern constellations | Aries (constellation) | ["Astronomy"] | 5,509 | ["Constellations listed by Ptolemy", "Constellations", "Northern constellations", "Sky regions", "Aries (constellation)"] |
799 | https://en.wikipedia.org/wiki/Aquarius%20%28constellation%29 | Aquarius is an equatorial constellation of the zodiac, between Capricornus and Pisces. Its name is Latin for "water-carrier" or "cup-carrier", and its old astronomical symbol is (♒︎), a representation of water. Aquarius is one of the oldest of the recognized constellations along the zodiac (the Sun's apparent path). It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations. It is found in a region often called the Sea due to its profusion of constellations with watery associations such as Cetus the whale, Pisces the fish, and Eridanus the river.
At apparent magnitude 2.9, Beta Aquarii is the brightest star in the constellation.
History and mythology
Aquarius is identified as "The Great One" in the Babylonian star catalogues and represents the god Ea himself, who is commonly depicted holding an overflowing vase. The Babylonian star-figure appears on entitlement stones and cylinder seals from the second millennium. It contained the winter solstice in the Early Bronze Age. In Old Babylonian astronomy, Ea was the ruler of the southernmost quarter of the Sun's path, the "Way of Ea", corresponding to the period of 45 days on either side of winter solstice. Aquarius was also associated with the destructive floods that the Babylonians regularly experienced, and thus was negatively connoted. In ancient Egyptian astronomy, Aquarius was associated with the annual flood of the Nile; the banks were said to flood when Aquarius put his jar into the river, beginning spring.
In the Greek tradition, the constellation came to be represented simply as a single vase from which a stream poured down to Piscis Austrinus. The name in the Hindu zodiac is likewise kumbha "water-pitcher".
In Greek mythology, Aquarius is sometimes associated with Deucalion, the son of Prometheus who built a ship with his wife Pyrrha to survive an imminent flood. They sailed for nine days before washing ashore on Mount Parnassus. Aquarius is also sometimes identified with beautiful Ganymede, a youth in Greek mythology and the son of Trojan king Tros, who was taken to Mount Olympus by Zeus to act as cup-carrier to the gods. Neighboring Aquila represents the eagle, under Zeus' command, that snatched the young boy; some versions of the myth indicate that the eagle was in fact Zeus transformed. One tradition states that he was carried off by Eos. Yet another figure associated with the water bearer is Cecrops I, a king of Athens who sacrificed water instead of wine to the gods.
Depictions
In the second century, Ptolemy's Almagest established the common Western depiction of Aquarius. His water jar, an asterism itself, consists of Gamma, Pi, Eta, and Zeta Aquarii; it pours water in a stream of more than 20 stars terminating with Fomalhaut, now assigned solely to Piscis Austrinus. The water bearer's head is represented by 5th magnitude 25 Aquarii while his left shoulder is Beta Aquarii; his right shoulder and forearm are represented by Alpha and Gamma Aquarii respectively.
In Eastern astronomy
In Chinese astronomy, the stream of water flowing from the Water Jar was depicted as the "Army of Yu-Lin" (Yu-lim-kiun or Yulinjun, Hanzi: 羽林君). The name "Yu-lin" means "feathers and forests", referring to the numerous light-footed soldiers from the northern reaches of the empire represented by these faint stars. The constellation's stars were the most numerous of any Chinese constellation, numbering 45, the majority of which were located in modern Aquarius. The celestial army was protected by the wall Leibizhen (垒壁阵), which counted Iota, Lambda, Phi, and Sigma Aquarii among its 12 stars. 88, 89, and 98 Aquarii represent Fou-youe, the axes used as weapons and for hostage executions. Also in Aquarius is Loui-pi-tchin, the ramparts that stretch from 29 and 27 Piscium and 33 and 30 Aquarii through Phi, Lambda, Sigma, and Iota Aquarii to Delta, Gamma, Kappa, and Epsilon Capricorni. Similarly in the Hindu calendar Aquarius is depicted as Kumbha, and Kumbha, which means a pot or a jug, stands for the zodiac sign of Aquarius.
Near the border with Cetus, the axe Fuyue was represented by three stars; its position is disputed and may have instead been located in Sculptor. Tienliecheng also has a disputed position; the 13-star castle replete with ramparts may have possessed Nu and Xi Aquarii but may instead have been located south in Piscis Austrinus. The Water Jar asterism was seen to the ancient Chinese as the tomb, Fenmu. Nearby, the emperors' mausoleum Xiuliang stood, demarcated by Kappa Aquarii and three other collinear stars. Ku ("crying") and Qi ("weeping"), each composed of two stars, were located in the same region.
Three of the Chinese lunar mansions shared their name with constellations. Nu, also the name for the 10th lunar mansion, was a handmaiden represented by Epsilon, Mu, 3, and 4 Aquarii. The 11th lunar mansion shared its name with the constellation Xu ("emptiness"), formed by Beta Aquarii and Alpha Equulei; it represented a bleak place associated with death and funerals. Wei, the rooftop and 12th lunar mansion, was a V-shaped constellation formed by Alpha Aquarii, Theta Pegasi, and Epsilon Pegasi; it shared its name with two other Chinese constellations, in modern-day Scorpius and Aries.
Features
Stars
Despite both its prominent position on the zodiac and its large size, Aquarius has no particularly bright stars; its brightest star, Beta Aquarii, is only of apparent magnitude 2.9. (The apparent magnitude scale is reverse logarithmic, with increasingly bright objects having lower and lower, more negative, magnitudes.) Recent research has shown that there are several stars lying within its borders that possess planetary systems.
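The reverse-logarithmic magnitude scale means a difference of 5 magnitudes corresponds to a factor of 100 in brightness; a small helper makes the comparison concrete. The magnitude of Sirius, −1.46, is supplied here for context and is not taken from this article.

```python
def brightness_ratio(mag_a, mag_b):
    """How many times brighter an object of magnitude mag_a appears than one
    of magnitude mag_b (each 5-magnitude step is a factor of 100 in flux)."""
    return 10 ** (0.4 * (mag_b - mag_a))

# Beta Aquarii (about magnitude 2.9) compared with a barely naked-eye
# 6th-magnitude star, and with Sirius at magnitude -1.46:
print(f"beta Aqr vs 6th-magnitude star: {brightness_ratio(2.9, 6.0):.0f}x brighter")
print(f"Sirius vs beta Aqr:             {brightness_ratio(-1.46, 2.9):.0f}x brighter")
```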
The two brightest stars, α Aquarii and β Aquarii, are luminous yellow supergiants of spectral types G0Ib and G2Ib respectively that were once hot blue-white B-class main-sequence stars 5 to 9 times as massive as the Sun. The two are also moving through space perpendicular to the plane of the Milky Way. β Aquarii, which bears the proper name Sadalsuud, is the brightest star in Aquarius at apparent magnitude 2.9, only slightly brighter than α Aquarii. Having cooled and swollen to around 50 times the Sun's diameter, it is around 2200 times as luminous as the Sun; it is around 6.4 times as massive as the Sun and around 56 million years old. α Aquarii, also known as Sadalmelik, is around 6.5 times as massive as the Sun, 3000 times as luminous, and 53 million years old.
γ Aquarii, also called Sadachbia, is a white main-sequence star of spectral type A0V that is between 158 and 315 million years old, around 2.5 times the Sun's mass, and double its radius. Its apparent magnitude is 3.85. The name Sadachbia comes from the Arabic for "lucky stars of the tents", sa'd al-akhbiya.
δ Aquarii, also known as Skat or Scheat, is a blue-white star of spectral type A2 with an apparent magnitude of 3.27.
ε Aquarii, also known as Albali, is a blue-white star of spectral type A1 with an apparent magnitude of 3.77 and an absolute magnitude of 1.2.
ζ Aquarii is a double star of spectral type F2; both components are white. In combination, they appear to be of magnitude 3.6. The primary has a magnitude of 4.53 and the secondary a magnitude of 4.31. The system's orbital period is 760 years; currently the two components are moving farther apart.
θ Aquarii, sometimes called Ancha, is of spectral type G8 with an apparent magnitude of 4.16.
κ Aquarii is also called Situla.
λ Aquarii, also called Hudoor or Ekchusis, is of spectral type M2 with a magnitude of 3.74.
ξ Aquarii, also called Bunda, is of spectral type A7 with an apparent magnitude of 4.69.
π Aquarii, also called Seat, is of spectral type B0 with an apparent magnitude of 4.66.
Planetary systems
Twelve exoplanet systems have been found in Aquarius as of 2013. Gliese 876, one of the nearest stars to Earth at a distance of 15 light-years, was the first red dwarf star to be found to possess a planetary system. It is orbited by four planets, including one terrestrial planet 6.6 times the mass of Earth. The planets vary in orbital period from 2 days to 124 days. 91 Aquarii is an orange giant star orbited by one planet, 91 Aquarii b. The planet's mass is 2.9 times the mass of Jupiter, and its orbital period is 182 days. Gliese 849 is a red dwarf star orbited by the first known long-period Jupiter-like planet, Gliese 849 b. The planet's mass is 0.99 times that of Jupiter and its orbital period is 1,852 days.
There are also less-prominent systems in Aquarius. WASP-6, a type G8 star of magnitude 12.4, is host to one exoplanet, WASP-6 b. The star is 307 parsecs from Earth and has a mass of 0.888 solar masses and a radius of 0.87 solar radii. WASP-6 b was discovered in 2008 by the transit method. It orbits its parent star every 3.36 days at a distance of 0.042 astronomical units (AU). It is 0.503 Jupiter masses but has a proportionally larger radius of 1.224 Jupiter radii. HD 206610, a K0 star located 194 parsecs from Earth, is host to one planet, HD 206610 b. The host star is larger than the Sun; more massive at 1.56 solar masses and larger at 6.1 solar radii. The planet was discovered by the radial velocity method in 2010 and has a mass of 2.2 Jupiter masses. It orbits every 610 days at a distance of 1.68 AU. Much closer to its sun is WASP-47 b, which orbits every 4.15 days only 0.052 AU from its sun, yellow dwarf (G9V) WASP-47. WASP-47 is close in size to the Sun, having a radius of 1.15 solar radii and a mass even closer at 1.08 solar masses. WASP-47 b was discovered in 2011 by the transit method, like WASP-6 b. It is slightly larger than Jupiter with a mass of 1.14 Jupiter masses and a radius of 1.15 Jupiter radii.
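For transiting planets such as WASP-6 b, the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. The sketch below uses the radii quoted above; the Jupiter-to-Sun radius ratio is an assumed approximate constant.

```python
R_JUP_IN_R_SUN = 0.10045  # Jupiter's radius in solar radii (approximate)

def transit_depth(planet_radius_rjup, star_radius_rsun):
    """Fractional dip in starlight during a transit: (R_planet / R_star)^2."""
    ratio = planet_radius_rjup * R_JUP_IN_R_SUN / star_radius_rsun
    return ratio ** 2

# Radii quoted above for WASP-6 b (1.224 R_Jup) and WASP-6 (0.87 R_sun):
depth = transit_depth(1.224, 0.87)
print(f"transit depth ~ {depth:.1%}")  # roughly a 2% dimming
```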
There are several more single-planet systems in Aquarius. HD 210277, a magnitude 6.63 yellow star located 21.29 parsecs from Earth, is host to one known planet: HD 210277 b. The 1.23 Jupiter mass planet orbits at nearly the same distance as Earth orbits the Sun (1.1 AU), though its orbital period is significantly longer at around 442 days. HD 210277 b was discovered earlier than most of the other planets in Aquarius, detected by the radial velocity method in 1998. The star it orbits resembles the Sun beyond their similar spectral class; it has a radius of 1.1 solar radii and a mass of 1.09 solar masses. HD 212771 b, a larger planet at 2.3 Jupiter masses, orbits host star HD 212771 at a distance of 1.22 AU. The star itself, barely below the threshold of naked-eye visibility at magnitude 7.6, is a G8IV (yellow subgiant) star located 131 parsecs from Earth. Though it has a similar mass to the Sun (1.15 solar masses), it is significantly less dense, with a radius of 5 solar radii. Its lone planet was discovered in 2010 by the radial velocity method, like several other exoplanets in the constellation.
As of 2013, there were only two known multiple-planet systems within the bounds of Aquarius: the Gliese 876 and HD 215152 systems. The former is quite prominent; the latter has only two planets and has a host star farther away at 21.5 parsecs. The HD 215152 system consists of the planets HD 215152 b and HD 215152 c orbiting their K0-type, magnitude 8.13 sun. Both discovered in 2011 by the radial velocity method, the two tiny planets orbit very close to their host star. HD 215152 c is the larger at 0.0097 Jupiter masses (still significantly larger than the Earth, which weighs in at 0.00315 Jupiter masses); its smaller sibling is barely smaller at 0.0087 Jupiter masses. The error in the mass measurements is large enough to make this discrepancy statistically insignificant. HD 215152 c also orbits further from the star than HD 215152 b, 0.0852 AU compared to 0.0652 AU.
On 23 February 2017, NASA announced that ultracool dwarf star TRAPPIST-1 in Aquarius has seven Earth-like rocky planets. Of these, as many as four may lie within the system's habitable zone, and may have liquid water on their surfaces. The discovery of the TRAPPIST-1 system is seen by astronomers as a significant step toward finding life beyond Earth.
Deep sky objects
Because of its position away from the galactic plane, the majority of deep-sky objects in Aquarius are galaxies, globular clusters, and planetary nebulae. Aquarius contains three deep-sky objects that are in the Messier catalog: the globular clusters Messier 2 and Messier 72, and the asterism Messier 73. While M73 was originally catalogued as a sparsely populated open cluster, modern analysis indicates that its six main stars are not close enough together to fit this definition, reclassifying M73 as an asterism. Two well-known planetary nebulae are also located in Aquarius: the Saturn Nebula (NGC 7009), to the southeast of μ Aquarii; and the famous Helix Nebula (NGC 7293), southwest of δ Aquarii.
M2, also catalogued as NGC 7089, is a rich globular cluster located approximately 37,000 light-years from Earth. At magnitude 6.5, it is viewable in small-aperture instruments, but a 100 mm aperture telescope is needed to resolve any stars. M72, also catalogued as NGC 6981, is a small 9th magnitude globular cluster located approximately 56,000 light-years from Earth. M73, also catalogued as NGC 6994, is an open cluster with highly disputed status.
Aquarius is also home to several planetary nebulae. NGC 7009, also known as the Saturn Nebula, is an 8th magnitude planetary nebula located 3,000 light-years from Earth. It was given its moniker by the 19th century astronomer Lord Rosse for its resemblance to the planet Saturn in a telescope; it has faint protrusions on either side that resemble Saturn's rings. It appears blue-green in a telescope and has a central star of magnitude 11.3. Compared to the Helix Nebula, another planetary nebula in Aquarius, it is quite small. NGC 7293, also known as the Helix Nebula, is the closest planetary nebula to Earth at a distance of 650 light-years. It covers 0.25 square degrees, making it also the largest planetary nebula as seen from Earth. However, because it is so large, it is only viewable as a very faint object, though it has a fairly high integrated magnitude of 6.0.
One of the visible galaxies in Aquarius is NGC 7727, of particular interest for amateur astronomers who wish to discover or observe supernovae. A spiral galaxy (type S), it has an integrated magnitude of 10.7 and is 3 by 3 arcseconds. NGC 7252 is a tangle of stars resulting from the collision of two large galaxies and is known as the Atoms-for-Peace galaxy because of its resemblance to a cartoon atom.
Meteor showers
There are three major meteor showers with radiants in Aquarius: the Eta Aquariids, the Delta Aquariids, and the Iota Aquariids.
The Eta Aquariids are the strongest meteor shower radiating from Aquarius. The shower peaks between 5 and 6 May with a rate of approximately 35 meteors per hour. Originally discovered by Chinese astronomers in 401, the Eta Aquariids can be seen coming from the Water Jar beginning on 21 April and as late as 12 May. The parent body of the shower is Halley's Comet, a periodic comet. Fireballs are common shortly after the peak, approximately between 9 May and 11 May. The normal meteors appear to have yellow trails.
The Delta Aquariids is a double radiant meteor shower that peaks first on 29 July and second on 6 August. The first radiant is located in the south of the constellation, while the second radiant is located in the northern circlet of Pisces asterism. The southern radiant's peak rate is about 20 meteors per hour, while the northern radiant's peak rate is about 10 meteors per hour.
The Iota Aquariids is a fairly weak meteor shower that peaks on 6 August, with a rate of approximately 8 meteors per hour.
Astrology
At present, the Sun appears in the constellation Aquarius from 16 February to 12 March. In tropical astrology, the Sun is considered to be in the sign Aquarius from 20 January to 19 February, and in sidereal astrology, from 15 February to 14 March.
Aquarius is also associated with the Age of Aquarius, a concept popular in 1960s counterculture and in medieval alchemy. The date of the start of the Age of Aquarius is a topic of much debate.
Notes
See also
Aquarius (Chinese astronomy)
References
External links
The Deep Photographic Guide to the Constellations: Aquarius
The clickable Aquarius
Warburg Institute Iconographic Database (medieval and early modern images of Aquarius)
Constellations
Equatorial constellations
Constellations listed by Ptolemy | Aquarius (constellation) | [
"Astronomy"
] | 3,960 | [
"Constellations listed by Ptolemy",
"Aquarius (constellation)",
"Constellations",
"Sky regions",
"Equatorial constellations"
] |
840 | https://en.wikipedia.org/wiki/Axiom%20of%20choice | In mathematics, the axiom of choice, abbreviated AC or AoC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally put, the axiom of choice says that given any collection of sets, each containing at least one element, it is possible to construct a new set by choosing one element from each set, even if the collection is infinite. Formally, it states that for every indexed family of nonempty sets (one set for each index i in some index set I), there exists a family of elements, one for each index, such that the element chosen for index i belongs to the i-th set for every i in I. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem.
The axiom of choice is equivalent to the statement that every partition has a transversal.
In many cases, a set created by choosing elements can be made without invoking the axiom of choice, particularly if the number of sets from which to choose the elements is finite, or if a canonical rule on how to choose the elements is available — some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number, e.g. given the sets {{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}}, the set containing each smallest element is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets are collected from the natural numbers, it will always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. But no definite choice function is known for the collection of all non-empty subsets of the real numbers. In that case, the axiom of choice must be invoked.
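The "select the smallest number" rule can be written down explicitly, which is exactly why no appeal to the axiom is needed in this case. The following is a minimal Python sketch of that canonical choice function for nonempty sets of natural numbers (the function name is chosen here for illustration):

    # A canonical choice function for nonempty sets of natural numbers:
    # pick the least element of each set. No axiom of choice is needed,
    # because the rule itself singles out exactly one element per set.
    def choice(s):
        return min(s)

    collection = [{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}]
    print([choice(s) for s in collection])  # [4, 10, 1], as in the example above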
Bertrand Russell coined an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate collection (i.e. set) of shoes; this makes it possible to define a choice function directly. For an infinite collection of pairs of socks (assumed to have no distinguishing features such as being a left sock rather than a right sock), there is no obvious way to make a function that forms a set out of selecting one sock from each pair without invoking the axiom of choice.
Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, and is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced.
Statement
A choice function (also called selector or selection) is a function f, defined on a collection X of nonempty sets, such that for every set A in X, f(A) is an element of A. With this concept, the axiom can be stated:
For any collection X of nonempty sets, there exists a choice function f that is defined on X and maps each set of X to an element of that set.
Formally, this may be expressed as follows:
∀X [∅ ∉ X → ∃f: X → ⋃X ∀A ∈ X (f(A) ∈ A)]
Thus, the negation of the axiom may be expressed as the existence of a collection of nonempty sets which has no choice function. Formally, this may be derived making use of the logical equivalence of ¬∀X [P(X) → Q(X)] to ∃X [P(X) ∧ ¬Q(X)], taking P(X) to mean "X is a collection of nonempty sets" and Q(X) to mean "X has a choice function".
Each choice function on a collection X of nonempty sets is an element of the Cartesian product of the sets in X. This is not the most general situation of a Cartesian product of a family of sets, where a given set can occur more than once as a factor; however, one can focus on elements of such a product that select the same element every time a given set appears as factor, and such elements correspond to an element of the Cartesian product of all distinct sets in the family. The axiom of choice asserts the existence of such elements; it is therefore equivalent to:
Given any family of nonempty sets, their Cartesian product is a nonempty set.
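For finite families, the correspondence between choice functions and elements of the Cartesian product can be seen concretely. The sketch below uses Python's itertools purely to illustrate the finite case (where no axiom is needed); each tuple of the product picks one element from each set.

    from itertools import product

    family = [{4, 5, 6}, {10, 12}, {1, 400}]
    tuples = list(product(*family))
    print(len(tuples) > 0)   # True: the product of nonempty finite sets is nonempty
    print(tuples[0])         # one particular choice of an element from each set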
Nomenclature
In this article and other discussions of the Axiom of Choice the following abbreviations are common:
AC – the Axiom of Choice. More rarely, AoC is used.
ZF – Zermelo–Fraenkel set theory omitting the Axiom of Choice.
ZFC – Zermelo–Fraenkel set theory, extended to include the Axiom of Choice.
Variants
There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it.
One variation avoids the use of choice functions by, in effect, replacing each choice function with its range:
Given any set X, if the empty set is not an element of X and the elements of X are pairwise disjoint, then there exists a set C such that its intersection with any of the elements of X contains exactly one element.
This can be formalized in first-order logic as:
∀x (
∃o (o ∈ x ∧ ¬∃n (n ∈ o)) ∨
∃a ∃b ∃c (a ∈ x ∧ b ∈ x ∧ c ∈ a ∧ c ∈ b ∧ ¬(a = b)) ∨
∃c ∀e (e ∈ x → ∃a (a ∈ e ∧ a ∈ c ∧ ∀b ((b ∈ e ∧ b ∈ c) → a = b))))
Note that P ∨ Q ∨ R is logically equivalent to (¬P ∧ ¬Q) → R.
In English, this first-order sentence reads:
Given any set X,
X contains the empty set as an element or
the elements of X are not pairwise disjoint or
there exists a set C such that its intersection with any of the elements of X contains exactly one element.
This guarantees for any partition of a set X the existence of a subset C of X containing exactly one element from each part of the partition.
Another equivalent axiom only considers collections X that are essentially powersets of other sets:
For any set A, the power set of A (with the empty set removed) has a choice function.
Authors who use this formulation often speak of the choice function on A, but this is a slightly different notion of choice function. Its domain is the power set of A (with the empty set removed), and so makes sense for any set A, whereas with the definition used elsewhere in this article, the domain of a choice function on a collection of sets is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as
Every set has a choice function.
which is equivalent to
For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B.
The negation of the axiom can thus be expressed as:
There is a set A such that for all functions f (on the set of non-empty subsets of A), there is a B such that f(B) does not lie in B.
Restriction to finite sets
The usual statement of the axiom of choice does not specify whether the collection of nonempty sets is finite or infinite, and thus implies that every finite collection of nonempty sets has a choice function. However, that particular case is a theorem of the Zermelo–Fraenkel set theory without the axiom of choice (ZF); it is easily proved by the principle of finite induction. In the even simpler case of a collection of one set, a choice function just corresponds to an element, so this instance of the axiom of choice says that every nonempty set has an element; this holds trivially. The axiom of choice can be seen as asserting the generalization of this property, already evident for finite collections, to arbitrary collections.
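As an informal illustration of why the finite case needs no extra axiom (a sketch only; the actual argument is carried out by finite induction inside ZF, not in a programming language), a choice function for finitely many nonempty sets can be assembled by handling the sets one at a time. The function name below is chosen for illustration.

    # For a finite collection of nonempty sets, a choice function can be built
    # by plain iteration -- the programming analogue of the finite induction
    # mentioned above. No canonical rule is needed; any element of each set will do.
    def finite_choice(collection):
        chosen = {}
        for i, s in enumerate(collection):
            chosen[i] = next(iter(s))   # take some element of the i-th set
        return chosen

    print(finite_choice([{"a", "b"}, {3}, {(1, 2), (3, 4)}]))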
Usage
Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set X contains only non-empty sets, a mathematician might have said "let F(s) be one of the members of s for all s in X" to define a function F. In general, it is impossible to prove that F exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo.
Examples
The nature of the individual nonempty sets in the collection may make it possible to avoid the axiom of choice even for certain infinite collections. For example, suppose that each member of the collection X is a nonempty subset of the natural numbers. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set, and makes it unnecessary to add the axiom of choice to our axioms of set theory.
The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our selection forms a legitimate set (as defined by the other ZF axioms of set theory)? For example, suppose that X is the set of all non-empty subsets of the real numbers. First we might try to proceed as if X were finite. If we try to choose an element from each set, then, because X is infinite, our choice procedure will never come to an end, and consequently we shall never be able to produce a choice function for all of X. Next we might try specifying the least element from each set. But some subsets of the real numbers do not have least elements. For example, the open interval (0,1) does not have a least element: if x is in (0,1), then so is x/2, and x/2 is always strictly smaller than x. So this attempt also fails.
Additionally, consider for instance the unit circle S, and the action on S by a group G consisting of all rational rotations, that is, rotations by angles which are rational multiples of π. Here G is countable while S is uncountable. Hence S breaks up into uncountably many orbits under G. Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset X of S with the property that all of its translates by G are disjoint from X. The set of those translates partitions the circle into a countable collection of pairwise disjoint sets, which are all pairwise congruent. Since X is not measurable for any rotation-invariant countably additive finite measure on S, finding an algorithm to form a set from selecting a point in each orbit requires that one add the axiom of choice to our axioms of set theory. See non-measurable set for more details.
In classical arithmetic, the natural numbers are well-ordered: for every nonempty subset of the natural numbers, there is a unique least element under the natural ordering. In this way, one may specify an element from any given nonempty subset. One might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice holds.
Criticism and acceptance
A proof requiring the axiom of choice may establish the existence of an object without explicitly defining the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no individual well-ordering of the reals is definable. Similarly, although a subset of the real numbers that is not Lebesgue measurable can be proved to exist using the axiom of choice, it is consistent that no such set is definable.
The axiom of choice asserts the existence of these intangibles (objects that are proved to exist, but which cannot be explicitly constructed), which may conflict with some philosophical principles. Because there is no canonical well-ordering of all sets, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). This has been used as an argument against the use of the axiom of choice.
Another argument against the axiom of choice is that it implies the existence of objects that may seem counterintuitive. One example is the Banach–Tarski paradox, which says that it is possible to decompose the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are non-measurable sets.
Moreover, paradoxical consequences of the axiom of choice for the no-signaling principle in physics have recently been pointed out.
Despite these seemingly paradoxical results, most mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics. But the debate is interesting enough that it is considered notable when a theorem in ZFC (ZF plus AC) is logically equivalent (with just the ZF axioms) to the axiom of choice, and mathematicians look for results that require the axiom of choice to be false, though this type of deduction is less common than the type that requires the axiom of choice to be true.
Theorems of ZF hold true in any model of that theory, regardless of the truth or falsity of the axiom of choice in that particular model. The implications of choice below, including weaker versions of the axiom itself, are listed because they are not theorems of ZF. The Banach–Tarski paradox, for example, is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Such statements can be rephrased as conditional statements—for example, "If AC holds, then the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice.
In constructive mathematics
As discussed above, in the classical theory of ZFC, the axiom of choice enables nonconstructive proofs in which the existence of a type of object is proved without an explicit instance being constructed. In fact, in set theory and topos theory, Diaconescu's theorem shows that the axiom of choice implies the law of excluded middle. The principle is thus not available in constructive set theory, where non-classical logic is employed.
The situation is different when the principle is formulated in Martin-Löf type theory. There and in higher-order Heyting arithmetic, the appropriate statement of the axiom of choice is (depending on the approach) included as an axiom or provable as a theorem. A cause for this difference is that the axiom of choice in type theory does not have the extensionality properties that the axiom of choice in constructive set theory does. The type-theoretical context is discussed further below.
Different choice principles have been thoroughly studied in constructive contexts, and their status varies between the different schools and varieties of constructive mathematics.
Some results in constructive set theory use the axiom of countable choice or the axiom of dependent choice, which do not imply the law of the excluded middle. Errett Bishop, who is notable for developing a framework for constructive analysis, argued that an axiom of choice was constructively acceptable, on the grounds that a choice is already implied by the very meaning of constructive existence.
Although the axiom of countable choice in particular is commonly used in constructive mathematics, its use has also been questioned.
Independence
It has been known since as early as 1922 that the axiom of choice may fail in a variant of ZF with urelements, through the technique of permutation models introduced by Abraham Fraenkel and developed further by Andrzej Mostowski. The basic technique can be illustrated as follows: Let xn and yn be distinct urelements for n = 1, 2, 3, ..., and build a model where each set is symmetric under the interchange xn ↔ yn for all but a finite number of n. Then the set X = {{x1, y1}, {x2, y2}, {x3, y3}, ...} can be in the model but sets such as {x1, x2, x3, ...} cannot, and thus X cannot have a choice function.
In 1938, Kurt Gödel showed that the negation of the axiom of choice is not a theorem of ZF by constructing an inner model (the constructible universe) that satisfies ZFC, thus showing that ZFC is consistent if ZF itself is consistent. In 1963, Paul Cohen employed the technique of forcing, developed for this purpose, to show that, assuming ZF is consistent, the axiom of choice itself is not a theorem of ZF. He did this by constructing a much more complex model that satisfies ZF¬C (ZF with the negation of AC added as axiom) and thus showing that ZF¬C is consistent. Cohen's model is a symmetric model, which is similar to permutation models, but uses "generic" subsets of the natural numbers (justified by forcing) in place of urelements.
Together these results establish that the axiom of choice is logically independent of ZF. The assumption that ZF is consistent is harmless because adding another axiom to an already inconsistent system cannot make the situation worse. Because of independence, the decision whether to use the axiom of choice (or its negation) in a proof cannot be made by appeal to other axioms of set theory. It must be made on other grounds.
One argument in favor of using the axiom of choice is that it is convenient because it allows one to prove some simplifying propositions that otherwise could not be proved. Many theorems provable using choice are of an elegant general character: the cardinalities of any two sets are comparable, every nontrivial ring with unity has a maximal ideal, every vector space has a basis, every connected graph has a spanning tree, and every product of compact spaces is compact, among many others. Frequently, the axiom of choice allows generalizing a theorem to "larger" objects. For example, it is provable without the axiom of choice that every vector space of finite dimension has a basis, but the generalization to all vector spaces requires the axiom of choice. Likewise, a finite product of compact spaces can be proven to be compact without the axiom of choice, but the generalization to infinite products (Tychonoff's theorem) requires the axiom of choice.
The proof of the independence result also shows that a wide class of mathematical statements, including all statements that can be phrased in the language of Peano arithmetic, are provable in ZF if and only if they are provable in ZFC. Statements in this class include the statement that P = NP, the Riemann hypothesis, and many other unsolved mathematical problems. When attempting to solve problems in this class, it makes no difference whether ZF or ZFC is employed if the only question is the existence of a proof. It is possible, however, that there is a shorter proof of a theorem from ZFC than from ZF.
The axiom of choice is not the only significant statement that is independent of ZF. For example, the generalized continuum hypothesis (GCH) is not only independent of ZF, but also independent of ZFC. However, ZF plus GCH implies AC, making GCH a strictly stronger claim than AC, even though they are both independent of ZF.
Stronger axioms
The axiom of constructibility and the generalized continuum hypothesis each imply the axiom of choice and so are strictly stronger than it. In class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory, there is an axiom called the axiom of global choice that is stronger than the axiom of choice for sets because it also applies to proper classes. The axiom of global choice follows from the axiom of limitation of size. Tarski's axiom, which is used in Tarski–Grothendieck set theory and states (in the vernacular) that every set belongs to a Grothendieck universe, is stronger than the axiom of choice.
Equivalents
There are important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice. The most important among them are Zorn's lemma and the well-ordering theorem. In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering theorem.
Set theory
Tarski's theorem about choice: For every infinite set A, there is a bijective map between the sets A and A×A.
Trichotomy: If two sets are given, then either they have the same cardinality, or one has a smaller cardinality than the other.
Given two non-empty sets, one has a surjection to the other.
Every surjective function has a right inverse.
The Cartesian product of any family of nonempty sets is nonempty. In other words, every family of nonempty sets has a choice function (i.e. a function which maps each of the nonempty sets to one of its elements).
König's theorem: Colloquially, the sum of a sequence of cardinals is strictly less than the product of a sequence of larger cardinals. (The reason for the term "colloquially" is that the sum or product of a "sequence" of cardinals cannot itself be defined without some aspect of the axiom of choice.)
Well-ordering theorem: Every set can be well-ordered. Consequently, every cardinal has an initial ordinal.
Zorn's lemma: Every non-empty partially ordered set in which every chain (i.e., totally ordered subset) has an upper bound contains at least one maximal element.
Hausdorff maximal principle: Every partially ordered set has a maximal chain. Equivalently, in any partially ordered set, every chain can be extended to a maximal chain.
Tukey's lemma: Every non-empty collection of finite character has a maximal element with respect to inclusion.
Antichain principle: Every partially ordered set has a maximal antichain. Equivalently, in any partially ordered set, every antichain can be extended to a maximal antichain.
The powerset of any ordinal can be well-ordered.
Abstract algebra
Every vector space has a basis (i.e., a linearly independent spanning subset). In other words, vector spaces are equivalent to free modules.
Krull's theorem: Every unital ring (other than the trivial ring) contains a maximal ideal. Equivalently, in any nontrivial unital ring, every ideal can be extended to a maximal ideal.
For every non-empty set S there is a binary operation defined on S that gives it a group structure. (A cancellative binary operation is enough, see group structure and the axiom of choice.)
Every free abelian group is projective.
Baer's criterion: Every divisible abelian group is injective.
Every set is a projective object in the category Set of sets.
Functional analysis
The closed unit ball of the dual of a normed vector space over the reals has an extreme point.
Point-set topology
The Cartesian product of any family of connected topological spaces is connected.
Tychonoff's theorem: The Cartesian product of any family of compact topological spaces is compact.
In the product topology, the closure of a product of subsets is equal to the product of the closures.
Mathematical logic
If S is a set of sentences of first-order logic and B is a consistent subset of S, then B is included in a set that is maximal among consistent subsets of S. The special case where S is the set of all first-order sentences in a given signature is weaker, equivalent to the Boolean prime ideal theorem; see the section "Weaker forms" below.
Graph theory
Every connected graph has a spanning tree. Equivalently, every nonempty graph has a spanning forest.
Category theory
Several results in category theory invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above.
Examples of category-theoretic statements which require choice include:
Every small category has a skeleton.
If two small categories are weakly equivalent, then they are equivalent.
Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left-adjoint (the Freyd adjoint functor theorem).
Weaker forms
There are several weaker statements that are not equivalent to the axiom of choice but are closely related. One example is the axiom of dependent choice (DC). A still weaker example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable set of nonempty sets. These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the full axiom of choice.
Given an ordinal parameter α ≥ ω+2 — for every set S with rank less than α, S is well-orderable. Given an ordinal parameter α ≥ 1 — for every set S with Hartogs number less than ωα, S is well-orderable. As the ordinal parameter is increased, these approximate the full axiom of choice more and more closely.
Other choice axioms weaker than axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization. The former is equivalent in ZF to Tarski's 1930 ultrafilter lemma: every filter is a subset of some ultrafilter.
Results requiring AC (or weaker forms) but weaker than it
One of the most interesting aspects of the axiom of choice is the large number of places in mathematics where it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF.
Set theory
The ultrafilter lemma (with ZF) can be used to prove the Axiom of choice for finite sets: given any family of non-empty finite sets, their product is not empty.
The union of any countable family of countable sets is countable (this requires countable choice but not the full axiom of choice).
If the set A is infinite, then there exists an injection from the natural numbers N to A (see Dedekind infinite).
Eight definitions of a finite set are equivalent.
Every infinite game in which the payoff set is a Borel subset of Baire space is determined.
Every infinite cardinal κ satisfies 2×κ = κ.
Measure theory
The Vitali theorem on the existence of non-measurable sets, which states that there exists a subset of the real numbers that is not Lebesgue measurable.
There exist Lebesgue-measurable subsets of the real numbers that are not Borel sets. That is, the Borel σ-algebra on the real numbers (which is generated by all real intervals) is strictly included in the Lebesgue-measure σ-algebra on the real numbers.
The Hausdorff paradox.
The Banach–Tarski paradox.
Algebra
Every field has an algebraic closure.
Every field extension has a transcendence basis.
Every infinite-dimensional vector space contains an infinite linearly independent subset (this requires dependent choice, but not the full axiom of choice).
Stone's representation theorem for Boolean algebras needs the Boolean prime ideal theorem.
The Nielsen–Schreier theorem, that every subgroup of a free group is free.
The additive groups of R and C are isomorphic.
Functional analysis
The Hahn–Banach theorem in functional analysis, allowing the extension of linear functionals.
The theorem that every Hilbert space has an orthonormal basis.
The Banach–Alaoglu theorem about compactness of sets of functionals.
The Baire category theorem about complete metric spaces, and its consequences, such as the open mapping theorem and the closed graph theorem.
On every infinite-dimensional topological vector space there is a discontinuous linear map.
General topology
A uniform space is compact if and only if it is complete and totally bounded.
Every Tychonoff space has a Stone–Čech compactification.
Mathematical logic
Gödel's completeness theorem for first-order logic: every consistent set of first-order sentences has a completion. That is, every consistent set of first-order sentences can be extended to a maximal consistent set.
The compactness theorem: If every finite subset of a set of first-order (or alternatively, zero-order) sentences has a model, then the set itself has a model.
Possibly equivalent implications of AC
There are several historically important set-theoretic statements implied by AC whose equivalence to AC is open. Zermelo cited the partition principle (PP), which was formulated before AC itself, as a justification for believing AC. In 1906, Russell declared PP to be equivalent, but whether the partition principle implies AC is the oldest open problem in set theory, and the equivalences of the other statements are similarly hard old open problems. In every known model of ZF where choice fails, these statements fail too, but it is unknown whether they can hold without choice.
Set theory
Partition principle: if there is a surjection from A to B, there is an injection from B to A. Equivalently, every partition P of a set S is less than or equal to S in size.
Converse Schröder–Bernstein theorem: if two sets have surjections to each other, they are equinumerous.
Weak partition principle: if there is an injection and a surjection from A to B, then A and B are equinumerous. Equivalently, a partition of a set S cannot be strictly larger than S. If WPP holds, this already implies the existence of a non-measurable set. Each of the previous three statements is implied by the preceding one, but it is unknown if any of these implications can be reversed.
There is no infinite decreasing sequence of cardinals. The equivalence was conjectured by Schoenflies in 1905.
Abstract algebra
Hahn embedding theorem: Every ordered abelian group G order-embeds as a subgroup of the additive group R^Ω endowed with a lexicographical order, where Ω is the set of Archimedean equivalence classes of G. This equivalence was conjectured by Hahn in 1907.
Stronger forms of the negation of AC
If we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, which asserts only the nonexistence of a choice function, perhaps for no more than a single collection of nonempty sets. Strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC + BP is consistent, if ZF is.
It is also consistent with ZF + DC that every set of reals is Lebesgue measurable, but this consistency result, due to Robert M. Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals).
Quine's system of axiomatic set theory, New Foundations (NF), takes its name from the title ("New Foundations for Mathematical Logic") of the 1937 article that introduced it. In the NF axiomatic system, the axiom of choice can be disproved.
Statements implying the negation of AC
There are models of Zermelo-Fraenkel set theory in which the axiom of choice is false. We shall abbreviate "Zermelo-Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to validate the negation of some standard ZFC theorems. As any model of ZF¬C is also a model of ZF, it is the case that for each of the following statements, there exists a model of ZF in which that statement is true.
The negation of the weak partition principle: There is a set that can be partitioned into strictly more equivalence classes than the original set has elements, and a function whose domain is strictly smaller than its range. In fact, this is the case in all known models.
There is a function f from the real numbers to the real numbers such that f is not continuous at a, but f is sequentially continuous at a, i.e., for any sequence {xn} converging to a, limn f(xn)=f(a).
There is an infinite set of real numbers without a countably infinite subset.
The real numbers are a countable union of countable sets. This does not imply that the real numbers are countable: As pointed out above, to show that a countable union of countable sets is itself countable requires the Axiom of countable choice.
There is a field with no algebraic closure.
In all models of ZF¬C there is a vector space with no basis.
There is a vector space with two bases of different cardinalities.
There is a free complete Boolean algebra on countably many generators.
There is a set that cannot be linearly ordered.
There exists a model of ZF¬C in which every set in Rn is measurable. Thus it is possible to exclude counterintuitive results like the Banach–Tarski paradox which are provable in ZFC. Furthermore, this is possible whilst assuming the Axiom of dependent choice, which is weaker than AC but sufficient to develop most of real analysis.
In all models of ZF¬C, the generalized continuum hypothesis does not hold.
For proofs, see .
Additionally, by imposing definability conditions on sets (in the sense of descriptive set theory) one can often prove restricted versions of the axiom of choice from axioms incompatible with general choice. This appears, for example, in the Moschovakis coding lemma.
Axiom of choice in type theory
In type theory, a different kind of statement is known as the axiom of choice. This form begins with two types, σ and τ, and a relation R between objects of type σ and objects of type τ. The axiom of choice states that if for each x of type σ there exists a y of type τ such that R(x,y), then there is a function f from objects of type σ to objects of type τ such that R(x,f(x)) holds for all x of type σ:
(∀x:σ)(∃y:τ) R(x,y) → (∃f:σ→τ)(∀x:σ) R(x,f(x))
Unlike in set theory, the axiom of choice in type theory is typically stated as an axiom scheme, in which R varies over all formulas or over all formulas of a particular logical form.
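To see why this form of choice is provable in Martin-Löf-style type theories, note that a proof of the hypothesis already carries, for each x, a witness y together with a proof of R(x,y); the choice function simply projects out that witness. The following is a minimal Lean 4 sketch under that reading, using a Σ-type in place of the existential (the definition name is chosen here for illustration and is not from any particular source):

    -- Type-theoretic choice: if every x : σ comes packaged with a witness y and a
    -- proof of R x y, then extracting the witness gives the choice function.
    def typeTheoreticChoice {σ τ : Type} (R : σ → τ → Prop)
        (h : (x : σ) → PSigma (fun y : τ => R x y)) :
        PSigma (fun f : σ → τ => ∀ x : σ, R x (f x)) :=
      ⟨fun x => (h x).1, fun x => (h x).2⟩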
Notes
References
Per Martin-Löf, "100 years of Zermelo's axiom of choice: What was the problem with it?", in Logicism, Intuitionism, and Formalism: What Has Become of Them?, Sten Lindström, Erik Palmgren, Krister Segerberg, and Viggo Stoltenberg-Hansen, editors (2008).
, available as a Dover Publications reprint, 2013, .
Herman Rubin, Jean E. Rubin: Equivalents of the axiom of choice. North Holland, 1963. Reissued by Elsevier, April 1970. .
Herman Rubin, Jean E. Rubin: Equivalents of the Axiom of Choice II. North Holland/Elsevier, July 1985, .
George Tourlakis, Lectures in Logic and Set Theory. Vol. II: Set Theory, Cambridge University Press, 2003.
Ernst Zermelo, "Untersuchungen über die Grundlagen der Mengenlehre I," Mathematische Annalen 65: (1908) pp. 261–81. PDF download via digizeitschriften.de
Translated in: Jean van Heijenoort, 2002. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. New edition. Harvard University Press.
1904. "Proof that every set can be well-ordered," 139-41.
1908. "Investigations in the foundations of set theory I," 199–215.
External links
Axiom of Choice entry in the Springer Encyclopedia of Mathematics.
Axiom of Choice and Its Equivalents entry at ProvenMath. Includes formal statement of the Axiom of Choice, Hausdorff's Maximal Principle, Zorn's Lemma and formal proofs of their equivalence down to the finest detail.
Consequences of the Axiom of Choice, based on the book by Paul Howard and Jean Rubin.
. | Axiom of choice | [
"Mathematics"
] | 8,023 | [
"Axiom of choice",
"Axioms of set theory",
"Mathematical axioms"
] |
851 | https://en.wikipedia.org/wiki/Alfred%20Nobel | Alfred Bernhard Nobel ( , ; 21 October 1833 – 10 December 1896) was a Swedish chemist, inventor, engineer and businessman. He is known for inventing dynamite as well as having bequeathed his fortune to establish the Nobel Prizes. He also made several other important contributions to science, holding 355 patents during his life.
A member of the prominent Nobel family, Nobel displayed an early aptitude for science and learning, particularly in chemistry and languages; he became fluent in six languages and filed his first patent at the age of 24. He embarked on many business ventures with his family, most notably owning the company Bofors, which was an iron and steel producer that he had developed into a major manufacturer of cannons and other armaments. Nobel's most famous invention, dynamite, was an explosive using nitroglycerin that was patented in 1867. He further invented gelignite in 1875 and ballistite in 1887.
Upon his death, Nobel donated his fortune to a foundation to fund Nobel Prizes, which annually recognize those who "conferred the greatest benefit to humankind". The synthetic element nobelium was named after him, and his name and legacy also survive in companies such as Dynamit Nobel and AkzoNobel, which descend from mergers with companies he founded. Nobel was elected a member of the Royal Swedish Academy of Sciences, which, pursuant to his will, would be responsible for choosing the Nobel laureates in physics and in chemistry.
Biography
Early life and education
Alfred Nobel was born in Stockholm, Sweden on 21 October 1833. He was the third son of Immanuel Nobel (1801–1872), an inventor and engineer, and Andriette Nobel (née Ahlsell 1805–1889). The couple married in 1827 and had eight children. The family was impoverished and only Alfred and his three brothers survived beyond childhood. Through his father, Alfred Nobel was a descendant of the Swedish scientist Olaus Rudbeck (1630–1702). Nobel's father was an alumnus of Royal Institute of Technology in Stockholm and was an engineer and inventor who built bridges and buildings and experimented with different ways of blasting rocks. He encouraged and taught Nobel from a young age.
Following various business failures caused by the loss of some barges of building material, Immanuel Nobel was forced into bankruptcy. He moved to Saint Petersburg, Russia, and grew successful there as a manufacturer of machine tools and explosives. He invented the veneer lathe, which made possible the production of modern plywood, and started work on the naval mine. In 1842, the family joined him in the city. Now prosperous, his parents were able to send Nobel to private tutors, and the boy excelled in his studies, particularly in chemistry and languages, achieving fluency in English, French, German, and Russian. For 18 months, from 1841 to 1842, Nobel attended the Jacobs Apologistic School in Stockholm, his only schooling; he never attended university.
Nobel gained proficiency in Swedish, French, Russian, English, German, and Italian. He also developed sufficient literary skill to write poetry in English. His Nemesis is a prose tragedy in four acts about the Italian noblewoman Beatrice Cenci. It was printed while he was dying, but the entire stock was destroyed immediately after his death except for three copies, as the work was regarded as scandalous and blasphemous. It was published in Sweden in 2003 and has been translated into Slovenian, French, Italian, and Spanish.
Scientific career
As a young man, Nobel studied with chemist Nikolai Zinin; then, in 1850, he went to Paris to further the work. There he met Ascanio Sobrero, who had synthesized nitroglycerin three years before. Sobrero strongly opposed the use of nitroglycerin because it was unpredictable, exploding when subjected to variable heat or pressure. But Nobel became interested in finding a way to control and use nitroglycerin as a commercially usable explosive; it had much more power than gunpowder. In 1851, at age 18, he went to the United States for one year to study, working for a short period under Swedish-American inventor John Ericsson, who designed the American Civil War ironclad USS Monitor. Nobel filed his first patent, an English patent for a gas meter, in 1857, while his first Swedish patent, which he received in 1863, was on "ways to prepare gunpowder". The family factory produced armaments for the Crimean War (1853–1856), but had difficulty switching back to regular domestic production when the fighting ended, and the firm filed for bankruptcy. In 1859, Nobel's father left his factory in the care of the second son, Ludvig Nobel (1831–1888), who greatly improved the business. Nobel and his parents returned to Sweden from Russia and Nobel devoted himself to the study of explosives, and especially to the safe manufacture and use of nitroglycerin. Nobel invented a detonator in 1863, and in 1865 designed the blasting cap.
On 3 September 1864, a shed used for preparation of nitroglycerin exploded at the factory in Heleneborg, Stockholm, Sweden, killing five people, including Nobel's younger brother Emil. He was then deprived of his license to produce explosives. Fazed by the accident, Nobel founded the company Nitroglycerin AB in Vinterviken so that he could continue to work in a more isolated area. Nobel invented dynamite in 1867, a substance easier and safer to handle than the more unstable nitroglycerin. Dynamite was patented in the US and the UK and was used extensively in mining and the building of transport networks internationally. In 1875, Nobel invented gelignite, more stable and powerful than dynamite, and in 1887, patented ballistite, a predecessor of cordite.
Nobel was elected a member of the Royal Swedish Academy of Sciences in 1884, the same institution that would later select laureates for two of the Nobel prizes, and he received an honorary doctorate from Uppsala University in 1893. Nobel's brothers Ludvig and Robert founded the oil company Branobel and became hugely rich in their own right. Nobel invested in these and amassed great wealth through the development of these new oil regions. It operated mainly in Baku, Azerbaijan, but also in Cheleken, Turkmenistan. During his life, Nobel was issued 355 patents internationally, and by his death, his business had established more than 90 armaments factories, despite his apparently pacifist character.
Inventions
Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like kieselguhr (diatomaceous earth) it became safer and more convenient to handle, and this mixture he patented in 1867 as "dynamite". Nobel demonstrated his explosive for the first time that year, at a quarry in Redhill, Surrey, England. In order to help reestablish his name and improve the image of his business from the earlier controversies associated with dangerous explosives, Nobel had also considered naming the highly powerful substance "Nobel's Safety Powder", which is the text used in his patent, but settled with Dynamite instead, referring to the Greek word for "power" (dynamis).
Nobel later combined nitroglycerin with various nitrocellulose compounds, similar to collodion, but settled on a more efficient recipe combining another nitrate explosive, and obtained a transparent, jelly-like substance, which was a more powerful explosive than dynamite. Gelignite, or blasting gelatin, as it was named, was patented in 1876; and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances. Gelignite was more stable, powerful, transportable and conveniently formed to fit into bored holes, like those used in drilling and mining, than the previously used compounds. It was adopted as the standard technology for mining in the "Age of Engineering", bringing Nobel a great amount of financial success, though at a cost to his health. An offshoot of this research resulted in Nobel's invention of ballistite, the precursor of many modern smokeless powder explosives and still used as a rocket propellant.
Nobel Prize
There is a well known story about the origin of the Nobel Prize, although historians have been unable to verify it and some dismiss the story as a myth. In 1888, the death of his brother Ludvig supposedly caused several newspapers to publish obituaries of Alfred in error. One French newspaper condemned him for his invention of military explosives—in many versions of the story, dynamite is quoted, although this was mainly used for civilian applications—and this is said to have brought about his decision to leave a better legacy after his death. The obituary stated, ("The merchant of death is dead"), and went on to say, "Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday." Nobel read the obituary and was appalled at the idea that he would be remembered in this way. His decision to posthumously donate the majority of his wealth to found the Nobel Prize has been credited to him wanting to leave behind a better legacy. However, it has been questioned whether or not the obituary in question actually existed.
On 27 November 1895, at the Swedish-Norwegian Club in Paris, Nobel signed his last will and testament and set aside the bulk of his estate to establish the Nobel Prizes, to be awarded annually without distinction of nationality. After taxes and bequests to individuals, Nobel's will allocated 94% of his total assets, 31,225,000 Swedish kronor, to establish the five Nobel Prizes. By 2022, the foundation had approximately 6 billion Swedish Kronor of invested capital.
The first three of these prizes are awarded for eminence in physical science, in chemistry and in medical science or physiology; the fourth is for literary work "in an ideal direction" and the fifth prize is to be given to the person or society that renders the greatest service to the cause of international fraternity, in the suppression or reduction of standing armies, or in the establishment or furtherance of peace congresses.
The formulation for the literary prize, being given for a work "in an ideal direction", is cryptic and has caused much confusion. For many years, the Swedish Academy interpreted "ideal" as "idealistic" and used it as a reason not to give the prize to important but less romantic authors, such as Henrik Ibsen and Leo Tolstoy. This interpretation has since been revised, and the prize has been awarded to, for example, Dario Fo and José Saramago, who do not belong to the camp of literary idealism.
There was room for interpretation by the bodies he had named for deciding on the physical sciences and chemistry prizes, given that he had not consulted them before making the will. In his one-page testament, he stipulated that the money go to discoveries or inventions in the physical sciences and to discoveries or improvements in chemistry. He had opened the door to technological awards, but had not left instructions on how to deal with the distinction between science and technology. Since the deciding bodies he had chosen were more concerned with the former, the prizes went to scientists more often than engineers, technicians or other inventors.
Sweden's central bank Sveriges Riksbank celebrated its 300th anniversary in 1968 by donating a large sum of money to the Nobel Foundation to be used to set up a sixth prize in the field of economics in honor of Alfred Nobel. In 2001, Alfred Nobel's great-great-nephew, Peter Nobel (born 1931), asked the Bank of Sweden to differentiate its award to economists given "in Alfred Nobel's memory" from the five other awards. This request added to the controversy over whether the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel is actually a legitimate "Nobel Prize".
Health issues and death
In his letters to his mistress, Hess, Nobel described constant pain, debilitating migraines, and "paralyzing" fatigue, leading some to believe that he suffered from fibromyalgia. However, his concerns at the time were dismissed as hypochondria, leading to further depression.
By 1895, Nobel had developed angina pectoris.
On 27 November 1895, he finalized his will and testament, leaving most of his wealth in trust, unbeknownst to his family, to fund the Nobel Prize awards.
On 10 December 1896, he suffered a stroke (an intracerebral hemorrhage), was at first partially paralyzed, and then died, aged 63. He is buried in Norra begravningsplatsen in Stockholm.
Based on his experimentation with explosives, his strenuous work habit, and the decline in his health at the end of the 1870s, some hypothesize that nitroglycerine poisoning was a contributing factor to his death at a premature age.
Personal life
Religion
Nobel was Lutheran and, during his years living in Paris, he regularly attended the Church of Sweden Abroad led by pastor Nathan Söderblom who received the Nobel Peace Prize in 1930. He was an agnostic in youth and became an atheist later in life, though he still donated generously to the Church.
Romantic relationships and personality
Nobel remained a solitary character, given to periods of depression. He never married, although his biographers note that he had at least three loves. His first love was in Russia with a girl named Alexandra who rejected his marriage proposal.
In 1876, Austro-Bohemian Countess Bertha von Suttner became his secretary, but she left him after a brief stay to marry her previous lover Baron Arthur Gundaccar von Suttner. Her contact with Nobel was brief, yet she corresponded with him until his death in 1896, and probably influenced his decision to include the Nobel Peace Prize in his will. She was awarded the 1905 Nobel Peace prize "for her sincere peace activities".
Nobel's longest-lasting romance was an 18-year relationship with Sofija Hess from Celje whom he met in 1876 in Baden bei Wien, where she worked as an employee in a flower shop that catered to wealthy clientele. The extent of their relationship was revealed by a collection of 221 letters sent by Nobel to Hess over 15 years. At the time that they met, Nobel was 43 years old while Hess was 26. Their relationship, which was not merely platonic, ended when she became pregnant from another man, although Nobel continued to support her financially until Hess married her child's father to avoid being ostracized as a whore. Hess was a Jewish Christian and the letters include remarks by Nobel characterized as antisemitism. Nobel also displayed characteristics of chauvinism in the letters writing to Hess: "You neither work, nor write, nor read, nor think" and guilted her, writing "I have for years now sacrificed out of purely noble motives my time, my duties, my intellectual life, my reputation".
Residences
Nobel traveled for much of his business life, maintaining companies in Europe and America. From 1865 to 1873, Nobel lived in Krümmel (now in the municipality of Geesthacht, near Hamburg). From 1873 to 1891, he lived in a house in the Avenue Malakoff in Paris.
In 1891, after being accused of high treason against France for selling Ballistite to Italy, he moved from Paris to Sanremo, Italy, acquiring Villa Nobel, overlooking the Mediterranean Sea, where he died in 1896.
In 1894, when he acquired Bofors-Gullspång, the Björkborn Manor was included, where he stayed during the summers. It is now a museum.
Monument to Alfred Nobel
The Monument to Alfred Nobel is in Saint Petersburg along the Bolshaya Nevka River on Petrogradskaya Embankment, the street where Nobel's family lived until 1859. It was dedicated in 1991 to mark the 90th anniversary of the first Nobel Prize presentation. Diplomat Thomas Bertelman and Professor Arkady Melua initiated the creation of the monument in 1989 and provided funds for its establishment. The abstract metal sculpture was designed by local artists Sergey Alipov and Pavel Shevchenko, and appears to be an explosion or branches of a tree.
Criticism
Criticism of Nobel focuses on his leading role in weapons manufacturing and sales. Some people question his motives in creating his prizes, suggesting they were intended to improve his reputation.
Antisemitism
Nobel has also been criticized for displays of antisemitism. In his letters to Hess, he wrote "In my experience, [Jews] never do anything out of good will. They act merely out of selfishness or a desire to show off .... among selfish and inconsiderate people they are the most selfish and inconsiderate... all others exist to be fleeced."
References
Further reading
Asbrink, Brita (Summer 2002). "The Nobels in Baku". Azerbaijan International, Vol. 10.2, pp. 56–59.
Evlanoff, M. and Fluor, M. (1969). Alfred Nobel – The Loneliest Millionaire. Los Angeles: Ward Ritchie Press.
Schück, H. and Sohlman, R. (1929). The Life of Alfred Nobel, transl. Brian Lunn. London: William Heinemann Ltd.
Sohlman, R. (1983). The Legacy of Alfred Nobel, transl. E. Schubert. London: The Bodley Head (Swedish original, Ett Testamente, published in 1950).
Alfred Nobel US Patent No 78,317, dated 26 May 1868
External links
The Man Behind the Prize – Alfred Nobel
Biography at the Norwegian Nobel Institute
Documents of Life and Activity of The Nobel Family. Under the editorship of Professor Arkady Melua. Series of books. (mostly in Russian)
Alfred Nobel and his unknown coworker
1833 births
1896 deaths
19th-century Swedish businesspeople
19th-century Swedish chemists
19th-century Swedish engineers
19th-century Swedish philanthropists
19th-century Swedish scientists
Bofors people
Burials at Norra begravningsplatsen
Engineers from Stockholm
Explosives engineers
Members of the Royal Swedish Academy of Sciences
Alfred
Nobel Prize | Alfred Nobel | ["Engineering"] | 3,712 | ["Explosives engineering", "Explosives engineers"] |
856 | https://en.wikipedia.org/wiki/Apple%20Inc. | Apple Inc. is an American multinational corporation and technology company headquartered in Cupertino, California, in Silicon Valley. It is best known for its consumer electronics, software, and services. Founded in 1976 as Apple Computer Company by Steve Jobs, Steve Wozniak and Ronald Wayne, the company was incorporated by Jobs and Wozniak as Apple Computer, Inc. the following year. It was renamed Apple Inc. in 2007 as the company had expanded its focus from computers to consumer electronics. Apple is the largest technology company by revenue, with billion in the 2024 fiscal year.
The company was founded to produce and market Wozniak's Apple I personal computer. Its second computer, the Apple II, became a best seller as one of the first mass-produced microcomputers. Apple introduced the Lisa in 1983 and the Macintosh in 1984, as some of the first computers to use a graphical user interface and a mouse. By 1985, internal company problems led to Jobs leaving to form NeXT, Inc., and Wozniak withdrawing to other ventures; John Sculley served as long-time CEO for over a decade. In the 1990s, Apple lost considerable market share in the personal computer industry to the lower-priced Wintel duopoly of the Microsoft Windows operating system on Intel-powered PC clones. In 1997, Apple was weeks away from bankruptcy. To resolve its failed operating system strategy, it bought NeXT, effectively bringing Jobs back to the company, who guided Apple back to profitability over the next decade with the introductions of the iMac, iPod, iPhone, and iPad devices to critical acclaim as well as the iTunes Store, launching the "Think different" advertising campaign, and opening the Apple Store retail chain. These moves elevated Apple to consistently be one of the world's most valuable brands since about 2010. Jobs resigned in 2011 for health reasons, and died two months later; he was succeeded as CEO by Tim Cook.
Apple's current product lineup includes portable and home hardware such as the iPhone, iPad, Apple Watch, Mac, and Apple TV; operating systems such as iOS, iPadOS, and macOS; and various software and services including Apple Pay, iCloud, and multimedia streaming services like Apple Music and Apple TV+. Apple is one of the Big Five American information technology companies; for the most part since 2011, Apple has been the world's largest company by market capitalization, and, , is the largest manufacturing company by revenue, the fourth-largest personal computer vendor by unit sales, the largest vendor of tablet computers, and the largest vendor of mobile phones in the world. Apple became the first publicly traded U.S. company to be valued at over $1 trillion in 2018, and, , is valued at just over $3.74 trillion.
Apple has received criticism regarding its contractors' labor practices, its relationship with trade unions, its environmental practices, and its business ethics, including anti-competitive practices and materials sourcing. Nevertheless, the company has a large following and enjoys a high level of brand loyalty.
History
1976–1980: Founding and incorporation
Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a partnership. The company's first product was the Apple I, a computer designed and hand-built entirely by Wozniak. To finance its creation, Jobs sold his Volkswagen Bus, and Wozniak sold his HP-65 calculator. Neither received the full selling price but in total earned . Wozniak debuted the first prototype at the Homebrew Computer Club in July 1976. The Apple I was sold as a motherboard with CPU, RAM, and basic textual-video chips, a base kit concept which was not yet marketed as a complete personal computer. It was priced soon after debut for . Wozniak later said he was unaware of the coincidental mark of the beast in the number 666, and that he came up with the price because he liked "repeating digits".
Apple Computer, Inc. was incorporated in Cupertino, California, on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after having co-founded it. Multimillionaire Mike Markkula provided essential business expertise and funding of to Jobs and Wozniak during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to million, an average annual growth rate of 533%.
The Apple II, also designed by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, because of its character-cell-based color graphics and open architecture. The Apple I and early Apple II models used ordinary audio cassette tapes as storage devices; these were superseded by the -inch floppy disk drive and interface called the Disk II in 1978.
The Apple II was chosen to be the desktop platform for the first killer application of the business world: VisiCalc, a spreadsheet program released in 1979. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy an Apple II: compatibility with the office, but Apple II market share remained behind home computers made by competitors such as Atari, Commodore, and Tandy.
On December 12, 1980, Apple (ticker symbol "AAPL") went public, selling 4.6 million shares at $22 per share ($.10 per share when adjusting for stock splits), generating over $100 million, which was more capital than any IPO since Ford Motor Company in 1956. By the end of the day, around 300 millionaires had been created, including Jobs and Wozniak, from a stock price of $29 per share and a market cap of $1.778 billion.
1980–1990: Success with Macintosh
In December 1979, Steve Jobs and Apple employees, including Jef Raskin, visited Xerox PARC, where they observed the Xerox Alto, featuring a graphical user interface (GUI). Apple subsequently negotiated access to PARC's technology in exchange for giving Xerox the option to buy Apple shares at a preferential rate. This visit influenced Jobs to implement a GUI in Apple's products, starting with the Apple Lisa. Despite being a pioneering mass-marketed GUI computer, the Lisa suffered from high costs and limited software options, leading to commercial failure.
Jobs, angered by being pushed off the Lisa team, took over the company's Macintosh division. Wozniak and Raskin had envisioned the Macintosh as a low-cost computer with a text-based interface like the Apple II, but a plane crash in 1981 forced Wozniak to step back from the project. Jobs quickly redefined the Macintosh as a graphical system that would be cheaper than the Lisa, undercutting his former division. Jobs was also hostile to the Apple II division, which at the time, generated most of the company's revenue.
In 1984, Apple launched the Macintosh, the first personal computer without a bundled programming language. Its debut was signified by "1984", a million television advertisement directed by Ridley Scott that aired during the third quarter of Super Bowl XVIII on January 22, 1984. This was hailed as a watershed event for Apple's success and was called a "masterpiece" by CNN and one of the greatest TV advertisements of all time by TV Guide.
The advertisement created great interest in the Macintosh, and sales were initially good, but began to taper off dramatically after the first three months as reviews started to come in. Jobs had required of RAM, which limited its speed and software in pursuit of a projected price point of . The Macintosh shipped for , a price panned by critics due to its slow performance. In early 1985, this sales slump triggered a power struggle between Steve Jobs and CEO John Sculley, whom Jobs had recruited from Pepsi two years earlier with the question, "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley removed Jobs as the head of the Macintosh division, with unanimous support from the Apple board of directors.
The board of directors instructed Sculley to contain Jobs and his ability to launch expensive forays into untested products. Rather than submit to Sculley's direction, Jobs attempted to oust him from leadership. Jean-Louis Gassée informed Sculley that Jobs had been attempting to organize a boardroom coup and called an emergency meeting at which Apple's executive staff sided with Sculley and stripped Jobs of all operational duties. Jobs resigned from Apple in September 1985 and took several Apple employees with him to found NeXT. Wozniak had also quit his active employment at Apple earlier in 1985 to pursue other ventures, expressing his frustration with Apple's treatment of the Apple II division and stating that the company had "been going in the wrong direction for the last five years". Wozniak remained employed by Apple as a representative, receiving a stipend estimated to be $120,000 per year. Jobs and Wozniak remained Apple shareholders following their departures.
After the departures of Jobs and Wozniak in 1985, Sculley launched the Macintosh 512K that year with quadruple the RAM, and introduced the LaserWriter, the first reasonably priced PostScript laser printer. PageMaker, an early desktop publishing application taking advantage of the PostScript language, was also released by Aldus Corporation in July 1985. It has been suggested that the combination of Macintosh, LaserWriter, and PageMaker was responsible for the creation of the desktop publishing market.
This dominant position in the desktop publishing market allowed the company to focus on higher price points, the so-called "high-right policy", named for the position on a chart of price versus profits. Newer models selling at higher price points offered higher profit margins, and appeared to have no effect on total sales as power users snapped up every increase in speed. Although some worried about pricing themselves out of the market, the high-right policy was in full force by the mid-1980s, driven by Jean-Louis Gassée's slogan of "fifty-five or die", referring to the 55% profit margins of the Macintosh II.
This policy began to backfire late in the decade as desktop publishing programs appeared on IBM PC compatibles with some of the same functionality of the Macintosh at far lower price points. The company lost its dominant position in the desktop publishing market and estranged many of its original consumer customer base who could no longer afford Apple products. The Christmas season of 1989 was the first in the company's history to have declining sales, which led to a 20% drop in Apple's stock price. During this period, the relationship between Sculley and Gassée deteriorated, leading Sculley to effectively demote Gassée in January 1990 by appointing Michael Spindler as the chief operating officer. Gassée left the company later that year to set up a rival, Be Inc.
1990–1997: Decline and restructuring
The company pivoted strategy and, in October 1990, introduced three lower-cost models: the Macintosh Classic, the Macintosh LC, and the Macintosh IIsi, all of which generated significant sales due to pent-up demand. In 1991, Apple introduced the hugely successful PowerBook with a design that set the current shape for almost all modern laptops. The same year, Apple introduced System 7, a major upgrade to the Macintosh operating system, adding color to the interface and introducing new networking capabilities.
The success of the lower-cost Macs and PowerBook brought increasing revenue. For some time, Apple was doing very well, introducing fresh new products at increasing profits. The magazine MacAddict named the period between 1989 and 1991 as the "first golden age" of the Macintosh.
The success of lower-cost consumer Macs, especially the LC, cannibalized higher-priced machines. To address this, management introduced several new brands, selling largely identical machines at different price points for different markets: the high-end Quadra series, the mid-range Centris series, and the consumer-marketed Performa series. This led to significant consumer confusion among the many similar models.
In 1993, Apple discontinued the Apple II series, whose last remaining model was the Apple IIe; the line was expensive to produce, and the company decided it was still absorbing sales from lower-cost Macintosh models. After the launch of the LC, Apple had encouraged developers to create applications for the Macintosh rather than the Apple II, and authorized salespersons to redirect consumers from the Apple II toward the Macintosh.
Apple experimented with several other unsuccessful consumer targeted products during the 1990s, including QuickTake digital cameras, PowerCD portable CD audio players, speakers, the Pippin video game console, the eWorld online service, and Apple Interactive Television Box. Enormous resources were invested in the problematic Newton tablet division, based on John Sculley's unrealistic market forecasts.
Throughout this period, Microsoft continued to gain market share with Windows by focusing on delivering software to inexpensive personal computers, while Apple was delivering a richly engineered but expensive experience. Apple relied on high profit margins and never developed a clear response; it sued Microsoft for making a GUI similar to the Lisa in Apple Computer, Inc. v. Microsoft Corp. The lawsuit dragged on for years and was finally dismissed. The major product flops and the rapid loss of market share to Windows sullied Apple's reputation, and in 1993 Sculley was replaced as CEO by Michael Spindler.
Under Spindler, Apple, IBM, and Motorola formed the AIM alliance in 1994 to create a new computing platform (the PowerPC Reference Platform or PReP), with IBM and Motorola hardware coupled with Apple software. The AIM alliance hoped that PReP's performance and Apple's software would leave the PC far behind and thus counter the dominance of Windows. That year, Apple introduced the Power Macintosh, the first of many computers with Motorola's PowerPC processor.
In the wake of the alliance, Apple opened up to the idea of allowing Motorola and other companies to build Macintosh clones. Over the next two years, 75 distinct Macintosh clone models were introduced. However, by 1996, Apple executives were worried that the clones were cannibalizing sales of its own high-end computers, where profit margins were highest.
In 1996, Spindler was replaced as CEO by Gil Amelio, who was hired for his reputation as a corporate rehabilitator. Amelio made deep changes, including extensive layoffs and cost-cutting.
This period was also marked by numerous failed attempts to modernize the Macintosh operating system (Mac OS). The original Macintosh operating system (System 1) was not built for multitasking (running several applications at once). The company attempted to correct this by introducing cooperative multitasking in System 5, but still decided it needed a more modern approach. This led to the Pink project in 1988, A/UX that same year, Copland in 1994, and an evaluation of purchasing BeOS in 1996. Talks with Be stalled when its CEO, former Apple executive Jean-Louis Gassée, demanded $300 million, in contrast to Apple's $125 million offer. Only weeks away from bankruptcy, Apple's board preferred NeXTSTEP and purchased NeXT in late 1996 for $400 million, retaining Steve Jobs.
1997–2007: Return to profitability
The NeXT acquisition was finalized on February 9, 1997, and the board brought Jobs back to Apple as an advisor. On July 9, 1997, Jobs staged a boardroom coup that resulted in Amelio's resignation after Amelio had overseen a three-year record-low stock price and crippling financial losses. The board named Jobs interim CEO, and he immediately reviewed the product lineup, canceling 70% of the company's models, eliminating 3,000 jobs, and paring the lineup back to the core of its computer offerings.
The next month, in August 1997, Steve Jobs convinced Microsoft to make a $150 million investment in Apple and a commitment to continue developing Mac software. This was seen as an "antitrust insurance policy" for Microsoft which had recently settled with the Department of Justice over anti-competitive practices in the United States v. Microsoft Corp. case. Around then, Jobs donated Apple's internal library and archives to Stanford University, to focus more on the present and the future rather than the past. He ended the Mac clone deals and in September 1997, purchased the largest clone maker, Power Computing. On November 10, 1997, the Apple Store website launched, which was tied to a new build-to-order manufacturing model similar to PC manufacturer Dell's success. The moves paid off for Jobs; at the end of his first year as CEO, the company had a $309 million profit.
On May 6, 1998, Apple introduced a new all-in-one computer reminiscent of the original Macintosh: the iMac. The iMac was a huge success, with 800,000 units sold in its first five months, and ushered in major shifts in the industry by abandoning legacy technologies like the -inch diskette, being an early adopter of the USB connector, and coming pre-installed with Internet connectivity (the "i" in iMac) via Ethernet and a dial-up modem. Its striking teardrop shape and translucent materials were designed by Jonathan Ive, who had been hired by Amelio, and who collaborated with Jobs for more than a decade to reshape Apple's product design.
A little more than a year later on July 21, 1999, Apple introduced the iBook consumer laptop. It culminated Jobs's strategy to produce only four products: refined versions of the Power Macintosh G3 desktop and PowerBook G3 laptop for professionals, and the iMac desktop and iBook laptop for consumers. Jobs said the small product line allowed for a greater focus on quality and innovation.
Around then, Apple also completed numerous acquisitions to create a portfolio of digital media production software for both professionals and consumers. Apple acquired Macromedia's Key Grip digital video editing software project which was launched as Final Cut Pro in April 1999. Key Grip's development also led to Apple's release of the consumer video-editing product iMovie in October 1999. Apple acquired the German company Astarte in April 2000, which had developed the DVD authoring software DVDirector, which Apple repackaged as the professional-oriented DVD Studio Pro, and reused its technology to create iDVD for the consumer market. In 2000, Apple purchased the SoundJam MP audio player software from Casady & Greene. Apple renamed the program iTunes, and simplified the user interface and added CD burning.
In 2001, Apple changed course with three announcements. First, on March 24, 2001, Apple announced the release of a new modern operating system, Mac OS X. This was after numerous failed attempts in the early 1990s, and several years of development. Mac OS X is based on NeXTSTEP, OpenStep, and BSD Unix, to combine the stability, reliability, and security of Unix with the ease of use of an overhauled user interface. Second, in May 2001, the first two Apple Store retail locations opened in Virginia and California, offering an improved presentation of the company's products. At the time, many speculated that the stores would fail, but they became highly successful, and the first of more than 500 stores around the world. Third, on October 23, 2001, the iPod portable digital audio player debuted. The product was first sold on November 10, 2001, and was extremely successful, with over 100 million units sold within six years.
In 2003, the iTunes Store was introduced with music downloads for 99¢ a song and iPod integration. It quickly became the market leader in online music services, with over 5 billion downloads by June 19, 2008. Two years later, the iTunes Store was the world's largest music retailer.
In 2002, Apple purchased Nothing Real for its advanced digital compositing application Shake, and Emagic for the music productivity application Logic. The purchase of Emagic made Apple the first computer manufacturer to own a music software company. The acquisition was followed by the development of Apple's consumer-level GarageBand application. The release of iPhoto that year completed the iLife suite.
At the Worldwide Developers Conference keynote address on June 6, 2005, Jobs announced that Apple would move away from PowerPC processors, and the Mac would transition to Intel processors in 2006. On January 10, 2006, the new MacBook Pro and iMac became the first Apple computers to use Intel's Core Duo CPU. By August 7, 2006, Apple made the transition to Intel chips for the entire Mac product line—over one year sooner than announced. The Power Mac, iBook, and PowerBook brands were retired during the transition; the Mac Pro, MacBook, and MacBook Pro became their respective successors. Apple also introduced Boot Camp in 2006 to help users install Windows XP or Windows Vista on their Intel Macs alongside Mac OS X.
Apple's success during this period was evident in its stock price. Between early 2003 and 2006, the price of Apple's stock increased more than tenfold, from around $6 per share (split-adjusted) to over $80. When Apple surpassed Dell's market cap in January 2006, Jobs sent an email to Apple employees saying Dell's CEO Michael Dell should eat his words. Nine years prior, Dell had said that if he ran Apple he would "shut it down and give the money back to the shareholders".
2007–2011: Success with mobile devices
During his keynote speech at the Macworld Expo on January 9, 2007, Jobs announced the renaming of Apple Computer, Inc. to Apple Inc., because the company had broadened its focus from computers to consumer electronics. This event also saw the announcement of the iPhone and the Apple TV. The company sold 270,000 first-generation iPhones during the first 30 hours of sales, and the device was called "a game changer for the industry".
In an article posted on Apple's website on February 6, 2007, Jobs wrote that Apple would be willing to sell music on the iTunes Store without digital rights management, thereby allowing tracks to be played on third-party players if record labels would agree to drop the technology. On April 2, 2007, Apple and EMI jointly announced the removal of DRM technology from EMI's catalog in the iTunes Store, effective in May 2007. Other record labels eventually followed suit and Apple published a press release in January 2009 to announce that all songs on the iTunes Store are available without their FairPlay DRM.
In July 2008, Apple launched the App Store to sell third-party applications for the iPhone and iPod Touch. Within a month, the store sold 60 million applications and registered an average daily revenue of $1 million, with Jobs speculating in August 2008 that the App Store could become a billion-dollar business for Apple. By October 2008, Apple was the third-largest mobile handset supplier in the world due to the popularity of the iPhone.
On January 14, 2009, Jobs announced in an internal memo that he would be taking a six-month medical leave of absence from Apple until the end of June 2009 and would spend the time focusing on his health. In the email, Jobs stated that "the curiosity over my personal health continues to be a distraction not only for me and my family, but everyone else at Apple as well", and explained that the break would allow the company "to focus on delivering extraordinary products". Though Jobs was absent, Apple recorded its best non-holiday quarter (Q1 FY 2009) during the recession, with revenue of $8.16 billion and profit of $1.21 billion.
After years of speculation and multiple rumored "leaks", Apple unveiled a large-screen, tablet-like media device known as the iPad on January 27, 2010. The iPad ran the same touch-based operating system as the iPhone, and all iPhone apps were compatible with the iPad. This gave the iPad a large app catalog at launch, despite the very short development time before release. Later that year, on April 3, 2010, the iPad was launched in the U.S. It sold more than 300,000 units on its first day, and 500,000 by the end of the first week. In May 2010, Apple's market cap exceeded that of competitor Microsoft for the first time since 1989.
In June 2010, Apple released the iPhone 4, which introduced video calling using FaceTime, multitasking, and a new design with an exposed stainless steel frame as the phone's antenna system. Later that year, Apple again refreshed the iPod line by introducing a multi-touch iPod Nano, an iPod Touch with FaceTime, and an iPod Shuffle that brought back the clickwheel buttons of earlier generations. It also introduced the smaller, cheaper second-generation Apple TV which allowed the rental of movies and shows.
On January 17, 2011, Jobs announced in an internal Apple memo that he would take another medical leave of absence for an indefinite period to allow him to focus on his health. Chief operating officer Tim Cook assumed Jobs's day-to-day operations at Apple, although Jobs would still remain "involved in major strategic decisions". Apple became the most valuable consumer-facing brand in the world. In June 2011, Jobs made a surprise appearance on stage and unveiled iCloud, an online storage and syncing service for music, photos, files, and software, which replaced MobileMe, Apple's previous attempt at content syncing. This would be the last product launch Jobs attended before his death.
On August 24, 2011, Jobs resigned his position as CEO of Apple. He was replaced by Cook and Jobs became Apple's chairman. Apple did not have a chairman at the time and instead had two co-lead directors—Andrea Jung and Arthur D. Levinson—who continued with those titles until Levinson replaced Jobs as chairman of the board in November after Jobs's death.
2011–present: Post-Jobs era, Tim Cook
On October 5, 2011, Steve Jobs died, marking the end of an era for Apple. The next major product announcement came on January 19, 2012, when Apple's Phil Schiller introduced iBooks Textbooks for iOS and iBooks Author for Mac OS X in New York City. Jobs had stated in the biography Steve Jobs that he wanted to reinvent the textbook industry and education.
From 2011 to 2012, Apple released the iPhone 4s and iPhone 5, which featured improved cameras, an intelligent software assistant named Siri, and cloud-synced data with iCloud; the third- and fourth-generation iPads, which featured Retina displays; and the iPad Mini, which featured a 7.9-inch screen in contrast to the iPad's 9.7-inch screen. These launches were successful, with the iPhone 5 (released September 21, 2012) becoming Apple's biggest iPhone launch with over two million pre-orders and sales of three million iPads in three days following the launch of the iPad Mini and fourth-generation iPad (released November 3, 2012). Apple also released a third-generation 13-inch MacBook Pro with a Retina display and new iMac and Mac Mini computers.
On August 20, 2012, Apple's rising stock price increased the company's market capitalization to a then-record $624 billion. This beat the non-inflation-adjusted record for market capitalization previously set by Microsoft in 1999. On August 24, 2012, a US jury ruled that Samsung should pay Apple $1.05 billion (£665m) in damages in an intellectual property lawsuit. Samsung appealed the damages award, which was reduced by $450 million, and was further granted a request for a new trial. On November 10, 2012, Apple confirmed a global settlement that dismissed all existing lawsuits between Apple and HTC up to that date, in favor of a ten-year license agreement for current and future patents between the two companies. It was predicted that Apple would make million per year from this deal with HTC.
In May 2014, Apple confirmed its intent to acquire Dr. Dre and Jimmy Iovine's audio company Beats Electronics—producer of the "Beats by Dr. Dre" line of headphones and speaker products, and operator of the music streaming service Beats Music—for billion, and to sell their products through Apple's retail outlets and resellers. Iovine believed that Beats had always "belonged" with Apple, as the company modeled itself after Apple's "unmatched ability to marry culture and technology". The acquisition was the largest purchase in Apple's history.
During a press event on September 9, 2014, Apple introduced a smartwatch called the Apple Watch. Initially, Apple marketed the device as a fashion accessory and a complement to the iPhone, that would allow people to look at their smartphones less. Over time, the company has focused on developing health and fitness-oriented features on the watch, in an effort to compete with dedicated activity trackers. In January 2016, Apple announced that over one billion Apple devices were in active use worldwide.
On June 6, 2016, Fortune released its Fortune 500 list of companies ranked by revenue. Based on the trailing 2015 fiscal year, Apple was listed as the top technology company and ranked third overall, with billion in revenue, a rise of two spots from the previous year's list.
In June 2017, Apple announced the HomePod, a smart speaker aimed at competing with Sonos, Google Home, and Amazon Echo. Toward the end of the year, TechCrunch reported that Apple was acquiring Shazam, a company that had introduced its products at WWDC and specialized in music, TV, film, and advertising recognition. The acquisition was confirmed a few days later, reportedly costing Apple million, with media reports suggesting the purchase was a move to acquire data and tools to bolster the Apple Music streaming service. The purchase was approved by the European Union in September 2018.
Also in June 2017, Apple appointed Jamie Erlicht and Zack Van Amburg to head the newly formed worldwide video unit. In November 2017, Apple announced it was branching out into original scripted programming: a drama series starring Jennifer Aniston and Reese Witherspoon, and a reboot of the anthology series Amazing Stories with Steven Spielberg. In June 2018, Apple signed the Writers Guild of America's minimum basic agreement and Oprah Winfrey to a multi-year content partnership. Additional partnerships for original series include Sesame Workshop and DHX Media and its subsidiary Peanuts Worldwide, and a partnership with A24 to create original films.
During the Apple Special Event in September 2017, the AirPower wireless charger was announced alongside the iPhone X, iPhone 8, and Watch Series 3. The AirPower was intended to wirelessly charge multiple devices simultaneously. Though initially set for release in early 2018, the AirPower was canceled in March 2019, marking the first cancellation of a device under Cook's leadership. On August 19, 2020, Apple's share price briefly topped $467.77, making it the first US company with a market capitalization of trillion.
During its annual WWDC keynote speech on June 22, 2020, Apple announced it would move away from Intel processors, and the Mac would transition to processors developed in-house. The announcement was expected by industry analysts, and it has been noted that Macs featuring Apple's processors would allow for big increases in performance over current Intel-based models. On November 10, 2020, the MacBook Air, MacBook Pro, and the Mac Mini became the first Macs powered by an Apple-designed processor, the Apple M1.
In April 2022, it was reported that Samsung Electro-Mechanics would be collaborating with Apple on its M2 chip instead of LG Innotek. Developer logs showed that at least nine Mac models with four different M2 chips were being tested.
The Wall Street Journal reported that Apple's effort to develop its own chips left it better prepared to deal with the semiconductor shortage that emerged during the COVID-19 pandemic, which led to increased profitability, with sales of M1-based Mac computers rising sharply in 2020 and 2021. It also inspired other companies like Tesla, Amazon, and Meta Platforms to pursue a similar path.
In April 2022, Apple opened an online store that allowed anyone in the U.S. to view repair manuals and order replacement parts for specific recent iPhones, although the difference in cost between this method and official repair is anticipated to be minimal.
In May 2022, a trademark was filed for RealityOS, an operating system reportedly intended for virtual and augmented reality headsets, first mentioned in 2017. According to Bloomberg, the headset was expected to come out in 2023. Further insider reports stated that the device would use iris scanning for payment confirmation and signing into accounts.
On June 18, 2022, the Apple Store in Towson, Maryland, became the first to unionize in the U.S., with the employees voting to join the International Association of Machinists and Aerospace Workers.
On July 7, 2022, Apple added Lockdown Mode to macOS 13 and iOS 16, as a response to the earlier Pegasus revelations; the mode increases security protections for high-risk users against targeted zero-day malware.
Apple launched a buy now, pay later service called 'Apple Pay Later' for Apple Wallet users in March 2023. The program allows users to apply for loans between $50 and $1,000 for online or in-app purchases and to repay them in four installments spread over six weeks, without any interest or fees.
In November 2023, Apple agreed to a $25 million settlement in a U.S. Department of Justice case that alleged the company had discriminated against U.S. citizens in hiring. Apple had created jobs that were not listed online and that required paper applications, while advertising these positions to foreign workers as part of recruitment for the PERM labor certification program.
In January 2024, Apple announced changes to comply with the European Union's competition law, with major modifications to the App Store and other services effective on March 7. The changes enable iOS users in the 27-nation bloc to use alternative app stores and alternative payment methods within apps, and add a menu in Safari for downloading alternative browsers, such as Chrome or Firefox.
In June 2024, Apple introduced Apple Intelligence to incorporate on-device artificial intelligence capabilities.
On November 1, 2024, Apple announced its acquisition of Pixelmator, a company known for its image editing applications for iPhone and Mac. Apple had previously showcased Pixelmator's apps during its product launches, including naming Pixelmator Pro its Mac App of the Year in 2018 for its innovative use of machine learning and AI. In the announcement, Pixelmator stated that there would be no significant changes to its existing apps following the acquisition.
On December 31, 2024, a preliminary settlement was filed in federal court in Oakland, California, in a lawsuit accusing Apple of unlawfully recording private conversations through unintentional Siri activations and sharing them with third parties, including advertisers. Apple agreed to a $95 million cash settlement to resolve the suit, which alleged that its Siri assistant had violated user privacy. While denying any wrongdoing, Apple settled the case, allowing affected users to potentially claim up to $20 per device. Attorneys sought $28.5 million in fees from the settlement fund.
Products
From the company's founding into the early 2000s, Apple primarily sold computers, which have been marketed as Macintosh since the mid-1980s. Since then, the company has expanded its product categories to include various portable devices, starting with the now-discontinued iPod (2001) and later the iPhone (2007) and iPad (2010). Apple also sells several other products that it categorizes as "Wearables, Home and Accessories", such as the Apple Watch, Apple TV, AirPods, HomePod, and Apple Vision Pro.
Apple devices have been praised for creating a cohesive ecosystem when used in conjunction with other Apple products, though they have received criticism for not working as well, or with as many features, alongside competitors' devices, instead often relying on Apple's proprietary features, software, and services to work as intended by Apple, an approach often described as a "walled garden". As of 2023, there are over 2 billion Apple devices in active use worldwide.
Mac
Mac, which is short for Macintosh—its official name until 1999—is Apple's line of personal computers that use the company's proprietary macOS operating system. Personal computers were Apple's original business line, but they account for only about eight percent of the company's revenue.
There are six Mac computer families in production:
iMac: Consumer all-in-one desktop computer, introduced in 1998.
Mac Mini: Consumer sub-desktop computer, introduced in 2005.
MacBook Pro: Professional notebook, introduced in 2006.
Mac Pro: Professional workstation, introduced in 2006.
MacBook Air: Consumer ultra-thin notebook, introduced in 2008.
Mac Studio: Professional small form-factor workstation, introduced in 2022.
Often described as a walled garden, Macs use Apple silicon chips, run the macOS operating system, and include Apple software like the Safari web browser, iMovie for home movie editing, GarageBand for music creation, and the iWork productivity suite. Apple also sells pro apps: Final Cut Pro for video production, Logic Pro for musicians and producers, and Xcode for software developers. Apple also sells a variety of accessories for Macs, including the Pro Display XDR, Apple Studio Display, Magic Mouse, Magic Trackpad, and Magic Keyboard.
iPhone
The iPhone is Apple's line of smartphones, which run the iOS operating system. The first iPhone was unveiled by Steve Jobs on January 9, 2007. Since then, new iPhone models have been released every year. When it was introduced, its multi-touch screen was described as "revolutionary" and a "game-changer" for the mobile phone industry. The device has been credited with creating the app economy.
iOS is one of the two major smartphone platforms in the world, alongside Android. The iPhone has generated large profits for the company, and is credited with helping to make Apple one of the world's most valuable publicly traded companies. , the iPhone accounts for nearly half of the company's revenue.
iPad
The iPad is Apple's line of tablets which run iPadOS. The first-generation iPad was announced on January 27, 2010. The iPad is mainly marketed for consuming multimedia, creating art, working on documents, videoconferencing, and playing games. The iPad lineup consists of several base iPad models, and the smaller iPad Mini, upgraded iPad Air, and high-end iPad Pro. Apple has consistently improved the iPad's performance, with the iPad Pro adopting the same M1 and M2 chips as the Mac; but the iPad still receives criticism for its limited OS.
Apple has sold more than 500 million iPads, though sales peaked in 2013. The iPad still remains the most popular tablet computer by sales , and accounted for seven percent of the company's revenue . Apple sells several iPad accessories, including the Apple Pencil, Smart Keyboard, Smart Keyboard Folio, Magic Keyboard, and several adapters.
Other products
Apple makes several other products that it categorizes as "Wearables, Home and Accessories". These products include the AirPods line of wireless headphones, Apple TV digital media players, Apple Watch smartwatches, Beats headphones, HomePod smart speakers, and the Vision Pro mixed reality headset. , this broad line of products comprises about ten percent of the company's revenues.
Services
Apple offers a broad line of services, including advertising in the App Store and Apple News app, the AppleCare+ extended warranty plan, the iCloud+ cloud-based data storage service, payment services through the Apple Card credit card and the Apple Pay processing platform, digital content services including Apple Books, Apple Fitness+, Apple Music, Apple News+, Apple TV+, and the iTunes Store. , services comprise about 26% of the company's revenue. In 2019, Apple announced it would be making a concerted effort to expand its service revenues.
Marketing
Branding
According to Steve Jobs, the company's name was inspired by his visit to an apple farm while on a fruitarian diet. Apple's first logo, designed by Ron Wayne, depicts Sir Isaac Newton sitting under an apple tree. It was almost immediately replaced by Rob Janoff's "rainbow Apple", the now-familiar rainbow-colored silhouette of an apple with a bite taken out of it. This logo has been erroneously referred to as a tribute to Alan Turing, with the bite mark a reference to his method of suicide.
On August 27, 1999, Apple officially dropped the rainbow scheme and began to use monochromatic logos nearly identical in shape to the previous rainbow incarnation. An Aqua-themed version of the monochrome logo was used from 1998 until 2003, and a glass-themed version was used from 2007 until 2013.
Apple evangelists were actively engaged by the company at one time, but this was after the phenomenon had already been firmly established. Apple evangelist Guy Kawasaki has called the brand fanaticism "something that was stumbled upon", while Ive claimed in 2014 that "people have an incredibly personal relationship" with Apple's products.
Fortune magazine named Apple the most admired company in the United States in 2008, and in the world from 2008 to 2012. On September 30, 2013, Apple surpassed Coca-Cola to become the world's most valuable brand in the Omnicom Group's "Best Global Brands" report. Boston Consulting Group has ranked Apple as the world's most innovative brand every year. 1.65 billion Apple products were in active use; in February 2023, that number exceeded 2 billion devices. In 2023, the World Intellectual Property Organization (WIPO)'s Madrid Yearly Review ranked Apple 10th in the world by the number of trademark applications filed under the Madrid System, with 74 applications submitted during 2023.
Apple was ranked the #3 company in the world on the Fortune 500 list for 2024.
Advertising
Apple's first slogan, "Byte into an Apple", was coined in the late 1970s. From 1997 to 2002, the slogan "Think different" was used in advertising campaigns, and is still closely associated with Apple. Apple also has slogans for specific product lines—for example, "iThink, therefore iMac" was used in 1998 to promote the iMac, and "Say hello to iPhone" has been used in iPhone advertisements. "Hello" was also used to introduce the original Macintosh, Newton, iMac ("hello (again)"), and iPod.
From the introduction of the Macintosh in 1984, with the 1984 Super Bowl advertisement to the more modern Get a Mac adverts, Apple has been recognized for its efforts toward effective advertising and marketing for its products. However, claims made by later campaigns were criticized, particularly the 2005 Power Mac ads. Apple's product advertisements gained significant attention as a result of their eye-popping graphics and catchy tunes. Musicians who benefited from an improved profile as a result of their songs being included on Apple advertisements include Canadian singer Feist with the song "1234" and Yael Naïm with the song "New Soul".
Stores
The first two Apple Stores opened in May 2001 under then-CEO Steve Jobs, after years of attempting but failing store-within-a-store concepts. Seeing a need for improved retail presentation of the company's products, Jobs began an effort in 1997 to revamp the retail program and build a better relationship with consumers, relaunching Apple's online store that year and hiring Ron Johnson in 2000. The media initially speculated that the stores would fail, but they were highly successful, surpassing the sales numbers of competing nearby stores and reaching US$1 billion in annual sales within three years, becoming the fastest retailer in history to do so.
Over the years, Apple has expanded the number of retail locations and its geographical coverage, with 499 stores across 22 countries worldwide . Strong product sales have placed Apple among the top-tier retail stores, with sales over $16 billion globally in 2011. Apple Stores underwent a period of significant redesign, beginning in May 2016. This redesign included physical changes to the Apple Stores, such as open spaces and re-branded rooms, and changes in function to facilitate interaction between consumers and professionals.
Many Apple Stores are located inside shopping malls, but Apple has built several stand-alone "flagship" stores in high-profile locations. It has been granted design patents and received architectural awards for its stores' designs and construction, specifically for its use of glass staircases and cubes. The success of Apple Stores has had significant influence over other consumer electronics retailers, who have lost traffic, control, and profits due to a perceived higher quality of service and products at Apple Stores. Due to the popularity of the brand, Apple receives a large number of job applications, many of which come from young workers. Although Apple Store employees receive above-average pay, are offered money toward education and health care, and receive product discounts, there are limited or no paths of career advancement.
Market power
On March 16, 2020, France fined Apple €1.1 billion for colluding with two wholesalers to stifle competition and keep prices high by handicapping independent resellers. The arrangement created aligned prices for Apple products such as iPads and personal computers for about half the French retail market. According to the French regulators, the abuses occurred between 2005 and 2017 but were first discovered after a complaint by an independent reseller, eBizcuss, in 2012.
On August 13, 2020, Epic Games, the maker of the popular game Fortnite, sued both Apple and Google after Fortnite was removed from their app stores. The lawsuits came after the two companies blocked the game when it introduced a direct payment system that bypassed the fees they had imposed. In September 2020, Epic Games founded the Coalition for App Fairness together with thirteen other companies, which aims for better conditions for the inclusion of apps in app stores. Later, in December 2020, Facebook agreed to assist Epic in its legal battle against Apple, planning to support the company by providing materials and documents. Facebook stated, however, that it would not participate directly in the lawsuit, although it did commit to helping with the discovery of evidence for the 2021 trial. In the months prior to the agreement, Facebook had been feuding with Apple over the prices of paid apps and privacy rule changes. Facebook's head of ad products, Dan Levy, said that "this is not really about privacy for them, this is about an attack on personalized ads and the consequences it's going to have on small-business owners," referring to the full-page ads placed by Facebook in various newspapers in December 2020.
Privacy
Apple has publicly taken a pro-privacy stance, actively making privacy-conscious features and settings part of its conferences, promotional campaigns, and public image. With its iOS 8 mobile operating system in 2014, the company started encrypting all contents of iOS devices through users' passcodes, making it impossible at the time for the company to provide customer data in response to law enforcement requests seeking such information. With the rise in popularity of cloud storage, Apple in 2016 began performing deep-learning scans for facial data in photos on the user's local device and encrypting the content before uploading it to Apple's iCloud storage system. It also introduced "differential privacy", a way to collect crowdsourced data from many users while keeping individual users anonymous, in a system that Wired described as "trying to learn as much as possible about a group while learning as little as possible about any individual in it". Users are explicitly asked if they want to participate, and can actively opt in or opt out.
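To illustrate the general idea behind collecting crowdsourced data while keeping individuals anonymous, the Swift sketch below implements randomized response, a textbook local-privacy mechanism. It is a simplified, hypothetical example; Apple's actual deployment uses more elaborate differentially private algorithms that are not described in this article.

import Foundation

// Randomized response: each answer is randomized on the user's device,
// so any individual report is plausibly deniable, yet the aggregate
// proportion of "yes" answers can still be estimated across many users.
func randomizedResponse(truth: Bool) -> Bool {
    if Bool.random() {          // first coin flip: report honestly
        return truth
    } else {                    // second coin flip: report a random answer
        return Bool.random()
    }
}

// Estimate the true proportion p of "yes" answers from noisy reports.
// Since P(report == true) = 0.5 * p + 0.25, invert: p ≈ 2 * (observed - 0.25).
func estimateTrueProportion(from reports: [Bool]) -> Double {
    let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
    return 2 * (observed - 0.25)
}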
However, Apple has aided law enforcement in criminal investigations by providing iCloud backups of users' devices, and the company's commitment to privacy has been questioned by its efforts to promote biometric authentication technology in its newer iPhone models, which do not have the same level of constitutional privacy as a passcode in the United States.
With an update to iOS 14, Apple required all developers of iPhone, iPad, and iPod Touch applications to ask users directly for permission to track them. The feature, called "App Tracking Transparency", received heavy criticism from Facebook, whose primary business model revolves around tracking users' data and sharing such data with advertisers so users see more relevant ads, a technique commonly known as targeted advertising. Despite Facebook's countermeasures, including full-page newspaper advertisements protesting App Tracking Transparency, Apple released the update in early 2021. A study by Verizon subsidiary Flurry Analytics reported that only 4% of iOS users in the United States, and 12% worldwide, had opted into tracking.
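The permission prompt described above is surfaced through Apple's AppTrackingTransparency framework. A minimal Swift sketch of how an app might request this permission is shown below; the handling of each outcome is illustrative only.

import AppTrackingTransparency

// Ask the user for permission to track; the text shown in the system prompt
// comes from the NSUserTrackingUsageDescription key in the app's Info.plist.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user opted in; the advertising identifier may be used.
            print("Tracking authorized")
        case .denied, .restricted, .notDetermined:
            // The user opted out or made no choice; fall back to
            // non-personalized advertising.
            print("Tracking not authorized")
        @unknown default:
            break
        }
    }
}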
Prior to the release of iOS 15, Apple announced new efforts at combating child sexual abuse material on iOS and Mac platforms. Parents of minor iMessage users can now be alerted if their child sends or receives nude photographs. Additionally, on-device hashing would take place on media destined for upload to iCloud, and hashes would be compared to a list of known abusive images provided by law enforcement; if enough matches were found, Apple would be alerted and authorities informed. The new features received praise from law enforcement and victims rights advocates. However, privacy advocates, including the Electronic Frontier Foundation, condemned the new features as invasive and highly prone to abuse by authoritarian governments.
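The threshold-based matching described above can be illustrated, in heavily simplified form, by the following Swift sketch: hashes of outgoing media are compared against a list of known hashes, and a report is triggered only once a threshold number of matches is reached. The system Apple announced relied on perceptual hashing and cryptographic threshold schemes rather than the plain string comparison assumed here.

// Simplified, hypothetical illustration of threshold-based hash matching.
struct MatchScanner {
    let knownHashes: Set<String>   // hashes of known abusive images
    let reportThreshold: Int       // matches required before alerting
    private var matchCount = 0

    init(knownHashes: Set<String>, reportThreshold: Int) {
        self.knownHashes = knownHashes
        self.reportThreshold = reportThreshold
    }

    // Returns true once enough matches have accumulated to warrant a report.
    mutating func scan(mediaHash: String) -> Bool {
        if knownHashes.contains(mediaHash) {
            matchCount += 1
        }
        return matchCount >= reportThreshold
    }
}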
Ireland's Data Protection Commission launched a privacy investigation to examine whether Apple complied with the EU's GDPR law following an investigation into how the company processes personal data with targeted ads on its platform.
In December 2019, security researcher Brian Krebs discovered that the iPhone 11 Pro would still show the arrow indicator, which signifies that location services are in use, at the top of the screen while the main location services toggle was enabled, despite all individual location services being disabled. Krebs was unable to replicate this behavior on older models, and when he asked Apple for comment, he was told that "It is expected behavior that the Location Services icon appears in the status bar when Location Services is enabled. The icon appears for system services that do not have a switch in Settings."
Apple later further clarified that this behavior was to ensure compliance with ultra-wideband regulations in specific countries, a technology Apple started implementing in iPhones starting with iPhone 11 Pro, and emphasized that "the management of ultra wideband compliance and its use of location data is done entirely on the device and Apple is not collecting user location data." Will Strafach, an executive at security firm Guardian Firewall, confirmed the lack of evidence that location data was sent off to a remote server. Apple promised to add a new toggle for this feature and in later iOS revisions Apple provided users with the option to tap on the location services indicator in Control Center to see which specific service is using the device's location.
According to published reports by Bloomberg News on March 30, 2022, Apple turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, an Apple representative referred the reporter to a section of the company policy for law enforcement guidelines, which stated, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse."
Corporate affairs
Business trends
The key trends for Apple are, as of each financial year ending September 24:
Leadership
Senior management
, the management of Apple Inc. includes:
Tim Cook (chief executive officer)
Jeff Williams (chief operating officer)
Kevan Parekh (senior vice president and chief financial officer)
Katherine L. Adams (senior vice president and general counsel)
Eddy Cue (senior vice president – Internet Software and Services)
Craig Federighi (senior vice president – Software Engineering)
John Giannandrea (senior vice president – Machine Learning and AI Strategy)
Deirdre O'Brien (senior vice president – Retail + People)
John Ternus (senior vice president – Hardware Engineering)
Greg Joswiak (senior vice president – Worldwide Marketing)
Johny Srouji (senior vice president – Hardware Technologies)
Sabih Khan (senior vice president – Operations)
Board of directors
, the board of directors of Apple Inc. includes:
Arthur D. Levinson (chairman)
Tim Cook (executive director and CEO)
James A. Bell
Alex Gorsky
Andrea Jung
Monica Lozano
Ronald Sugar
Susan Wagner
Previous CEOs
Michael Scott (1977–1981)
Mike Markkula (1981–1983)
John Sculley (1983–1993)
Michael Spindler (1993–1996)
Gil Amelio (1996–1997)
Steve Jobs (1997–2011)
Ownership
, the largest shareholders of Apple were:
The Vanguard Group (1,317,966,471 shares, 8.54%)
BlackRock (1,042,391,808 shares, 6.75%)
Berkshire Hathaway (905,560,000 shares, 5.86%)
State Street Corporation (586,052,057 shares, 3.80%)
Geode Capital Management (300,822,623 shares, 1.95%)
Fidelity Investments (299,871,352 shares, 1.94%)
Morgan Stanley (217,961,227 shares, 1.41%)
T. Rowe Price (210,827,097 shares, 1.37%)
Norges Bank (176,141,203 shares, 1.14%)
Northern Trust (162,115,200 shares, 1.05%)
Corporate culture
Apple is one of several highly successful companies founded in the 1970s that bucked the traditional notions of corporate culture. Jobs often walked around the office barefoot even after Apple became a Fortune 500 company. By the time of the "1984" television advertisement, Apple's informal culture had become a key trait that differentiated it from its competitors. According to a 2011 report in Fortune, this has resulted in a corporate culture more akin to a startup rather than a multinational corporation. In a 2017 interview, Wozniak credited watching Star Trek and attending Star Trek conventions in his youth as inspiration for co-founding Apple.
As the company has grown and been led by a series of differently opinionated chief executives, some media have suggested that it has lost some of its original character. Nonetheless, it has maintained a reputation for fostering individuality and excellence that reliably attracts talented workers, particularly after Jobs returned. Numerous Apple employees have stated that projects without Jobs's involvement often took longer than others.
The Apple Fellows program awards employees for extraordinary technical or leadership contributions to personal computing. Recipients include Bill Atkinson, Steve Capps, Rod Holt, Alan Kay, Guy Kawasaki, Al Alcorn, Don Norman, Rich Page, Steve Wozniak, and Phil Schiller.
Jobs intended that employees were to be specialists who are not exposed to functions outside their area of expertise. For instance, Ron Johnson—Senior Vice President of Retail Operations until November 1, 2011—was responsible for site selection, in-store service, and store layout, yet had no control of the inventory in his stores. This was done by Tim Cook, who had a background in supply-chain management. Apple is known for strictly enforcing accountability. Each project has a "directly responsible individual" or "DRI" in Apple jargon. Unlike other major U.S. companies, Apple provides a relatively simple compensation policy for executives that does not include perks enjoyed by other CEOs like country club fees or private use of company aircraft. The company typically grants stock options to executives every other year.
In 2015, Apple had 110,000 full-time employees. This increased to 116,000 full-time employees the next year, a notable slowdown in hiring that was largely due to the company's first revenue decline. Apple does not specify how many of its employees work in retail, though its 2014 SEC filing put the number at approximately half of its employee base. In September 2017, Apple announced that it had over 123,000 full-time employees.
Apple has a strong culture of corporate secrecy, and has an anti-leak Global Security team that recruits from the National Security Agency, the Federal Bureau of Investigation, and the United States Secret Service. In December 2017, Glassdoor said Apple was the 48th best place to work, having originally entered at rank 19 in 2009, peaking at rank 10 in 2012, and falling down the ranks in subsequent years. In 2023, Bloomberg's Mark Gurman revealed the existence of Apple's Exploratory Design Group (XDG), which was working to add glucose monitoring to the Apple Watch. Gurman compared XDG to Alphabet's X "moonshot factory".
Offices
Apple Inc.'s world corporate headquarters are located in Cupertino, in the middle of California's Silicon Valley, at Apple Park, a massive circular groundscraper building. The building opened in April 2017 and houses more than 12,000 employees. Apple co-founder Steve Jobs wanted Apple Park to look less like a business park and more like a nature refuge, and personally appeared before the Cupertino City Council in June 2011 to make the proposal, in his final public appearance before his death.
Apple also operates from the Apple Campus (also known by its address, 1 Infinite Loop), a grouping of six buildings in Cupertino located to the west of Apple Park. The Apple Campus was the company's headquarters from its opening in 1993 until the opening of Apple Park in 2017. The buildings, located at 1–6 Infinite Loop, are arranged in a circular pattern around a central green space, in a design that has been compared to that of a university.
In addition to Apple Park and the Apple Campus, Apple occupies an additional thirty office buildings scattered throughout the city of Cupertino, including three buildings as prior headquarters: Stephens Creek Three from 1977 to 1978, Bandley One from 1978 to 1982, and Mariani One from 1982 to 1993. In total, Apple occupies almost 40% of the available office space in the city.
Apple's headquarters for Europe, the Middle East and Africa (EMEA), known as the Hollyhill campus, are located in Cork in the south of Ireland. The facility, which opened in 1980, houses 5,500 people and was Apple's first location outside of the United States. Apple's international sales and distribution arms operate out of the campus in Cork.
Apple has two campuses near Austin, Texas: one opened in 2014 that houses 500 engineers working on Apple silicon, and one opened in 2021 where 6,000 people work in technical support, supply chain management, online store curation, and Apple Maps data management. The company also has several other locations in Boulder, Colorado; Culver City, California; Herzliya, Israel; London; New York; Pittsburgh; San Diego; and Seattle that each employ hundreds of people.
Litigation
Apple has been a participant in various legal proceedings and claims since it began operation. In particular, Apple is known for and promotes itself as actively and aggressively enforcing its intellectual property interests. Some litigation examples include Apple v. Samsung, Apple v. Microsoft, Motorola Mobility v. Apple Inc., and Apple Corps v. Apple Computer. Apple has also had to defend itself on numerous occasions against charges of violating intellectual property rights. Most such claims have been brought by shell companies known as patent trolls and have been dismissed in the courts for lack of evidence of actual use of the patents in question. On December 21, 2016, Nokia announced that it had filed suit against Apple in the U.S. and Germany, claiming that the latter's products infringe on Nokia's patents.
In November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement regarding Apple's remote desktop technology; Aqua Connect, a company that builds remote desktop software, claimed that Apple infringed on two of its patents. In January 2022, Ericsson sued Apple over royalty payments for 5G technology. On June 24, 2024, the European Commission accused Apple of violating the Digital Markets Act by preventing "app developers from freely steering consumers to alternative channels for offers and content".
Finances
Apple is the world's largest technology company by revenue, with US$383.28 billion; the world's largest technology company by total assets; the fourth-largest personal computer vendor by unit sales; and the world's largest mobile phone manufacturer.
In its fiscal year ending in September 2011, Apple Inc. reported a total of $108 billion in annual revenues—a significant increase from its 2010 revenues of $65 billion—and nearly $82 billion in cash reserves. On March 19, 2012, Apple announced plans for a $2.65-per-share dividend beginning in fourth quarter of 2012, per approval by their board of directors.
The company's worldwide annual revenue in 2013 totaled $170 billion. In May 2013, Apple entered the top ten of the Fortune 500 list of companies for the first time, rising 11 places above its 2012 ranking to take the sixth position. Apple held around US$234 billion of cash and marketable securities, of which 90% was located outside the United States for tax purposes.
Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in quarter one of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all earnings.
On April 30, 2017, The Wall Street Journal reported that Apple had cash reserves of $250 billion, officially confirmed by Apple as specifically $256.8 billion a few days later.
Apple was the largest publicly traded corporation in the world by market capitalization. On August 2, 2018, Apple became the first publicly traded U.S. company to reach a $1 trillion market value, and has since been valued at just over $3.2 trillion. Apple was ranked No. 4 on the 2018 Fortune 500 rankings of the largest United States corporations by revenue.
In July 2022, Apple reported an 11% decline in Q3 profits compared to 2021. Its revenue in the same period rose 2% year-on-year to $83 billion, though this figure was also lower than in 2021, where the increase was at 36%. The general downturn is reportedly caused by the slowing global economy and supply chain disruptions in China. That year, Apple was one of the largest corporate spenders on research and development worldwide, with R&D expenditure amounting to over $27 billion.
In May 2023, Apple reported a decline in its sales for the first quarter of 2023. Compared to the same quarter of 2022, revenue fell by 3%, marking Apple's second consecutive quarter of sales decline. The fall was attributed to the slowing economy and to consumers putting off purchases of iPads and computers due to increased pricing. However, iPhone sales held up, with a year-on-year increase of 1.5%. According to Apple, demand for such devices was strong, particularly in Latin America and South Asia.
Taxes
Apple has created subsidiaries in low-tax places such as Ireland, the Netherlands, Luxembourg, and the British Virgin Islands to cut the taxes it pays around the world. According to The New York Times, in the 1980s Apple was among the first tech companies to designate overseas salespeople in high-tax countries in a manner that allowed the company to sell on behalf of low-tax subsidiaries on other continents, sidestepping income taxes. In the late 1980s, Apple was a pioneer of an accounting technique known as the "Double Irish with a Dutch sandwich", which reduces taxes by routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean.
British Conservative Party Member of Parliament Charlie Elphicke published research on October 30, 2012, which showed that some multinational companies, including Apple Inc., were making billions of pounds of profit in the UK, but were paying an effective tax rate to the UK Treasury of only 3 percent, well below standard corporate tax rates. He followed this research by calling on the Chancellor of the Exchequer George Osborne to force these multinationals, which also included Google and The Coca-Cola Company, to state the effective rate of tax they pay on their UK revenues. Elphicke also said that government contracts should be withheld from multinationals who do not pay their fair share of UK tax.
According to a US Senate report on the company's offshore tax structure concluded in May 2013, Apple has held billions of dollars in profits in Irish subsidiaries to pay little or no taxes to any government by using an unusual global tax structure. The main subsidiary, a holding company that includes Apple's retail stores throughout Europe, has not paid any corporate income tax in the last five years. "Apple has exploited a difference between Irish and U.S. tax residency rules", the report said. On May 21, 2013, Apple CEO Tim Cook defended his company's tax tactics at a Senate hearing.
Apple says that it is the single largest taxpayer in the U.S., with an effective tax rate of approximately 26% as of Q2 FY2016. In an interview with the German newspaper FAZ in October 2017, Tim Cook stated that Apple was the biggest taxpayer worldwide.
In 2016, after a two-year investigation, the European Commission claimed that Apple's use of a hybrid Double Irish tax arrangement constituted "illegal state aid" from Ireland, and ordered Apple to pay 13 billion euros ($14.5 billion) in unpaid taxes, the largest corporate tax fine in history. This was later annulled, after the European General Court ruled that the commission had provided insufficient evidence. In 2018, Apple repatriated $285 billion to America, resulting in a $38 billion tax payment spread over the following eight years.
Charity
Apple is a partner of Product Red, a fundraising campaign to fight AIDS. In November 2014, Apple arranged for all App Store revenue in a two-week period to go to the fundraiser, generating more than US$20 million, and in March 2017, it released an iPhone 7 with a red color finish.
Apple contributes financially to fundraisers in times of natural disasters. In November 2012, it donated $2.5 million to the American Red Cross to aid relief efforts after Hurricane Sandy, and in 2017 it donated $5 million to relief efforts for both Hurricane Irma and Hurricane Harvey, and for the 2017 Central Mexico earthquake. The company has used its iTunes platform to encourage donations in the wake of environmental disasters and humanitarian crises, such as the 2010 Haiti earthquake, the 2011 Japan earthquake, Typhoon Haiyan in the Philippines in November 2013, and the 2015 European migrant crisis. Apple emphasizes that it does not incur any processing or other fees for iTunes donations, sending 100% of the payments directly to relief efforts, though it also acknowledges that the Red Cross does not receive any personal information on the users donating and that the payments may not be tax deductible.
On April 14, 2016, Apple and the World Wide Fund for Nature (WWF) announced that they had entered into a partnership to "help protect life on our planet". Apple released a special page in the iTunes App Store, Apps for Earth. Under the arrangement, Apple committed that through April 24, WWF would receive 100% of the proceeds from the participating applications in the App Store, both from purchases of paid apps and from in-app purchases. Apple and WWF's Apps for Earth campaign raised more than $8 million in total proceeds to support WWF's conservation work. WWF announced the results at WWDC 2016 in San Francisco.
During the COVID-19 pandemic, Apple's CEO Cook announced that the company would donate "millions" of masks to health workers in the United States and Europe. On January 13, 2021, Apple announced a $100 million Racial Equity and Justice Initiative to help combat institutional racism worldwide after the 2020 murder of George Floyd. In June 2023, Apple announced a doubling of this commitment and has since distributed more than $200 million to support organizations focused on education, economic growth, and criminal justice, with half of the funding going to philanthropic grants and half centered on equity.
Environment
Apple Energy
Apple Energy, LLC is a wholly owned subsidiary of Apple Inc. that sells solar energy. Apple's solar farms in California and Nevada have been declared to provide 217.9 megawatts of solar generation capacity. Apple has received regulatory approval to construct a landfill gas energy plant in North Carolina to use the methane emissions to generate electricity. Apple's North Carolina data center is already powered entirely by renewable sources.
Energy and resources
In 2010, Climate Counts, a nonprofit organization dedicated to directing consumers toward the greenest companies, gave Apple a score of 52 points out of a possible 100, which puts Apple in their top category "Striding". This was an increase from May 2008, when Climate Counts only gave Apple 11 points out of 100, which placed the company last among electronics companies, at which time Climate Counts also labeled Apple with a "stuck icon", adding that Apple at the time was "a choice to avoid for the climate-conscious consumer".
Following a Greenpeace protest, Apple released a statement on April 17, 2012, committing to ending its use of coal and shifting to 100% renewable clean energy. By 2013, Apple was using 100% renewable energy to power their data centers. Overall, 75% of the company's power came from clean renewable sources.
In May 2015, Greenpeace evaluated the state of the Green Internet and commended Apple on their environmental practices saying, "Apple's commitment to renewable energy has helped set a new bar for the industry, illustrating in very concrete terms that a 100% renewable Internet is within its reach, and providing several models of intervention for other companies that want to build a sustainable Internet."
Apple states that 100% of its U.S. operations run on renewable energy, 100% of Apple's data centers run on renewable energy and 93% of Apple's global operations run on renewable energy. However, the facilities are connected to the local grid, which usually contains a mix of fossil and renewable sources, so Apple carbon offsets its electricity use. The Electronic Product Environmental Assessment Tool (EPEAT) allows consumers to see the effect a product has on the environment. Each product receives a Gold, Silver, or Bronze rank depending on its efficiency and sustainability. Every Apple tablet, notebook, desktop computer, and display that EPEAT ranks achieves a Gold rating, the highest possible. Although Apple's data centers recycle water 35 times, increased activity in retail, corporate and data center operations also increased the company's water use in 2015.
During an event on March 21, 2016, Apple provided a status update on its environmental initiative to be 100% renewable in all of its worldwide operations. Lisa P. Jackson, Apple's vice president of Environment, Policy and Social Initiatives, who reports directly to CEO Tim Cook, announced that 93% of Apple's worldwide operations were powered with renewable energy. Also featured were the company's efforts to use sustainable paper in its product packaging; 99% of all paper used by Apple in product packaging comes from post-consumer recycled paper or sustainably managed forests, as the company continues its move to all-paper packaging for all of its products.
Apple announced on August 16, 2016, that Lens Technology, one of its major suppliers in China, has committed to power all its glass production for Apple with 100 percent renewable energy by 2018. The commitment is a large step in Apple's efforts to help manufacturers lower their carbon footprint in China. Apple also announced that all 14 of its final assembly sites in China are now compliant with UL's Zero Waste to Landfill validation. The standard, which started in January 2015, certifies that all manufacturing waste is reused, recycled, composted, or converted into energy (when necessary). Since the program began, nearly 140,000 metric tons of waste have been diverted from landfills.
On July 21, 2020, Apple announced its plan to become carbon neutral across its entire business, manufacturing supply chain, and product life cycle by 2030. In the next 10 years, Apple will try to lower emissions with a series of innovative actions, including: low carbon product design, expanding energy efficiency, renewable energy, process and material innovations, and carbon removal.
In June 2024, the United States Environmental Protection Agency (EPA) published a report about an electronic computer manufacturing facility leased by Apple in 2015 in Santa Clara, California, code-named Aria. The EPA report stated that Apple was potentially in violation of federal regulations under the Resource Conservation and Recovery Act (RCRA). According to a report from Bloomberg in 2018, the facility is used to develop microLED screens under the code name T159. The inspection found that Apple was potentially mistreating waste as only subject to California regulations and had potentially miscalculated the effectiveness of its activated carbon filters, which filter volatile organic compounds (VOCs) from the air. The EPA inspected the facility in August 2023 following a tip from a former Apple employee, who posted the report on X.
Toxins
Following further campaigns by Greenpeace, in 2008, Apple became the first electronics manufacturer to eliminate all polyvinyl chloride (PVC) and brominated flame retardants (BFRs) in its complete product line. In June 2007, Apple began replacing the cold cathode fluorescent lamp (CCFL) backlit LCD displays in its computers with mercury-free LED-backlit LCD displays and arsenic-free glass, starting with the upgraded MacBook Pro. Apple offers comprehensive and transparent information about the CO2e emissions, materials, and electrical usage of every product it currently produces or has sold in the past (where it has enough data to produce a report), in a portfolio on its homepage, allowing consumers to make informed purchasing decisions about the products on offer. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. Since 2009, all Apple products have mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables. All Apple products have EPEAT Gold status and beat the latest Energy Star guidelines in each product's respective regulatory category.
In November 2011, Apple was featured in Greenpeace's Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy policy, and how "green" their products are. The company ranked fourth of fifteen electronics companies (moving up five places from the previous year) with a score of 4.6/10. Greenpeace praised Apple's sustainability, noting that the company exceeded its 70% global recycling goal in 2010. Apple continues to score well on product ratings, with all of their products now being free of PVC plastic and BFRs. However, the guide criticized Apple on the Energy criteria for not seeking external verification of its greenhouse gas emissions data, and for not setting any targets to reduce emissions. In January 2012, Apple requested that its cable maker, Volex, begin producing halogen-free USB and power cables.
Green bonds
In February 2016, Apple issued a billion-dollar green bond (climate bond), the first ever of its kind by a U.S. tech company. The green bond proceeds are dedicated to the financing of environmental projects.
Supply chain
Apple products were made in America in Apple-owned factories until the late 1990s; however, as a result of outsourcing initiatives in the 2000s, almost all of its manufacturing is now handled abroad. According to a report by The New York Times, Apple insiders "believe the vast scale of overseas factories, as well as the flexibility, diligence and industrial skills of foreign workers, have so outpaced their American counterparts that 'Made in the U.S.A.' is no longer a viable option for most Apple products".
The company's manufacturing, procurement, and logistics enable it to execute massive product launches without having to maintain large, profit-sapping inventories. In 2011, Apple's profit margins were 40 percent, compared with between 10 and 20 percent for most other hardware companies. Cook's catchphrase to describe his focus on the company's operational arm is: "Nobody wants to buy sour milk."
In May 2017, the company announced a $1 billion funding project for "advanced manufacturing" in the United States, and subsequently invested $200 million in Corning Inc., a manufacturer of toughened Gorilla Glass technology used in Apple's iPhones. The following December, Apple's chief operating officer, Jeff Williams, told CNBC that the "$1 billion" amount was "absolutely not" the final limit on its spending, elaborating that "We're not thinking in terms of a fund limit... We're thinking about, where are the opportunities across the U.S. to help nurture companies that are making the advanced technology — and the advanced manufacturing that goes with that — that quite frankly is essential to our innovation."
During the Mac's early history, Apple generally refused to adopt prevailing industry standards for hardware, instead creating their own. This trend was largely reversed in the late 1990s, beginning with Apple's adoption of the PCI bus in the 7500/8500/9500 Power Macs. Apple has since joined the industry standards groups to influence the future direction of technology standards such as USB, AGP, HyperTransport, Wi-Fi, NVMe, PCIe and others in its products. FireWire is an Apple-originated standard that was widely adopted across the industry after it was standardized as IEEE 1394 and is a legally mandated port in all cable TV boxes in the United States.
Apple has gradually expanded its efforts in getting its products into the Indian market. In July 2012, during a conference call with investors, CEO Tim Cook said that he "[loves] India", but that Apple saw larger opportunities outside the region. India's requirement that 30% of products sold be manufactured in the country was described as something that "really adds cost to getting product to market". In May 2016, Apple opened an iOS app development center in Bangalore and a maps development office for 4,000 staff in Hyderabad. The following March, The Wall Street Journal reported that Apple would begin manufacturing iPhone models in India "over the next two months", and in May, the Journal wrote that an Apple manufacturer had begun production of the iPhone SE in the country, while Apple told CNBC that the manufacturing was for a "small number" of units. In April 2019, Apple initiated manufacturing of the iPhone 7 at its Bengaluru facility, keeping in mind demand from local customers even as it sought more incentives from the government of India. At the beginning of 2020, Tim Cook announced that Apple planned to open its first physical outlet in India in 2021, with an online store to be launched by the end of the year. The opening of the Apple Store was postponed and finally took place in April 2023, while the online store was launched in September 2020.
Worker organizations
Apple directly employs 147,000 workers, including 25,000 corporate employees in Apple Park and across Silicon Valley. The vast majority of its employees work at the over 500 retail Apple Stores globally. Apple relies on a larger, outsourced workforce for manufacturing, particularly in China, where Apple directly employs 10,000 workers across its retail and corporate divisions. In addition, a further one million workers are contracted by Apple's suppliers, including Foxconn and Pegatron, to assemble Apple products. Zhengzhou Technology Park alone employs 350,000 workers in Zhengzhou who work exclusively on the iPhone. Apple uses hardware components from 43 different countries. The majority of assembly is done by Taiwanese original design manufacturer firms Foxconn, Pegatron, Wistron and Compal Electronics in factories primarily located inside China, and, to a lesser extent, Foxconn plants in Brazil and India.
Apple workers around the globe have been involved in organizing since the 1990s. Apple unions are made up of retail, corporate, and outsourced workers. Apple employees have joined trade unions or formed works councils in Australia, France, Germany, Italy, Japan, the United Kingdom and the United States. In 2021, Apple Together, a solidarity union, sought to bring together the company's global worker organizations. The majority of industrial labor disputes (including union recognition) involving Apple occur indirectly through its suppliers and contractors, notably Foxconn plants in China and, to a lesser extent, in Brazil and India.
Democratic Republic of the Congo
In 2019, Apple was named as a defendant in a forced labour and child slavery lawsuit by Congolese families of children injured and killed in cobalt mines owned by Glencore and Zhejiang Huayou Cobalt, which supply battery materials to Apple and other companies.
In April 2024, lawyers representing the Democratic Republic of the Congo notified Apple of evidence that Apple may be sourcing minerals from conflict areas of eastern Congo. Apple's policies and documentation describe mitigation efforts against conflict minerals; however, the lawyers identified discrepancies in supplier reporting as well as a Global Witness report describing a lack of "meaningful mitigation" on Apple's part. In December 2024, the DRC filed a lawsuit against Apple's European subsidiaries.
See also
List of Apple Inc. media events
Outline of Apple Inc.
1976 establishments in California
1980s initial public offerings
American brands
Companies based in Cupertino, California
Companies in the Dow Jones Industrial Average
Companies in the PRISM network
Companies listed on the Nasdaq
Computer companies established in 1976
Computer companies of the United States
Computer hardware companies
Computer systems companies
Display technology companies
Electronics companies of the United States
Home computer hardware companies
Mobile phone manufacturers
Multinational companies headquartered in the United States
Networking hardware companies
Portable audio player manufacturers
Retail companies of the United States
Software companies based in the San Francisco Bay Area
Software companies established in 1976
Steve Jobs
Technology companies based in the San Francisco Bay Area
Technology companies established in 1976
Technology companies of the United States
Companies in the Dow Jones Global Titans 50
"Technology"
] | 16,955 | [
"Computer hardware companies",
"Computer systems companies",
"Computers",
"Computer systems"
] |
ABBA were a Swedish pop group formed in Stockholm in 1972 by Agnetha Fältskog, Björn Ulvaeus, Benny Andersson, and Anni-Frid Lyngstad. They are one of the most popular and successful musical groups of all time, and are one of the best-selling music acts in the history of popular music.
In 1974, ABBA became Sweden's first winner of the Eurovision Song Contest with the song "Waterloo", which in 2005 was chosen as the best song in the competition's history as part of the 50th anniversary celebration of the contest. During the band's main active years, it consisted of two married couples: Fältskog and Ulvaeus, and Lyngstad and Andersson. With the increase of their popularity, their personal lives suffered, which eventually resulted in the collapse of both marriages. The relationship changes were reflected in the group's music, with later songs featuring darker and more introspective lyrics. After ABBA disbanded in December 1982, Andersson and Ulvaeus continued their success writing music for multiple audiences including stage, musicals and movies, while Fältskog and Lyngstad pursued solo careers. Ten years after the group broke up, a compilation, ABBA Gold, was released, becoming a worldwide best-seller. In 1999, ABBA's music was adapted into Mamma Mia!, a stage musical that toured worldwide and, as of October 2024, is still in the top-ten longest running productions on both Broadway (closed in 2015) and the West End (still running). A film of the same name, released in 2008, became the highest-grossing film in the United Kingdom that year. A sequel, Mamma Mia! Here We Go Again, was released in 2018.
ABBA are among the best-selling music artists in history, with record sales estimated at between 150 million and 385 million sold worldwide, and the group were ranked the third-best-selling singles artist in the United Kingdom, with a total of 11.3 million singles sold by 3 November 2012. In May 2023, ABBA were awarded the BRIT Billion Award, which celebrates those who have surpassed the milestone of one billion UK streams in their career. ABBA were the first group from a non-English-speaking country to achieve consistent success in the charts of English-speaking countries, including the United Kingdom, Australia, United States, Republic of Ireland, Canada, New Zealand and South Africa. They are the best-selling Swedish band of all time and the best-selling band originating in continental Europe. ABBA had eight consecutive number-one albums in the UK. The group also enjoyed significant success in Latin America and recorded a collection of their hit songs in Spanish. ABBA were inducted into the Vocal Group Hall of Fame in 2002. The group were inducted into the Rock and Roll Hall of Fame in 2010, the first recording artists to receive this honour from outside an Anglophonic country. In 2015, their song "Dancing Queen" was inducted into the Recording Academy's Grammy Hall of Fame. In 2024, the United States Library of Congress included the album Arrival (1976) in the National Recording Registry, which recognises works "worthy of preservation for all time based on their cultural, historical or aesthetic importance in the nation's recorded sound heritage".
In 2016, the group reunited and started working on a digital avatar concert tour. Newly recorded songs were announced in 2018. Voyage, their first new album in 40 years, was released on 5 November 2021 to positive critical reviews and strong sales in numerous countries. ABBA Voyage, a concert residency featuring ABBA as virtual avatars, opened in May 2022 in London.
History
1958–1970: before ABBA
Member origins and collaboration
Agnetha Fältskog (born 5 April 1950 in Jönköping, Sweden) sang with a local dance band (headed by Bernt Enghardt) who sent a demo recording of their music to Karl-Gerhard Lundkvist. The demo tape featured a song written and sung by Agnetha: "Jag var så kär" ("I Was So in Love"). Lundkvist was so impressed with her voice that he was convinced she would be a star. After going through considerable effort to locate the singer, he arranged for Agnetha to come to Stockholm and to record two of her own songs. This led to Agnetha at the age of 18 having a number-one record in Sweden with a self-composed song, which later went on to sell over 80,000 copies. She was soon noticed by the critics and songwriters as a talented singer/songwriter of schlager style songs. Fältskog's main inspiration in her early years was singers such as Connie Francis. Along with her own compositions, she recorded covers of foreign hits and performed them on tours in Swedish folkparks. Most of her biggest hits were self-composed, which was quite unusual for a female singer in the 1960s. Agnetha released four solo LPs between 1968 and 1971. She had many successful singles in the Swedish charts.
Björn Ulvaeus (born 25 April 1945 in Gothenburg, Sweden) also began his musical career at the age of 18 (as a singer and guitarist), when he fronted the Hootenanny Singers, a popular Swedish folk–skiffle group. Ulvaeus started writing English-language songs for his group and even had a brief solo career alongside it. The Hootenanny Singers and the Hep Stars sometimes crossed paths while touring. In June 1966, Ulvaeus and Andersson decided to write a song together. Their first attempt was "Isn't It Easy to Say", a song that was later recorded by the Hep Stars. Stig Anderson was the manager of the Hootenanny Singers and founder of the Polar Music label. He saw potential in the collaboration, and encouraged them to write more. The two also began playing occasionally with the other's bands on stage and on record, although it was not until 1969 that the pair wrote and produced some of their first real hits together: "Ljuva sextital" ("Sweet Sixties"), recorded by Brita Borg, and the Hep Stars' 1969 hit "Speleman" ("Fiddler").
Benny Andersson (born 16 December 1946 in Stockholm, Sweden) became (at age 18) a member of a popular Swedish pop-rock group, the Hep Stars, that performed, among other things, covers of international hits. The Hep Stars were known as "the Swedish Beatles". They also set up Hep House, their equivalent of Apple Corps. Andersson played the keyboard and eventually started writing original songs for his band, many of which became major hits, including "No Response", which hit number three in 1965, and "Sunny Girl", "Wedding", and "Consolation", all of which hit number one in 1966. Andersson also had a fruitful songwriting collaboration with Lasse Berghagen, with whom he wrote his first Svensktoppen entry, "Sagan om lilla Sofie" ("The tale of Little Sophie") in 1968.
Andersson wrote and submitted the song "Hej, Clown" for Melodifestivalen 1969, the national festival to select the Swedish entry to the Eurovision Song Contest. The song tied for first place, but re-voting relegated Andersson's song to second place. On that occasion Andersson briefly met his future spouse, singer Anni-Frid Lyngstad, who also participated in the contest. A month later, the two had become a couple. As their respective bands began to break up during 1969, Andersson and Ulvaeus teamed up and recorded their first album together in 1970, called Lycka ("Happiness"), which included original songs sung by both men. Their partners were often present in the recording studio, and sometimes added backing vocals; Fältskog even co-wrote a song with the two. Ulvaeus still occasionally recorded and performed with the Hootenanny Singers until the middle of 1974, and Andersson took part in producing their records.
Anni-Frid "Frida" Lyngstad (born 15 November 1945 in Bjørkåsen in Ballangen Municipality, Norway) sang from the age of 13 with various dance bands, and worked mainly in a jazz-oriented cabaret style. She also formed her own band, the Anni-Frid Four. In the middle of 1967, she won a national talent competition with "En ledig dag" ("A Day Off"), a Swedish version of the bossa nova song "A Day in Portofino", which is included in the EMI compilation Frida 1967–1972. The first prize was a recording contract with EMI Sweden and to perform live on the most popular TV shows in the country. This TV performance, among many others, is included in the -hour documentary Frida – The DVD. Lyngstad released several schlager style singles on EMI with mixed success. When Benny Andersson started to produce her recordings in 1971, she had her first number-one single, "Min egen stad" ("My Own Town"), written by Benny and featuring all the future ABBA members on backing vocals. Lyngstad toured and performed regularly in the folkpark circuit and made appearances on radio and TV. She had a second number-one single with "Man Vill Ju Leva Lite Dessemellan" in late 1972. She had met Ulvaeus briefly in 1963 during a talent contest, and Fältskog during a TV show in early 1968.
Lyngstad linked up with her future bandmates in 1969. On 1 March 1969, she participated in the Melodifestival, where she met Andersson for the first time. A few weeks later they met again during a concert tour in southern Sweden and they soon became a couple. Andersson produced her single "Peter Pan" in September 1969—her first collaboration with Benny & Björn, as they had written the song. Andersson would then produce Lyngstad's debut studio album, Frida, which was released in March 1971. Lyngstad also played in several revues and cabaret shows in Stockholm between 1969 and 1973. After ABBA formed, she recorded another successful album in 1975, Frida ensam, which included the original Swedish rendition of "Fernando", a hit on the Swedish radio charts before the English version was released by ABBA.
During filming of a Swedish TV special in May 1969, Fältskog met Ulvaeus and they married on 6 July 1971. Fältskog and Ulvaeus eventually were involved in each other's recording sessions, and soon even Andersson and Lyngstad added backing vocals to Fältskog's third studio album, Som jag är ("As I Am") (1970). In 1972, Fältskog starred as Mary Magdalene in the original Swedish production of Jesus Christ Superstar and attracted favourable reviews. Between 1967 and 1975, Fältskog released five studio albums.
First live performance and the start of "Festfolket"
An attempt at combining their talents occurred in April 1970 when the two couples went on holiday together to the island of Cyprus. What started as singing for fun on the beach ended up as an improvised live performance in front of the United Nations soldiers stationed on the island. Andersson and Ulvaeus were at this time recording their first album together, Lycka, which was to be released in September 1970. Fältskog and Lyngstad added backing vocals on several tracks during June, and the idea of their working together saw them launch a stage act, "Festfolket" (which translates from Swedish to "Party People" and in pronunciation also "engaged couples"), on 1 November 1970 in Gothenburg.
The cabaret show attracted generally negative reviews, except for the performance of the Andersson and Ulvaeus hit "Hej, gamle man" ("Hello, Old Man")—the first Björn and Benny recording to feature all four. They also performed solo numbers from respective albums, but the lukewarm reception convinced the foursome to shelve plans for working together for the time being, and each soon concentrated on individual projects again.
First record together "Hej, gamle man"
"Hej, gamle man", a song about an old Salvation Army soldier, became the quartet's first hit. The record was credited to Björn & Benny and reached number five on the sales charts and number one on Svensktoppen, staying on the latter chart (which was not a chart linked to sales or airplay) for 15 weeks.
It was during 1971 that the four artists began working together more, adding vocals to the others' recordings. Fältskog, Andersson and Ulvaeus toured together in May, while Lyngstad toured on her own. Frequent recording sessions brought the foursome closer together during the summer.
1970–1973: forming the group
After the 1970 release of Lycka, two more singles credited to "Björn & Benny" were released in Sweden, "Det kan ingen doktor hjälpa" ("No Doctor Can Help with That") and "Tänk om jorden vore ung" ("Imagine If Earth Was Young"), with more prominent vocals by Fältskog and Lyngstad–and moderate chart success. Fältskog and Ulvaeus, now married, started performing together with Andersson on a regular basis at the Swedish folkparks in the middle of 1971.
Stig Anderson, founder and owner of Polar Music, was determined to break into the mainstream international market with music by Andersson and Ulvaeus. "One day the pair of you will write a song that becomes a worldwide hit," he predicted. Stig Anderson encouraged Ulvaeus and Andersson to write a song for Melodifestivalen, and after two rejected entries in 1971, Andersson and Ulvaeus submitted their new song "Säg det med en sång" ("Say It with a Song") for the 1972 contest, choosing newcomer Lena Anderson to perform. The song came in third place, encouraging Stig Anderson, and became a hit in Sweden.
The first signs of foreign success came as a surprise, as the Andersson and Ulvaeus single "She's My Kind of Girl" was released through Epic Records in Japan in March 1972, giving the duo a Top 10 hit. Two more singles were released in Japan, "En Carousel" ("En Karusell" in Scandinavia, an earlier version of "Merry-Go-Round") and "Love Has Its Ways" (a song they wrote with Kōichi Morita).
First hit as Björn, Benny, Agnetha and Anni-Frid
Ulvaeus and Andersson persevered with their songwriting and experimented with new sounds and vocal arrangements. "People Need Love" was released in June 1972, featuring guest vocals by the women, who were now given much greater prominence. Stig Anderson released it as a single, credited to Björn & Benny, Agnetha & Anni-Frid. The song peaked at number 17 in the Swedish combined single and album charts, enough to convince them they were on to something.
"People Need Love" also became the first record to chart for the quartet in the United States, where it peaked at number 114 on the Cashbox singles chart and number 117 on the Record World singles chart. Labelled as Björn & Benny (with Svenska Flicka) meaning Swedish Girl, it was released there through Playboy Records. According to Stig Anderson, "People Need Love" could have been a much bigger American hit, but a small label like Playboy Records did not have the distribution resources to meet the demand for the single from retailers and radio programmers.
"Ring Ring"
In 1973, the band and their manager Stig Anderson decided to have another try at Melodifestivalen, this time with the song "Ring Ring". The studio sessions were handled by Michael B. Tretow, who experimented with a "wall of sound" production technique that became a distinctive new sound thereafter associated with ABBA. Stig Anderson arranged an English translation of the lyrics by Neil Sedaka and Phil Cody and they thought this would be a success. However, on 10 February 1973, the song came third in Melodifestivalen; thus it never reached the Eurovision Song Contest itself. Nevertheless, the group released their debut studio album, also called Ring Ring. The album did well and the "Ring Ring" single was a hit in many parts of Europe and also in South Africa. However, Stig Anderson felt that the true breakthrough could only come with a UK or US hit.
When Agnetha Fältskog gave birth to her daughter Linda in 1973, she was replaced for a short period by Inger Brundin on a trip to West Germany.
Official naming
In 1973, Stig Anderson, tired of unwieldy names, started to refer to the group privately and publicly as ABBA (a palindrome). At first, this was a play on words, as Abba is also the name of a well-known fish-canning company in Sweden, and itself an abbreviation. However, since the fish-canners were unknown outside Sweden, Anderson came to believe the name would work in international markets. A competition to find a suitable name for the group was held in a Gothenburg newspaper and it was officially announced in the summer that the group were to be known as "ABBA". The group negotiated with the canners for the rights to the name. Fred Bronson reported for Billboard that Fältskog told him in a 1988 interview that "[ABBA] had to ask permission and the factory said, 'O.K., as long as you don't make us feel ashamed for what you're doing.'"
"ABBA" is an acronym formed from the first letters of each group member's first name: Agnetha, Björn, Benny, Anni-Frid, although there has never been any official confirmation of who each letter in the sequence refers to. The earliest known example of "ABBA" written on paper is on a recording session sheet from the Metronome Studio in Stockholm dated 16 October 1973. This was first written as "Björn, Benny, Agnetha & Frida", but was subsequently crossed out with "ABBA" written in large letters on top.
Official logo
Their official logo, with its distinctive backward "B", was designed by Rune Söderqvist, who designed most of ABBA's record sleeves. The ambigram first appeared on the French compilation album, Golden Double Album, released in May 1976 by Disques Vogue, and would henceforth be used for all official releases.
The idea for the official logo came from the German photographer Heilemann during a velvet jumpsuit photo shoot for the teenage magazine Bravo. In the photo, the ABBA members held giant initial letters of their names. After the pictures were made, Heilemann found out that Benny Andersson had reversed his letter "B"; this prompted discussions about the mirrored "B", and the members of ABBA agreed on the mirrored letter. From 1976 onward, the first "B" in the logo version of the name was "mirror-image" reversed on the band's promotional material.
Following their acquisition of the group's catalogue, PolyGram began using variations of the ABBA logo, employing a different font. In 1992, Polygram added a crown emblem to it for the first release of the ABBA Gold: Greatest Hits compilation. After Universal Music purchased PolyGram (and, thus, ABBA's label Polar Music International), control of the group's catalogue returned to Stockholm. Since then, the original logo has been reinstated on all official products.
1973–1976: breakthrough
Eurovision Song Contest 1974
ABBA entered the Melodifestivalen with "Ring Ring" but did not qualify as the 1973 Swedish entry. Stig Anderson started planning for the 1974 contest. Ulvaeus, Andersson and Stig Anderson saw possibilities in using the Eurovision Song Contest to make the music business aware of them as songwriters, as well as to publicise the band. In late 1973 they were invited by Swedish television to contribute a song for the Melodifestivalen 1974, and the upbeat song "Waterloo" was chosen. The group were now inspired by the growing glam rock scene in England.
With this third attempt, ABBA were more experienced and better prepared for the Eurovision Song Contest, and they won the nation's hearts on Swedish television on 9 February 1974. Winning the 1974 Eurovision Song Contest on 6 April 1974, and singing "Waterloo" in English instead of their native language, gave them the chance to tour Europe and perform on major television shows, as a result of which the "Waterloo" single charted in many European countries. After winning the contest, ABBA spent an evening of glory partying in the appropriately named first-floor Napoleon suite of The Grand Brighton Hotel.
"Waterloo" was ABBA's first major hit and their first number-one single in nine western and northern European countries, including the major markets of the UK and West Germany, and in South Africa. It made the top ten in other countries, rising to number three in Spain, number four in Australia and France, and number seven in Canada. In the United States, the song peaked at number six on the Billboard Hot 100 chart, paving the way for their first album and their first trip to the US as a group. Although only a short promotional visit, this included their first performance on American television, on The Mike Douglas Show. The Waterloo album peaked at only number 145 on the Billboard 200 chart, but received unanimous praise from US critics. The Los Angeles Times said the album was a "compelling and fascinating debut album" that captured the spirit of mainstream pop, and described it as "immensely enjoyable and pleasant", while Creem said it was "a perfect blend of exceptional, lovable compositions".
ABBA's follow-up single, "Honey, Honey", peaked at number 27 on the US Billboard Hot 100, reached the top twenty in several other countries, and was a number-two hit in West Germany, although it only reached the top 30 in Australia and the US. In the UK, ABBA's British record label, Epic, decided to re-release a remixed version of "Ring Ring" instead of "Honey, Honey". A cover version of "Honey, Honey" by Sweet Dreams peaked at number 10, and both records debuted on the UK chart within a week of each other. "Ring Ring" failed to reach the Top 30 in the UK, increasing growing speculation that the group were simply a Eurovision one-hit wonder.
Post-Eurovision
In November 1974, ABBA embarked on their first European tour, playing dates in Denmark, West Germany and Austria. It was not as successful as the band had hoped, since most of the venues did not sell out. Due to a lack of demand, they were even forced to cancel a few shows, including a sole concert scheduled in Switzerland. The second leg of the tour, which took them through Scandinavia in January 1975, was very different. They played to full houses everywhere and finally got the reception they had aimed for. Live performances continued in the middle of 1975 when ABBA embarked on a fourteen-date open-air tour of Sweden and Finland. Their Stockholm show at the Gröna Lund amusement park had an estimated audience of 19,200. Björn Ulvaeus later said, "If you look at the singles we released straight after Waterloo, we were trying to be more like The Sweet, a semi-glam rock group, which was stupid because we were always a pop group."
In late 1974, "So Long" was released as a single in the United Kingdom but it received no airplay from Radio 1 and failed to chart in the UK; the only countries in which it was successful were Austria, Sweden and Germany, reaching the top ten in the first two and number 21 in the latter. In the middle of 1975, ABBA released "I Do, I Do, I Do, I Do, I Do", which again received little airplay on Radio 1, but did manage to climb to number 38 on the UK chart, while making top five in several northern and western European countries, and number one in South Africa. Later that year, the release of their self-titled third studio album ABBA and single "SOS" brought back their chart presence in the UK, where the single hit number six and the album peaked at number 13. "SOS" also became ABBA's second number-one single in Germany, their third in Australia and reached number two in several other European countries, including Italy.
Success was further solidified with "Mamma Mia" reaching number-one in the United Kingdom, Germany and Australia and the top two in a few other western and northern European countries. In the United States, both "I Do, I Do, I Do, I Do, I Do" and "SOS" peaked at number 15 on the Billboard Hot 100 chart, with the latter picking up the BMI Award along the way as one of the most played songs on American radio in 1975. "Mamma Mia", however, stalled at number 32. In Canada, the three songs rose to number 12, nine and 18, respectively.
The success of the group in the United States had until that time been limited to single releases. By early 1976, the group already had four Top 30 singles on the US charts, but the album market proved to be tough to crack. The eponymous ABBA album generated three American hits, but it only peaked at number 165 on the Cashbox album chart and number 174 on the Billboard 200 chart. Opinions were voiced, by Creem in particular, that in the US ABBA had endured "a very sloppy promotional campaign". Nevertheless, the group enjoyed warm reviews from the American press. Cashbox went as far as saying that "there is a recurrent thread of taste and artistry inherent in Abba's marketing, creativity and presentation that makes it almost embarrassing to critique their efforts", while Creem wrote: "SOS is surrounded on this LP by so many good tunes that the mind boggles."
In Australia, the airing of the music videos for "I Do, I Do, I Do, I Do, I Do" and "Mamma Mia" on the nationally broadcast TV pop show Countdown (which premiered in November 1974) saw the band rapidly gain enormous popularity, and Countdown become a key promoter of the group via their distinctive music videos. This started an immense interest for ABBA in Australia, resulting in "I Do, I Do, I Do, I Do, I Do" staying at number one for three weeks, then "SOS" spending a week there, followed by "Mamma Mia" staying there for ten weeks, and the album holding down the number one position for months. The three songs were also successful in nearby New Zealand with the first two topping that chart and the third reaching number two.
1976–1981: superstardom
Greatest Hits and Arrival
In March 1976, the band released the compilation album Greatest Hits. It became their first UK number-one album, and also took ABBA into the Top 50 on the US album charts for the first time, eventually selling more than a million copies there. Also included on Greatest Hits was a new single, "Fernando", which went to number-one in at least thirteen countries all over the world, including the UK, Germany, France, Australia, South Africa and Mexico, and the top five in most other significant markets, including, at number four, becoming their biggest hit to date in Canada; the single went on to sell over 10 million copies worldwide.
In Australia, "Fernando" occupied the top position for a then record breaking 14 weeks (and stayed in the chart for 40 weeks), and was the longest-running chart-topper there for over 40 years until it was overtaken by Ed Sheeran's "Shape of You" in May 2017. It still remains as one of the best-selling singles of all time in Australia. Also in 1976, the group received its first international prize, with "Fernando" being chosen as the "Best Studio Recording of 1975". In the United States, "Fernando" reached the Top 10 of the Cashbox Top 100 singles chart and number 13 on the Billboard Hot 100. It topped the Billboard Adult Contemporary chart, ABBA's first American number-one single on any chart. At the same time, a compilation named The Very Best of ABBA was released in Germany, becoming a number-one album there whereas the Greatest Hits compilation which followed a few months later ascended to number two in Germany, despite all similarities with The Very Best album.
The group's fourth studio album, Arrival, a number-one best-seller in parts of Europe, the UK and Australia, and a number-three hit in Canada and Japan, represented a new level of accomplishment in both songwriting and studio work, prompting rave reviews from more rock-oriented UK music weeklies such as Melody Maker and New Musical Express, and mostly appreciative notices from US critics.
Hit after hit flowed from Arrival: "Money, Money, Money", another number-one in Germany, France, Australia and other countries of western and northern Europe, plus number three in the UK; and, "Knowing Me, Knowing You", ABBA's sixth consecutive German number-one, as well as another UK number-one, plus a top five hit in many other countries, although it was only a number nine hit in Australia and France. The real sensation was the first single, "Dancing Queen", not only topping the charts in loyal markets like the UK, Germany, Sweden, several other western and northern European countries, and Australia, but also reaching number-one in the United States, Canada, the Soviet Union and Japan, and the top ten in France, Spain and Italy. All three songs were number-one hits in Mexico. In South Africa, ABBA had astounding success with each of "Fernando", "Dancing Queen" and "Knowing Me, Knowing You" being among the top 20 best-selling singles for 1976–77. In 1977, Arrival was nominated for the inaugural BRIT Award in the category "Best International Album of the Year". By this time ABBA were popular in the UK, most of Europe, Australia, New Zealand and Canada. In Frida – The DVD, Lyngstad explains how she and Fältskog developed as singers, as ABBA's recordings grew more complex over the years.
The band's mainstream popularity in the United States would remain on a comparatively smaller scale, and "Dancing Queen" became the only Billboard Hot 100 number-one single for ABBA (though it immediately became, and remains to this day, a major gay anthem), with "Knowing Me, Knowing You" later peaking at number seven; "Money, Money, Money", however, barely charted there or in Canada (where "Knowing Me, Knowing You" had reached number five). They did get three more singles to the number-one position on other Billboard US charts, including Billboard Adult Contemporary and Hot Dance Club Play. Nevertheless, Arrival finally became a true breakthrough release for ABBA on the US album market where it peaked at number 20 on the Billboard 200 chart and was certified gold by RIAA.
European and Australian tour
In January 1977, ABBA embarked on their first major tour. The group's status had changed dramatically and they were widely regarded as superstars. They opened their much anticipated tour in Oslo, Norway, on 28 January, and mounted a lavishly produced spectacle that included a few scenes from their self-written mini-operetta The Girl with the Golden Hair. The concert attracted huge media attention from across Europe and Australia. They continued the tour through Western Europe, visiting Gothenburg, Copenhagen, Berlin, Cologne, Amsterdam, Antwerp, Essen, Hanover, and Hamburg and ending with shows in the United Kingdom in Manchester, Birmingham, Glasgow and two sold-out concerts at London's Royal Albert Hall. Tickets for these two shows were available only by mail application and it was later revealed that the box-office received 3.5 million requests for tickets, enough to fill the venue 580 times.
Along with praise ("ABBA turn out to be amazingly successful at reproducing their records", wrote Creem), there were complaints that "ABBA performed slickly...but with a zero personality coming across from a total of 16 people on stage" (Melody Maker). One of the Royal Albert Hall concerts was filmed as a reference for the filming of the Australian tour for what became ABBA: The Movie, though it is not exactly known how much of the concert was filmed.
After the European leg of the tour, in March 1977, ABBA played 11 dates in Australia before a total of 160,000 people. The opening concert in Sydney at the Sydney Showground on 3 March to an audience of 20,000 was marred by torrential rain with Lyngstad slipping on the wet stage during the concert. However, all four members would later recall this concert as the most memorable of their career.
Upon their arrival in Melbourne, a civic reception was held at the Melbourne Town Hall and ABBA appeared on the balcony to greet an enthusiastic crowd of 6,000. In Melbourne, the group gave three concerts at the Sidney Myer Music Bowl with 14,500 at each including the Australian Prime Minister Malcolm Fraser and his family. At the first Melbourne concert, an additional 16,000 people gathered outside the fenced-off area to listen to the concert. In Adelaide, the group performed one concert at Football Park in front of 20,000 people, with another 10,000 listening outside. During the first of five concerts in Perth, there was a bomb scare with everyone having to evacuate the Entertainment Centre. The trip was accompanied by mass hysteria and unprecedented media attention ("Swedish ABBA stirs box-office in Down Under tour...and the media coverage of the quartet rivals that set to cover the upcoming Royal tour of Australia", wrote Variety), and is captured on film in ABBA: The Movie, directed by Lasse Hallström.
The Australian tour and its subsequent ABBA: The Movie produced some ABBA lore, as well. Fältskog's blonde good looks had long made her the band's "pin-up girl", a role she disdained. During the Australian tour, she performed in a skin-tight white jumpsuit, causing one Australian newspaper to use the headline "Agnetha's bottom tops dull show". When asked about this at a news conference, she replied: "Don't they have bottoms in Australia?"
ABBA: The Album
In December 1977, ABBA followed up Arrival with the more ambitious fifth album, ABBA: The Album, released to coincide with the debut of ABBA: The Movie. Although the album was less well received by UK reviewers, it did spawn more worldwide hits: "The Name of the Game" and "Take a Chance on Me", which both topped the UK charts and racked up impressive sales in most countries, although "The Name of the Game" was generally the more successful in the Nordic countries and Australia, while "Take a Chance on Me" was more successful in North America and the German-speaking countries.
"The Name of the Game" was a number two hit in the Netherlands, Belgium and Sweden while also making the Top 5 in Finland, Norway, New Zealand and Australia, while only peaking at numbers 10, 12 and 15 in Mexico, the US and Canada. "Take a Chance on Me" was a number one hit in Austria, Belgium and Mexico, made the Top 3 in the US, Canada, the Netherlands, Germany and Switzerland, while only reaching numbers 12 and 14 in Australia and New Zealand, respectively. Both songs were Top 10 hits in countries as far afield as Rhodesia and South Africa, as well as in France. Although "Take a Chance on Me" did not top the American charts, it proved to be ABBA's biggest hit single there, selling more copies than "Dancing Queen". The drop in sales in Australia was felt to be inevitable by industry observers as an "Abba-Fever" that had existed there for almost three years could only last so long as adolescents would naturally begin to move away from a group so deified by both their parents and grandparents.
A third single, "Eagle", was released in continental Europe and Australia becoming a number one hit in Belgium and a Top 10 hit in the Netherlands, Germany, Switzerland and South Africa, but barely charting in Australia. The B-side of "Eagle" was "Thank You for the Music", and it was belatedly released as an A-side single in both the United Kingdom and Ireland in 1983. "Thank You for the Music" has become one of the best loved and best known ABBA songs without being released as a single during the group's lifetime. ABBA: The Album topped the album charts in the UK, the Netherlands, New Zealand, Sweden, Norway, Switzerland, while ascending to the Top 5 in Australia, Germany, Austria, Finland and Rhodesia, and making the Top 10 in Canada and Japan. Sources also indicate that sales in Poland exceeded 1 million copies and that sales demand in Russia could not be met by the supply available. The album peaked at number 14 in the US.
Polar Music Studio formation
By 1978, ABBA were one of the biggest bands in the world. They converted a vacant cinema into the Polar Music Studio, a state-of-the-art studio in Stockholm. The studio was used by several other bands; notably, Genesis' Duke, Led Zeppelin's In Through the Out Door and Scorpions' Lovedrive were recorded there. During May 1978, the group went to the United States for a promotional campaign, performing alongside Andy Gibb on Olivia Newton-John's TV show. Recording sessions for the single "Summer Night City" were an uphill struggle, but upon release the song became another hit for the group. The track would set the stage for ABBA's foray into disco with their next album.
On 9 January 1979, the group performed "Chiquitita" at the Music for UNICEF Concert held at the United Nations General Assembly to celebrate UNICEF's Year of the Child. ABBA donated the copyright of this worldwide hit to UNICEF (see Music for UNICEF Concert). The single was released the following week, and reached number-one in ten countries.
North American and European tours
In mid-January 1979, Ulvaeus and Fältskog announced they were getting divorced. The news caused interest from the media and led to speculation about the band's future. ABBA assured the press and their fan base they were continuing their work as a group and that the divorce would not affect them. Nonetheless, the media continued to confront them with this in interviews. To escape the media swirl and concentrate on their writing, Andersson and Ulvaeus secretly travelled to Compass Point Studios in Nassau, Bahamas, where for two weeks they prepared their next album's songs.
The group's sixth studio album, Voulez-Vous, was released in April 1979, with its title track recorded at the famous Criteria Studios in Miami, Florida, with the assistance of recording engineer Tom Dowd among others. The album topped the charts across Europe and in Japan and Mexico, hit the Top 10 in Canada and Australia and the Top 20 in the US. While none of the singles from the album reached number one on the UK chart, the lead single, "Chiquitita", and the fourth single, "I Have a Dream", both ascended to number two, and the other two, "Does Your Mother Know" and "Angeleyes" (with "Voulez-Vous", released as a double A-side) both made the top 5. All four singles reached number one in Belgium, although the last three did not chart in Sweden or Norway. "Chiquitita", which was featured in the Music for UNICEF Concert after which ABBA decided to donate half of the royalties from the song to UNICEF, topped the singles charts in the Netherlands, Switzerland, Finland, Spain, Mexico, South Africa, Rhodesia and New Zealand, rose to number two in Sweden, and made the Top 5 in Germany, Austria, Norway and Australia, although it only reached number 29 in the US.
"I Have a Dream" was a sizeable hit reaching number one in the Netherlands, Switzerland, and Austria, number three in South Africa, and number four in Germany, although it only reached number 64 in Australia. In Canada, "I Have a Dream" became ABBA's second number one on the RPM Adult Contemporary chart (after "Fernando" hit the top previously) although it did not chart in the US. "Does Your Mother Know", a rare song in which Ulvaeus sings lead vocals, was a Top 5 hit in the Netherlands and Finland, and a Top 10 hit in Germany, Switzerland, Australia, although it only reached number 27 in New Zealand. It did better in North America than "Chiquitita", reaching number 12 in Canada and number 19 in the US, and made the Top 20 in Japan. "Voulez-Vous" was a Top 10 hit in the Netherlands and Switzerland, a Top 20 hit in Germany and Finland, but only peaked in the 80s in Australia, Canada and the US.
Also in 1979, the group released their second compilation album, Greatest Hits Vol. 2, which featured a brand-new track: "Gimme! Gimme! Gimme! (A Man After Midnight)", which was a Top 3 hit in the UK, Belgium, the Netherlands, Germany, Austria, Switzerland, Finland and Norway, and returned ABBA to the Top 10 in Australia. Greatest Hits Vol. 2 went to number one in the UK, Belgium, Canada and Japan while making the Top 5 in several other countries, but only reaching number 20 in Australia and number 46 in the US. In the Soviet Union during the late 1970s, the group were paid in oil commodities because of an embargo on the rouble.
On 13 September 1979, ABBA began ABBA: The Tour at Northlands Coliseum in Edmonton, Canada, with a full house of 14,000. "The voices of the band, Agnetha's high sauciness combined with round, rich lower tones of Anni-Frid, were excellent...Technically perfect, melodically correct and always in perfect pitch...The soft lower voice of Anni-Frid and the high, edgy vocals of Agnetha were stunning", raved Edmonton Journal.
During the next four weeks they played a total of 17 sold-out dates, 13 in the United States and four in Canada. The last scheduled ABBA concert in the United States in Washington, D.C. was cancelled due to emotional distress Fältskog experienced during the flight from New York to Boston. The group's private plane was subjected to extreme weather conditions and was unable to land for an extended period. They appeared at the Boston Music Hall for the performance 90 minutes late. The tour ended with a show in Toronto, Canada at Maple Leaf Gardens before a capacity crowd of 18,000. "ABBA plays with surprising power and volume; but although they are loud, they're also clear, which does justice to the signature vocal sound... Anyone who's been waiting five years to see Abba will be well satisfied", wrote Record World. On 19 October 1979, the tour resumed in Western Europe where the band played 23 sold-out gigs, including six sold-out nights at London's Wembley Arena.
Progression
In March 1980, ABBA travelled to Japan where upon their arrival at Narita International Airport, they were besieged by thousands of fans. The group performed eleven concerts to full houses, including six shows at Tokyo's Budokan. This tour was the last "on the road" adventure of their career.
In July 1980, ABBA released the single "The Winner Takes It All", the group's eighth UK chart-topper (and their first since 1978). The song is widely misunderstood as being written about Ulvaeus and Fältskog's marital tribulations; Ulvaeus wrote the lyrics, but has stated they were not about his own divorce, and Fältskog has repeatedly stated she was not the loser in their divorce. In the United States, the single peaked at number-eight on the Billboard Hot 100 chart and became ABBA's second Billboard Adult Contemporary number-one. At the end of 1980, the song was also recorded, with a slightly different backing track produced by Andersson and Ulvaeus, by French chanteuse Mireille Mathieu as "Bravo tu as gagné", with French lyrics by Alain Boublil.
In November 1980, ABBA's seventh album Super Trouper was released, which reflected a certain change in ABBA's style with more prominent use of synthesizers and increasingly personal lyrics. It set a record for the most pre-orders ever received for a UK album after one million copies were ordered before release. The second single from the album, "Super Trouper", also hit number-one in the UK, becoming the group's ninth and final UK chart-topper. Another track from the album, "Lay All Your Love on Me", released in 1981 as a twelve-inch single only in selected territories, managed to top the Billboard Hot Dance Club Play chart and peaked at number-seven on the UK singles chart, becoming, at the time, the highest-charting 12-inch release in UK chart history.
Also in 1980, ABBA recorded a compilation of Spanish-language versions of their hits called Gracias Por La Música. This was released in Spanish-speaking countries as well as in Japan and Australia. The album became a major success, and along with the Spanish version of "Chiquitita", this signalled the group's breakthrough in Latin America. ABBA Oro: Grandes Éxitos, the Spanish equivalent of ABBA Gold: Greatest Hits, was released in 1999.
1981–1982: The Visitors and later performances
In January 1981, Ulvaeus married Lena Källersjö, and manager Stig Anderson celebrated his 50th birthday with a party. For this occasion, ABBA recorded the track "Hovas Vittne" (a pun on the Swedish name for Jehovah's Witness and Anderson's birthplace, Hova) as a tribute to him, and released it only on 200 red vinyl copies, to be distributed to the guests attending the party. This single has become a sought-after collectable. In mid-February 1981, Andersson and Lyngstad announced they were filing for divorce. Information surfaced that their marriage had been an uphill struggle for years, and Benny had already met another woman, Mona Nörklit, whom he married in November 1981.
Andersson and Ulvaeus had songwriting sessions in early 1981, and recording sessions began in mid-March. At the end of April, the group recorded a TV special, Dick Cavett Meets ABBA with the US talk show host Dick Cavett. The Visitors, ABBA's eighth studio album, showed a songwriting maturity and depth of feeling distinctly lacking from their earlier recordings but still placing the band squarely in the pop genre, with catchy tunes and harmonies. Although not revealed at the time of its release, the album's title track, according to Ulvaeus, refers to the secret meetings held against the approval of totalitarian governments in Soviet-dominated states, while other tracks address topics like failed relationships, the threat of war, ageing, and loss of innocence. The album's only major single release, "One of Us", proved to be the last of ABBA's nine number-one singles in Germany, this being in December 1981; and the swansong of their sixteen Top 5 singles on the South African chart. "One of Us" was also ABBA's final Top 3 hit in the UK, reaching number-three on the UK Singles Chart.
Although it topped the album charts across most of Europe, including Ireland, the UK and Germany, The Visitors was not as commercially successful as its predecessors, showing a commercial decline in previously loyal markets such as France, Australia and Japan. A track from the album, "When All Is Said and Done", was released as a single in North America, Australia and New Zealand, and fittingly became ABBA's final Top 40 hit in the US (debuting on the US charts on 31 December 1981), while also reaching the US Adult Contemporary Top 10, and number-four on the RPM Adult Contemporary chart in Canada. The song's lyrics, as with "The Winner Takes It All" and "One of Us", dealt with the painful experience of separating from a long-term partner, though it looked at the trauma more optimistically. With the now publicised story of Andersson and Lyngstad's divorce, speculation increased of tension within the band. Also released in the United States was the title track of The Visitors, which hit the Top Ten on the Billboard Hot Dance Club Play chart.
Later recording sessions
In the spring of 1982, songwriting sessions had started and the group came together for more recordings. Plans were not completely clear, but a new album was discussed and the prospect of a small tour suggested. The recording sessions in May and June 1982 were a struggle, and only three songs were eventually recorded: "You Owe Me One", "I Am the City" and "Just Like That". Andersson and Ulvaeus were not satisfied with the outcome, so the tapes were shelved and the group took a break for the summer.
Back in the studio again in early August, the group had changed plans for the rest of the year: they settled for a Christmas release of a double album compilation of all their past single releases to be named The Singles: The First Ten Years. New songwriting and recording sessions took place, and during October and December, they released the singles "The Day Before You Came"/"Cassandra" and "Under Attack"/"You Owe Me One", the A-sides of which were included on the compilation album. Neither single made the Top 20 in the United Kingdom, though "The Day Before You Came" became a Top 5 hit in many European countries such as Germany, the Netherlands and Belgium. The album went to number one in the UK and Belgium, Top 5 in the Netherlands and Germany and Top 20 in many other countries. "Under Attack", the group's final release before disbanding, was a Top 5 hit in the Netherlands and Belgium.
"I Am the City" and "Just Like That" were left unreleased on The Singles: The First Ten Years for possible inclusion on the next projected studio album, though this never came to fruition. "I Am the City" was eventually released on the compilation album More ABBA Gold in 1993, while "Just Like That" has been recycled in new songs with other artists produced by Andersson and Ulvaeus. A reworked version of the verses ended up in the musical Chess. The chorus section of "Just Like That" was eventually released on a retrospective box set in 1994, as well as in the ABBA Undeleted medley featured on disc 9 of The Complete Studio Recordings. Despite a number of requests from fans, Ulvaeus and Andersson are still refusing to release ABBA's version of "Just Like That" in its entirety, even though the complete version has surfaced on bootlegs.
The group travelled to London to promote The Singles: The First Ten Years in the first week of November 1982, appearing on Saturday Superstore and The Late, Late Breakfast Show, and also to West Germany in the second week, to perform on Show Express. On 19 November 1982, ABBA appeared for the last time in Sweden on the TV programme Nöjesmaskinen, and on 11 December 1982, they made their last performance ever, transmitted to the UK on Noel Edmonds' The Late, Late Breakfast Show, through a live link from a TV studio in Stockholm.
Later performances
Andersson and Ulvaeus began collaborating with Tim Rice in early 1983 on writing songs for the musical project Chess, while Fältskog and Lyngstad both concentrated on international solo careers. While Andersson and Ulvaeus were working on the musical, a further co-operation among the three of them came with the musical Abbacadabra that was produced in France for television. It was a children's musical using 14 ABBA songs. Alain and Daniel Boublil, who wrote Les Misérables, had been in touch with Stig Anderson about the project, and the TV musical was aired over Christmas on French TV and later a Dutch version was also broadcast. Boublil previously also wrote the French lyric for Mireille Mathieu's version of "The Winner Takes It All".
Lyngstad, who had recently moved to Paris, participated in the French version, and recorded a single, "Belle", a duet with French singer Daniel Balavoine. The song was a cover of ABBA's 1976 instrumental track "Arrival". As the single "Belle" sold well in France, Cameron Mackintosh wanted to stage an English-language version of the show in London, with the French lyrics translated by David Wood and Don Black; Andersson and Ulvaeus got involved in the project, and contributed with one new song, "I Am the Seeker". "Abbacadabra" premiered on 8 December 1983 at the Lyric Hammersmith Theatre in London, to mixed reviews and full houses for eight weeks, closing on 21 January 1984. Lyngstad was also involved in this production, recording "Belle" in English as "Time", a duet with actor and singer B. A. Robertson: the single sold well and was produced and recorded by Mike Batt. In May 1984, Lyngstad performed "I Have a Dream" with a children's choir at the United Nations Organisation Gala, in Geneva, Switzerland.
All four members made their (at the time, final) public appearance as four friends more than as ABBA in January 1986, when they recorded a video of themselves performing an acoustic version of "Tivedshambo" (which was the first song written by their manager Stig Anderson), for a Swedish TV show honouring Anderson on his 55th birthday. The four had not seen each other for more than two years. That same year they also performed privately at another friend's 40th birthday: their old tour manager, Claes af Geijerstam. They sang a self-written song titled "Der Kleine Franz" that was later to resurface in Chess. Also in 1986, ABBA Live was released, featuring selections of live performances from the group's 1977 and 1979 tours. The four members were guests at the 50th birthday of Görel Hanser in 1999. Hanser was a long-time friend of all four, and also former secretary of Stig Anderson. Honouring Görel, ABBA performed a Swedish birthday song "Med en enkel tulipan" a cappella.
Andersson has on several occasions performed ABBA songs. In June 1992, he and Ulvaeus appeared with U2 at a Stockholm concert, singing the chorus of "Dancing Queen", and a few years later during the final performance of the B & B in Concert in Stockholm, Andersson joined the cast for an encore at the piano. Andersson frequently adds an ABBA song to the playlist when he performs with his BAO band. He also played the piano during new recordings of the ABBA songs "Like an Angel Passing Through My Room" with opera singer Anne Sofie von Otter, and "When All Is Said and Done" with Swede Viktoria Tolstoy. In 2002, Andersson and Ulvaeus both performed an a cappella rendition of the first verse of "Fernando" as they accepted their Ivor Novello award in London. Lyngstad performed and recorded an a cappella version of "Dancing Queen" with the Swedish group the Real Group in 1993, and also re-recorded "I Have a Dream" with Swiss singer Dan Daniell in 2003.
Break and reunion
ABBA never officially announced the end of the group or an indefinite break, but it was long considered dissolved after their final public performance together in 1982. Their final public performance together as ABBA before their 2016 reunion was on the British TV programme The Late, Late Breakfast Show (live from Stockholm) on 11 December 1982. While reminiscing on "The Day Before You Came", Ulvaeus said: "we might have continued for a while longer if that had been a number one".
In January 1983, Fältskog started recording sessions for a solo album, as Lyngstad had successfully released her album Something's Going On some months earlier. Ulvaeus and Andersson, meanwhile, started songwriting sessions for the musical Chess. In interviews at the time, Ulvaeus and Andersson denied that ABBA had split ("Who are we without our ladies? Initials of Brigitte Bardot?"), and throughout 1983 and 1984 Lyngstad and Fältskog repeatedly claimed in interviews that ABBA would come together for a new album. Internal strife between the group and their manager escalated, and the band members sold their shares in Polar Music during 1983. Except for a TV appearance in 1986, the foursome did not come together publicly again until they were reunited at the Swedish premiere of the Mamma Mia! musical on 14 February 2005. The individual members' solo endeavours shortly before and after their final public performance, coupled with the collapse of both marriages and the lack of significant group activity in the following years, strongly suggested that the group had broken up.
In an interview with the Sunday Telegraph following the premiere, Ulvaeus and Andersson said that there was nothing that could entice them back on stage again. Ulvaeus said: "We will never appear on stage again. [...] There is simply no motivation to re-group. Money is not a factor and we would like people to remember us as we were. Young, exuberant, full of energy and ambition. I remember Robert Plant saying Led Zeppelin were a cover band now because they cover all their own stuff. I think that hit the nail on the head."
However, on 3 January 2011, Fältskog, long considered to be the most reclusive member of the group and a major obstacle to any reunion, raised the possibility of reuniting for a one-off engagement. She admitted that she had not yet brought the idea up to the other three members. In April 2013, she reiterated her hopes for a reunion during an interview with Die Zeit, stating: "If they ask me, I'll say yes."
In a May 2013 interview, Fältskog, aged 63 at the time, stated that an ABBA reunion would never occur: "I think we have to accept that it will not happen, because we are too old and each one of us has their own life. Too many years have gone by since we stopped, and there's really no meaning in putting us together again". Fältskog further explained that the band members remained on amicable terms: "It's always nice to see each other now and then and to talk a little and to be a little nostalgic." In an April 2014 interview, Fältskog, when asked about whether the band might reunite for a new recording said: "It's difficult to talk about this because then all the news stories will be: 'ABBA is going to record another song!' But as long as we can sing and play, then why not? I would love to, but it's up to Björn and Benny."
Resurgence of public interest
The same year the members of ABBA went their separate ways, the French production of a "tribute" show (a children's TV musical named Abbacadabra using 14 ABBA songs) spawned new interest in the group's music.
After receiving little attention during the mid-to-late 1980s, ABBA's music experienced a resurgence in the early 1990s due to the UK synth-pop duo Erasure, who released Abba-esque, a four-track extended play of ABBA cover versions, which topped several European charts in 1992. As U2 arrived in Stockholm for a concert in June of that year, they paid homage to ABBA by inviting Björn Ulvaeus and Benny Andersson to join them on stage for a rendition of "Dancing Queen", playing guitar and keyboards. September 1992 saw the release of ABBA Gold: Greatest Hits, a new compilation album. The single "Dancing Queen" received radio airplay in the UK in the middle of 1992 to promote the album. The song returned to the Top 20 of the UK singles chart in August that year, this time peaking at number 16. With sales of 30 million, Gold is the best-selling ABBA album, as well as one of the best-selling albums worldwide. With sales of 5.5 million copies, it is the second-best-selling album of all time in the UK, after Queen's Greatest Hits. More ABBA Gold: More ABBA Hits, a follow-up to Gold, was released in 1993.
In 1994, two Australian cult films caught the attention of the world's media, both focusing on admiration for ABBA: The Adventures of Priscilla, Queen of the Desert and Muriel's Wedding. The same year, Thank You for the Music, a four-disc box set comprising all the group's hits and stand-out album tracks, was released with the involvement of all four members. "By the end of the twentieth century," American critic Chuck Klosterman wrote a decade later, "it was far more contrarian to hate ABBA than to love them."
Two tribute albums of ABBA cover versions have been released. ABBA: A Tribute coincided with the 25th anniversary celebration and featured 17 songs, some of which were recorded especially for the release. Notable tracks include Go West's "One of Us", Army of Lovers' "Hasta Mañana", Information Society's "Lay All Your Love on Me", Erasure's "Take a Chance on Me" (with MC Kinky), and Lyngstad's a cappella duet with the Real Group on "Dancing Queen". A second 12-track album was released in 1999, titled ABBAmania, with proceeds going to the Youth Music charity in England. It featured all-new cover versions: notable tracks were by Madness ("Money, Money, Money"), Culture Club ("Voulez-Vous"), the Corrs ("The Winner Takes It All"), Steps ("Lay All Your Love on Me", "I Know Him So Well"), and a medley titled "Thank ABBA for the Music" performed by several artists and featured at the Brit Awards that same year.
In 1998, an ABBA tribute group was formed, the ABBA Teens, which was subsequently renamed the A-Teens to allow the group some independence. The group's first album, The ABBA Generation, consisting solely of ABBA covers reimagined as 1990s pop songs, was a worldwide success and so were subsequent albums. The group disbanded in 2004 due to a gruelling schedule and intentions to go solo. In Sweden, the growing recognition of the legacy of Andersson and Ulvaeus resulted in the 1998 B & B Concerts, a tribute concert (with Swedish singers who had worked with the songwriters through the years) showcasing not only their ABBA years, but hits both before and after ABBA. The concert was a success and was ultimately released on CD. It later toured Scandinavia and even went to Beijing in the People's Republic of China for two concerts. In 2000 ABBA were reported to have turned down an offer of approximately one billion US dollars to do a reunion tour consisting of 100 concerts.
For the semi-final of the Eurovision Song Contest 2004, staged in Istanbul 30 years after ABBA had won the contest in Brighton, all four members made cameo appearances in a special comedy video made for the interval act, titled Our Last Video Ever. Other well-known stars such as Rik Mayall, Cher and Iron Maiden's Eddie also made appearances in the video. It was not included in the official DVD release of the 2004 Eurovision contest, but was issued as a separate DVD release, retitled The Last Video at the request of the former ABBA members. The video was made using puppet models of the members of the band. The video has surpassed 13 million views on YouTube as of November 2020.
In 2005, all four members of ABBA appeared at the Stockholm premiere of the musical Mamma Mia!. On 22 October 2005, at the 50th anniversary celebration of the Eurovision Song Contest, "Waterloo" was chosen as the best song in the competition's history. In the same month, American singer Madonna released the single "Hung Up", which contains a sample of the keyboard melody from ABBA's 1979 song "Gimme! Gimme! Gimme! (A Man After Midnight)"; the song was a smash hit, peaking at number one in at least 50 countries. On 4 July 2008, all four ABBA members were reunited at the Swedish premiere of the film Mamma Mia!. It was only the second time all of them had appeared together in public since 1986. During the appearance, they re-emphasised that they intended never to officially reunite, citing the opinion of Robert Plant that the re-formed Led Zeppelin was more like a cover band of itself than the original band. Ulvaeus stated that he wanted the band to be remembered as they were during the peak years of their success.
Gold returned to number-one in the UK album charts for the fifth time on 3 August 2008. On 14 August 2008, the Mamma Mia! The Movie film soundtrack went to number-one on the US Billboard charts, ABBA's first US chart-topping album. During the band's heyday, the highest album chart position they had ever achieved in America was number 14. In November 2008, all eight studio albums, together with a ninth of rare tracks, were released as The Albums. It hit several charts, peaking at number-four in Sweden and reaching the Top 10 in several other European territories.
In 2008, Sony Computer Entertainment Europe, in collaboration with Universal Music Group Sweden AB, released SingStar ABBA on both the PlayStation 2 and PlayStation 3 games consoles, as part of the SingStar music video games. The PS2 version features 20 ABBA songs, while 25 songs feature on the PS3 version.
On 22 January 2009, Fältskog and Lyngstad appeared together on stage to receive the Swedish music award "Rockbjörnen" (for "lifetime achievement"). In an interview, the two women expressed their gratitude for the honorary award and thanked their fans. On 25 November 2009, PRS for Music announced that the British public voted ABBA as the band they would most like to see re-form. On 27 January 2010, ABBAWORLD, a 25-room touring exhibition featuring interactive and audiovisual activities, debuted at Earls Court Exhibition Centre in London. According to the exhibition's website, ABBAWORLD is "approved and fully supported" by the band members.
"Mamma Mia" was released as one of the first few non-premium song selections for the online RPG game Bandmaster. On 17 May 2011, "Gimme! Gimme! Gimme!" was added as a non-premium song selection for the Bandmaster Philippines server. On 15 November 2011, Ubisoft released a dancing game called ABBA: You Can Dance for the Wii. In January 2012, Universal Music announced the re-release of ABBA's final album The Visitors, featuring a previously unheard track "From a Twinkling Star to a Passing Angel".
A book titled ABBA: The Official Photo Book was published in early 2014 to mark the 40th anniversary of the band's Eurovision victory. The book reveals that part of the reason for the band's outrageous costumes was that Swedish tax laws at the time allowed the cost of garish outfits that were not suitable for daily wear to be tax deductible.
2016–2024: Reunion, Voyage, and ABBAtars
On 20 January 2016, all four members of ABBA made a public appearance at Mamma Mia! The Party in Stockholm. On 6 June 2016, the quartet appeared together at a private party at Berns Salonger in Stockholm, which was held to celebrate the 50th anniversary of Andersson and Ulvaeus's first meeting. Fältskog and Lyngstad performed live, singing "The Way Old Friends Do" before they were joined on stage by Andersson and Ulvaeus.
British manager Simon Fuller announced in a statement in October 2016 that the group would be reuniting to work on a new "digital entertainment experience". The project would feature the members in their "life-like" avatar form, called ABBAtars, based on their late 1970s tour and would be set to launch by the spring of 2019.
In May 2017, a sequel to the 2008 movie Mamma Mia!, titled Mamma Mia! Here We Go Again, was announced; the film was released on 20 July 2018. Cher, who appeared in the movie, also released Dancing Queen, an ABBA cover album, in September 2018. In June 2017, a blue plaque was unveiled outside Brighton Dome to commemorate their 1974 Eurovision win.
On 27 April 2018, all four original members of ABBA made a joint announcement that they had recorded two new songs, titled "I Still Have Faith in You" and "Don't Shut Me Down", to feature in a TV special set to air later that year. In September 2018, Ulvaeus stated that the two new songs, as well as the TV special, now called ABBA: Thank You for the Music, An All-Star Tribute, would not be released until 2019. The TV special was scrapped later in 2018, as Andersson and Ulvaeus rejected Fuller's project and instead partnered with the visual effects company Industrial Light & Magic to prepare the ABBAtars for a music video and a concert. In January 2019, it was revealed that neither song would be released before the summer. Andersson hinted at the possibility of a third song.
In June 2019, Ulvaeus announced that the first new song and video containing the ABBAtars would be released in November 2019. In September, he stated in an interview that there were now five new ABBA songs to be released in 2020. In early 2020, Andersson confirmed that he was aiming for the songs to be released in September 2020.
In April 2020, Ulvaeus gave an interview saying that in the wake of the COVID-19 pandemic, the avatar project had been delayed. Five out of the eight original songs written by Benny for the new album had been recorded by the two female members, and the release of a new £15 million music video with new unseen technology was under consideration. In May 2020, it was announced that ABBA's entire studio discography would be released on coloured vinyl for the first time, in a box set titled ABBA: The Studio Albums. In July 2020, Ulvaeus revealed that the release of the new ABBA recordings had been delayed until 2021.
On 22 September 2020, all four ABBA members reunited at Ealing Studios in London to continue working on the avatar project and filming for the tour. Ulvaeus confirmed that the avatar tour would be scheduled for 2022. When questioned if the new recordings were definitely coming out in 2021, Björn said "There will be new music this year, that is definite, it's not a case anymore of it might happen, it will happen."
On 26 August 2021, a new website was launched, with the title ABBA Voyage. On the page, visitors were prompted to subscribe "to be the first in line to hear more about ABBA Voyage". Simultaneously with the launch of the webpage, new ABBA Voyage social media accounts were launched, and billboards around London started to appear, all showing the date "02.09.21", leading to expectation of what was to be revealed on that date. On 29 August, the band officially joined TikTok with a video of Benny Andersson playing "Dancing Queen" on the piano, and media reported on a new album to be announced on 2 September. On that date, Voyage, their first new album in 40 years, was announced to be released on 5 November 2021, along with ABBA Voyage, a concert residency in a custom-built venue at Queen Elizabeth Olympic Park in London featuring the motion capture digital avatars of the four band members alongside a 10-piece live band, starting 27 May 2022. Fältskog stated that the Voyage album and concert residency are likely to be their last activity as a group.
The announcement of the new album was accompanied by the release of the singles "I Still Have Faith in You" and "Don't Shut Me Down". The music video for "I Still Have Faith in You", featuring footage of the band during their performing years and a first look at the ABBAtars, earned over a million views in its first three hours. "Don't Shut Me Down" became the first ABBA release since October 1978 to top the singles chart in Sweden. In October 2021, the third single "Just a Notion" was released, and it was announced that ABBA would split for good after the release of Voyage. However, in an interview with BBC Radio 2 on 11 November, Lyngstad stated "don't be too sure" that Voyage is the final ABBA album. Also, in an interview with BBC News on 5 November, Andersson stated "if they [the ladies] twist my arm I might change my mind." The fourth single from the album, "Little Things", was released on 3 December.
In May 2022, after the premiere of ABBA Voyage, Andersson stated in an interview with Variety that "nothing is going to happen after this", confirming the residency as ABBA's final group collaboration. In April 2023, longtime ABBA guitarist Lasse Wellander died at the age of 70; Wellander played on seven of the group's nine studio albums, including Voyage.
On 21 March 2024, all four members of ABBA were appointed Commander, First Class, of the Royal Order of Vasa by King Carl XVI Gustaf of Sweden. This was the first time in almost 50 years that a Swedish Royal Order of Knighthood had been bestowed on Swedish citizens, and the occasion coincided with the 50th anniversary of ABBA winning the Eurovision Song Contest. ABBA shared the honour with nine other people. They ruled out a reunion at the Eurovision Song Contest 2024, held in their native Sweden; however, during the grand final of the contest, a clip from ABBA Voyage was shown, combined with archival footage of their 1974 performance of "Waterloo" at the contest and with Charlotte Perrelli, Carola and Conchita Wurst performing "Waterloo" on stage as part of the interval act.
Artistry
Recording process
ABBA were perfectionists in the studio, working on tracks until they got them right rather than leaving them to come back to later. They spent the bulk of their time in the studio; in separate 2021 interviews, Ulvaeus estimated that they toured for only about six months in total, while Andersson said they played fewer than 100 shows during the band's career. However, counting the shorter 30-to-60-minute concerts of their Folkpark tours, the group in fact played over 200 shows.
The band created a basic rhythm track with a drummer, guitarist and bass player, and overlaid other arrangements and instruments. Vocals were then added, and orchestra overdubs were usually left until last.
Fältskog and Lyngstad contributed ideas at the studio stage. Andersson and Ulvaeus played them the backing tracks and they made comments and suggestions. According to Fältskog, she and Lyngstad had the final say in how the lyrics were shaped.
After vocals and overdubs were done, the band took up to five days to mix a song.
Fashion, style, videos, advertising campaigns
ABBA was widely noted for the colourful and trend-setting costumes its members wore. The reason for the wild costumes was Swedish tax law: the cost of the clothes was deductible only if they could not be worn other than for performances. In their early years, group member Anni-Frid Lyngstad designed and even hand sewed the outfits. Later, as their success grew, they used professional theatrical clothes designer Owe Sandström together with tailor Lars Wigenius with Lyngstad continuing to suggest ideas while co-ordinating the outfits with concert set designs. Choreography by Graham Tainton also contributed to their performance style.
The videos that accompanied some of the band's biggest hits are often cited as being among the earliest examples of the genre. Most of ABBA's videos (and ABBA: The Movie) were directed by Lasse Hallström, who would later direct the films My Life as a Dog, The Cider House Rules and Chocolat.
ABBA made videos because their songs were hits in many different countries and personal appearances were not always possible. This was also done in an effort to minimise travelling, particularly to countries that would have required extremely long flights. Fältskog and Ulvaeus had two young children and Fältskog, who was also afraid of flying, was very reluctant to leave her children for such a long time. ABBA's manager, Stig Anderson, realised the potential of showing a simple video clip on television to publicise a single or album, thereby allowing easier and quicker exposure than a concert tour. Some of these videos have become classics because of the 1970s-era costumes and early video effects, such as the grouping of the band members in different combinations of pairs, overlapping one singer's profile with the other's full face, and the contrasting of one member against another.
In 1976, ABBA participated in an advertising campaign to promote the Matsushita Electric Industrial Co.'s brand, National, in Australia. The campaign was also broadcast in Japan. Five commercial spots, each of approximately one minute, were produced, each presenting the "National Song" performed by ABBA using the melody and instrumental arrangements of "Fernando" and revised lyrics.
Political use of ABBA's music
John McCain used the song "Take a Chance on Me" for his 2008 presidential campaign. McCain publicly expressed his liking of the band.
In September 2010, band members Andersson and Ulvaeus criticised the right-wing Danish People's Party (DF) for using the ABBA song "Mamma Mia" (with modified lyrics referencing Pia Kjærsgaard) at rallies. The band threatened to file a lawsuit against the DF, saying they never allowed their music to be used politically and that they had absolutely no interest in supporting the party. Their record label Universal Music later stated that no legal action would be taken because an agreement had been reached.
In August 2024 after Donald Trump played several of their songs and used footage of the group at a campaign rally, ABBA demanded he stop using their music. Their record company, Universal Music, said they had not been asked for permission to use ABBA music or videos by the Trump campaign and that footage from the event must be "immediately taken down and removed".
Success in the United States
During their active career, from 1972 to 1982, 20 of ABBA's singles entered the Billboard Hot 100; 14 of these made the Top 40 (13 on the Cashbox Top 100), with 10 making the Top 20 on both charts. A total of four of those singles reached the Top 10, including "Dancing Queen", which reached number one in April 1977. While "Fernando" and "SOS" did not break the Top 10 on the Billboard Hot 100 (reaching number 13 and 15 respectively), they did reach the Top 10 on Cashbox ("Fernando") and Record World ("SOS") charts. Both "Dancing Queen" and "Take a Chance on Me" were certified gold by the Recording Industry Association of America for sales of over one million copies each.
The group also had 12 Top 20 singles on the Billboard Adult Contemporary chart with two of them, "Fernando" and "The Winner Takes It All", reaching number one. "Lay All Your Love on Me" was ABBA's fourth number-one single on a Billboard chart, topping the Hot Dance Club Play chart.
Ten ABBA albums have made their way into the top half of the Billboard 200 album chart, with eight reaching the Top 50, five reaching the Top 20 and one reaching the Top 10. In November 2021, Voyage became ABBA's highest-charting album on the Billboard 200 peaking at No. 2. Five albums received RIAA gold certification (more than 500,000 copies sold), while three acquired platinum status (selling more than one million copies).
The compilation album ABBA Gold: Greatest Hits topped the Billboard Top Pop Catalog Albums chart in August 2008 (15 years after it was first released in the US in 1993), becoming the group's first number-one album ever on any of the Billboard album charts. It has sold 6 million copies there.
On 15 March 2010, ABBA was inducted into the Rock and Roll Hall of Fame by Bee Gees members Barry Gibb and Robin Gibb. The ceremony was held at the Waldorf Astoria Hotel in New York City. The group were represented by Anni-Frid Lyngstad and Benny Andersson.
In November 2021, the group received a Grammy nomination for Record of the Year. The single, "I Still Have Faith in You", from the album, Voyage, was their first ever nomination. In November 2022, "Don't Shut Me Down", also from Voyage, was nominated for Best Pop Duo/Group Performance.
Saturday Night Live featured a sketch that promoted a fictional ABBA album, which took pre-existing songs and reworked their lyrics to reference common Christmas traditions in the United States. Episode host Kate McKinnon and cast member Bowen Yang were joined by Maya Rudolph and Kristen Wiig, both former cast members of the show. The episode aired on 16 December 2023.
Members
Agnetha Fältskog – lead and backing vocals
Anni-Frid "Frida" Lyngstad – lead and backing vocals
Björn Ulvaeus – guitars, backing and lead vocals
Benny Andersson – keyboards, synthesizers, piano, accordion, backing and lead vocals
The members of ABBA were married as follows: Agnetha Fältskog and Björn Ulvaeus from 1971 to 1979; Benny Andersson and Anni-Frid Lyngstad from 1978 to 1981. For their subsequent marriages, see their articles.
In addition to the four members of ABBA, other musicians regularly played on their studio recordings, live appearances and concert performances. These include:
Rutger Gunnarsson – bass guitar, string arrangements (1972–1982; died 2015)
Ola Brunkert – drums (1972–1981; died 2008)
Mike Watson – bass guitar (1972–1980)
Janne Schaffer – electric lead guitar (1972–1982)
Roger Palm – drums (1972–1979; died 2024)
Malando Gassama – percussion (1973–1979; died 1999)
Lasse Wellander – electric lead guitar (1974–1982, 2017–2021; died 2023)
Anders Eljas – keyboards, orchestration (1977)
Åke Sundqvist – percussion (1978–1982)
Per Lindvall – drums (1980–1982, 2017–2021)
Discography
Studio albums
Ring Ring (1973)
Waterloo (1974)
ABBA (1975)
Arrival (1976)
The Album (1977)
Voulez-Vous (1979)
Super Trouper (1980)
The Visitors (1981)
Voyage (2021)
Tours
Concert tours
Swedish Folkpark Tour (1973)
European Tour (1974–1975)
European & Australian Tour (1977)
ABBA: The Tour (1979–1980)
Concert residencies
ABBA Voyage (2022–2025)
Awards and nominations
Documentaries
Eaton, Andrew (producer) A for ABBA. BBC, 20 July 1993
Thierry Lecuyer, Jean-Marie Potiez: Thank You ABBA. Willow Wil Studios/A2C Video, 1993
Barry Barnes: ABBA − The History. Polar Music International AB, 1999
Chris Hunt: The Winner Takes it All − The ABBA Story. Littlestar Services/Iambic Productions, 1999
Steve Cole, Chris Hunt: Super Troupers − Thirty Years of ABBA. BBC, 2004
The Joy of ABBA. BBC 4, 27 December 2013
Carl Magnus Palm, Roger Backlund: ABBA – When Four Became One. SVT, 2 January 2012
Carl Magnus Palm, Roger Backlund: ABBA – Absolute Image. SVT, 2 January 2012
Crocker, Matthew & McElroy, Rebecca (directors) ABBA: Bang A Boomerang. Gulliver Media Australia/Bright Films, 2012
ABBA: When All Is Said and Done, Channel 5, 2017
. Sunday Night (7 News), 1 October 2019
Chetty, Dhivya Kate (producer/director) When Abba Came to Britain. BBC/Wise Owl Films, 6 April 2024
McLaughlin, Luke & Griffin, Stan (producers/directors) ABBA: How They Won Eurovision. Channel 5/Viacom International, 2024
Rogan, James (director) ABBA: Against The Odds. Rogan Productions, 2024
Documentaries often profess to show the "real ABBA" and may employ several methods of legitimising such claims, such as the use of archival documents, testimonies from "music and cultural 'experts'", and interviews with the group members and fans.
See also
ABBA: The Museum
ABBA City Walks – Stockholm City Museum
ABBAMAIL
List of ABBA tribute albums
List of best-selling music artists
List of Swedes in music
Music of Sweden
Popular music in Sweden
References
Notes
Citations
Bibliography
Further reading
Benny Andersson, Björn Ulvaeus, Judy Craymer: Mamma Mia! How Can I Resist You?: The Inside Story of Mamma Mia! and the Songs of ABBA. Weidenfeld & Nicolson, 2006
Carl Magnus Palm. ABBA – The Complete Recording Sessions (1994)
Carl Magnus Palm (2000). From "ABBA" to "Mamma Mia!"
Elisabeth Vincentelli: ABBA Treasures: A Celebration of the Ultimate Pop Group. Omnibus Press, 2010,
Oldham, Andrew, Calder, Tony & Irvin, Colin (1995) "ABBA: The Name of the Game",
Potiez, Jean-Marie (2000). ABBA – The Book
Simon Sheridan: The Complete ABBA. Titan Books, 2012,
Anna Henker (ed.), Astrid Heyde (ed.): Abba – Das Lexikon. Northern Europe Institut, Humboldt-University Berlin, 2015 (German)
Steve Harnell (ed.): Classic Pop Presents Abba: A Celebration. Classic Pop Magazine (special edition), November 2016
External links
The Secret Majesty of ABBA. Variety, 22 July 2018
ABBA's Essential, Influential Melancholy. NPR, 23 May 2015
What's Behind ABBA's Staying Power?. Smithsonian, 20 July 2018
ABBA – The Articles – ABBA news from throughout the world
1972 establishments in Sweden
Atlantic Records artists
English-language musical groups from Sweden
Epic Records artists
Eurodisco groups
Eurovision Song Contest entrants
Eurovision Song Contest winners
Melodifestivalen winners
Musical groups disestablished in 1982
Musical groups established in 1972
Musical groups from Stockholm
Musical groups reestablished in 2016
Swedish musical quartets
Palindromes
RCA Records artists
Schlager groups
Swedish dance music groups
Swedish pop music groups
Swedish pop rock music groups
Swedish-language musical groups
Swedish co-ed groups
German-language musical groups from Sweden
French-language musical groups from Sweden
Spanish-language musical groups of Sweden
Mixed-gender bands
Virtual avatar acts | ABBA | [
"Physics"
] | 18,261 | [
"Symmetry",
"Palindromes"
] |
896 | https://en.wikipedia.org/wiki/Argon | Argon is a chemical element; it has symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third most abundant gas in Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust.
Nearly all argon in Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas.
The name "argon" is derived from the Greek word , neuter singular form of meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990.
Argon is extracted industrially by the fractional distillation of liquid air. It is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. It is also used in incandescent and fluorescent lighting, and other gas-discharge tubes. It makes a distinctive blue-green gas laser. It is also used in fluorescent glow starters.
Characteristics
Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature.
Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+, and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculation predicts several more argon compounds that should be stable but have not yet been synthesized.
History
Argon (from the Greek ἀργόν, neuter singular form of ἀργός, meaning "lazy" or "inactive") is named in reference to its chemical inactivity. This chemical property of this first noble gas to be discovered impressed the namers. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785.
Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon.
Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements.
Prior to 1957, the symbol for argon was "A". This was changed to Ar after the International Union of Pure and Applied Chemistry published the work Nomenclature of Inorganic Chemistry in 1957.
Occurrence
Argon constitutes 0.934% by volume and 1.288% by mass of Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively.
Isotopes
The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating.
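For readers who want the quantitative link, the standard K–Ar age equation (a general textbook form, not a formula specific to this article) follows from this decay scheme:
\[
  t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\lambda}{\lambda_{\mathrm{EC}}}\,\frac{{}^{40}\mathrm{Ar}^{*}}{{}^{40}\mathrm{K}}\right)
\]
where λ is the total decay constant of 40K, λ_EC its electron-capture branch, and 40Ar* the radiogenic argon retained by the rock; with the 11.2% branching ratio above, λ/λ_EC ≈ 1/0.112 ≈ 8.9.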
In Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created from neutron capture by 40Ca followed by alpha particle emission as a result of subsurface nuclear explosions. It has a half-life of 35 days.
Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes.
The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar.
The predominance of radiogenic is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table).
Compounds
Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975. However, it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki, by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery prompted the recognition that argon could form weakly bound compounds, even though it was not the first. It is stable up to 17 kelvins (−256 °C). The metastable dication ArCF2^2+, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in the interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space.
Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa.
Production
Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year.
Applications
Argon has several desirable properties:
Argon is a chemically inert gas.
Argon is the cheapest alternative when nitrogen is not sufficiently inert.
Argon has low thermal conductivity.
Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications.
Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. It is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of its applications arise simply because it is inert and relatively cheap.
Industrial processes
Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning.
For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium.
Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life.
Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam.
Scientific research
Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar contamination. Most of the argon in Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, the underground Ar, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine-grained three-dimensional imaging of neutrino interactions.
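The advantage of underground argon can be illustrated with simple first-order decay: once argon is shielded from cosmogenic production, its 39Ar content falls with the 269-year half-life quoted above. A minimal Python sketch of that estimate (the shielding times below are hypothetical, chosen only to show the scale):

import math

HALF_LIFE_AR39_YEARS = 269.0  # half-life of 39Ar quoted in the text above
DECAY_CONSTANT = math.log(2) / HALF_LIFE_AR39_YEARS  # per year

def remaining_fraction(years_shielded):
    """Fraction of the original 39Ar left after a period shielded from cosmic rays."""
    return math.exp(-DECAY_CONSTANT * years_shielded)

# Illustrative (hypothetical) shielding times, for scale only
for years in (269, 1000, 5000):
    print(f"{years:>5} years underground -> {remaining_fraction(years):.4%} of 39Ar remains")

After a few thousand years underground, essentially none of the cosmogenic 39Ar remains, which is why deep sources yield low-background argon.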
At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials.
Preservative
Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon.
In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry.
Argon is sometimes used as the propellant in aerosol cans.
Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage.
Since 2002, the American National Archives has stored important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced.
Laboratory equipment
Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus.
Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication.
Medical use
Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient.
Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects.
Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood.
Lighting
Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers.
Miscellaneous uses
Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity.
Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure.
Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating are used to date sedimentary, metamorphic, and igneous rocks.
Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse.
Safety
Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling.
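The 38% figure can be checked from molar masses: at the same temperature and pressure the density ratio of two ideal gases is approximately the ratio of their molar masses, taking about 39.95 g/mol for argon and about 28.96 g/mol for dry air (standard values, not given in this article):
\[
  \frac{\rho_{\mathrm{Ar}}}{\rho_{\mathrm{air}}} \approx \frac{M_{\mathrm{Ar}}}{M_{\mathrm{air}}} = \frac{39.95\ \mathrm{g/mol}}{28.96\ \mathrm{g/mol}} \approx 1.38
\]
This is why leaked argon pools near the floor of enclosed spaces rather than dispersing upward.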
See also
Industrial gas
Oxygen–argon ratio, a ratio of two physically similar gases, which has importance in various sectors.
References
Further reading
On triple point pressure at 69 kPa.
On triple point pressure at 83.8058 K.
External links
Argon at The Periodic Table of Videos (University of Nottingham)
USGS Periodic Table – Argon
Diving applications: Why Argon?
Chemical elements
E-number additives
Noble gases
Industrial gases | Argon | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,963 | [
"Noble gases",
"Chemical elements",
"Nonmetals",
"Industrial gases",
"Chemical process engineering",
"Atoms",
"Matter"
] |
897 | https://en.wikipedia.org/wiki/Arsenic | Arsenic is a chemical element with the symbol As and the atomic number 33. It is a metalloid and one of the pnictogens, and therefore shares many properties with its group 15 neighbors phosphorus and antimony. Arsenic is a notoriously toxic heavy metal. It occurs naturally in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry.
The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is also a common n-type dopant in semiconductor electronic devices, and a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds.
Arsenic has been known since ancient times to be poisonous to humans. However, a few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic have been proposed to be an essential dietary element in rats, hamsters, goats, and chickens. Research has not been conducted to determine whether small amounts of arsenic may play a role in human metabolism. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world.
The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic number 1 in its 2001 prioritized list of hazardous substances at Superfund sites. Arsenic is classified as a Group-A carcinogen.
Characteristics
Physical characteristics
The three most common arsenic allotropes are grey, yellow, and black arsenic, with grey being the most common. Grey arsenic (α-As, space group R3̄m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, grey arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Grey arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Grey arsenic is also the most stable form.
Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into grey arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus.
Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. Black arsenic is also a poor electrical conductor.
Arsenic sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 615 °C. The triple point is at 3.63 MPa and 817 °C.
Isotopes
Arsenic occurs in nature as one stable isotope, 75As, and is therefore called a monoisotopic element. As of 2024, at least 32 radioisotopes have also been synthesized, ranging in atomic mass from 64 to 95. The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=26.26 hours), and 77As (t1/2=38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions.
At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds.
Chemistry
Arsenic has an electronegativity and ionization energies similar to those of its lighter pnictogen congener phosphorus and therefore readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic forms arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion, as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the +5 oxidation state than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers.
Compounds
Compounds of arsenic resemble, in some respects, those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square [As4]4− ions in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons.
Inorganic compounds
One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, the presence of light, and certain catalysts (namely aluminium), increase the rate of decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen.
Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5 which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts, known as arsenates, are a major source of arsenic contamination of groundwater in regions with high levels of naturally-occurring arsenic minerals. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons.
The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3.
A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. Arsenic has a formal oxidation state of +2 in realgar (As4S4), which features As–As bonds so that the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes.
All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.)
Alloys
Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide.
Organoarsenic compounds
A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive, garlic-like odor; it is very toxic.
Occurrence and production
Arsenic is the 53rd most abundant element in the Earth's crust, comprising about 1.5 parts per million (0.00015%). Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Arsenic is the 22nd most abundant element in seawater and ranks 41st in abundance in the universe.
Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment.
In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust.
On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from molten lead-arsenic mixture.
History
The word arsenic has its origin in the Syriac word zarnika, from Arabic al-zarnīḵ 'the orpiment', based on Persian zar ("gold") from the word zarnikh, meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek (using folk etymology) as arsenikon (ἀρσενικόν) – a neuter form of the Greek adjective arsenikos (ἀρσενικός), meaning "male", "virile".
Latin-speakers adopted the Greek term as arsenicum, which in French ultimately became arsenic, whence the English word "arsenic".
Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, the substance was frequently used for murder until the advent in the 1830s of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". Arsenic became known as "the inheritance powder" due to its use in killing family members in the Renaissance era.
During the Bronze Age, arsenic was melted with copper to make arsenical bronze.
Jabir ibn Hayyan described the isolation of arsenic before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rarely.
Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt through the reaction of potassium acetate with arsenic trioxide.
In the Victorian era, women would eat "arsenic" ("white arsenic" or arsenic trioxide) mixed with vinegar and chalk to improve the complexion of their faces, making their skin paler (to show they did not work in the fields). The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. From the late-18th century, wallpaper production began to use dyes made from arsenic, which was thought to increase the pigment's brightness. One account of the illness and 1821 death of Napoleon I implicates arsenic poisoning involving wallpaper.
Two arsenic pigments have been widely used since their discovery – Paris Green in 1814 and Scheele's Green in 1775. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion. It was later replaced with Paris Green, another arsenic-based dye. With a better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942.
In small doses, soluble arsenic compounds act as stimulants, and were once popular as medicine by people in the mid-18th to 19th centuries; this use was especially prevalent for sport animals such as race horses or work dogs and continued into the 20th century.
A 2006 study of the remains of the Australian racehorse Phar Lap determined that its 1932 death was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system."
Applications
Agricultural
The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations).
Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out in the United States by 2013 in all agricultural activities except cotton farming.
The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite (As(III)) is more soluble than arsenate (As(V)) and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity.
Arsenic is used as a feed additive in poultry and swine production, in particular it was used in the U.S. until 2015 to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. In 2011, Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continued to sell nitarsone until 2015, primarily for use in turkeys.
Medical use
During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler), for treating diseases such as cancer or psoriasis. Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis in spite of their severe toxicity, since the disease is almost uniformly fatal if untreated. In 2000 the US Food and Drug Administration approved arsenic trioxide for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid.
A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland producing signal noise. Nanoparticles of arsenic have shown ability to kill cancer cells with lesser cytotoxicity than other arsenic formulations.
Alloys
The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light.
Military
After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice.
Other uses
Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets.
Arsenic is used in bronzing.
As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets.
Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments.
Arsenic is also used for taxonomic sample preservation. It was also used in embalming fluids historically.
Arsenic was used in the taxidermy process up until the 1980s.
Arsenic was used as an opacifier in ceramics, creating white glazes.
Until recently, arsenic was used in optical glass. Modern glass manufacturers have ceased using both arsenic and lead.
Biological role
Bacteria
Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr).
In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues.
In 2011, it was postulated that the Halomonadaceae strain GFAJ-1 could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent researcher groups.
Potential role in higher animals
Arsenic may be an essential trace mineral in birds, involved in the synthesis of methionine metabolites. However, the role of arsenic in bird nutrition is disputed, as other authors state that arsenic is toxic in small amounts.
Some evidence indicates that arsenic is an essential trace mineral in mammals.
Heredity
Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility.
The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation.
Biomethylation
Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 μg/day. Values about 1000 μg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic.
Environmental issues
Exposure
Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts.
During the Victorian era, arsenic was widely used in home decor, especially wallpapers. In Europe, an analysis based on 20,000 soil samples across all 28 countries shows that 98% of sampled soils have concentrations less than 20 mg/kg. In addition, the As hotspots are related to frequent fertilization and close proximity to mining activities.
Occurrence in drinking water
Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater, caused by the anoxic conditions of the subsurface. This groundwater was used after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand, in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, as indicated by a 2017 report in Science. Podgorski's team investigated more than 1,200 samples, and more than 66% exceeded the WHO minimum contamination level.
Since the 1980s, residents of the Ba Men region of Inner Mongolia, China have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 μg/L, suggesting that arsenic induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water.
A study by IIT Kharagpur found high levels of Arsenic in groundwater of 20% of India's land, exposing more than 250 million people. States such as Punjab, Bihar, West Bengal, Assam, Haryana, Uttar Pradesh, and Gujarat have highest land area exposed to arsenic.
In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 ppb drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits.
Low-level exposure to arsenic at concentrations of 100 ppb (i.e., above the 10 ppb drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists. The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus.
Some Canadians are drinking water that contains inorganic arsenic. Water from privately dug wells is most at risk for containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic.
Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb. Arsenic is itself a constituent of tobacco smoke.
Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation.
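As a rough consistency check using only the numbers in this paragraph, 2,000 additional bladder cancer cases among 80 million consumers corresponds to an excess risk of
\[
  \frac{2000}{8\times 10^{7}} = 2.5\times 10^{-5} \approx \frac{1}{40\,000},
\]
that is, about one case per 40,000 people at the 10 ppb level, before lung and skin cancers or higher exposures are counted.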
Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminium oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on in-situ remediation method (SAR Technology). This technology does not use any chemicals and arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap.
Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A recent 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water.
Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced.
Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes.
Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 μg/L. This may find applications in areas where the potable water is extracted from underground aquifers.
San Pedro de Atacama
For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity.
Hazard maps for contaminated groundwater
Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtains water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground.
Redox transformation of arsenic in natural waters
Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution.
Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic.
The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH below 2, 2–7, 7–11 and above 11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9.
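These pH boundaries follow from the successive acid dissociation constants of arsenic acid. A minimal Python sketch of the speciation logic, assuming approximate literature pKa values of about 2.2, 7.0 and 11.5 for H3AsO4 (assumed for illustration; they are not given in this article):

# Approximate pKa values of arsenic acid, H3AsO4 (assumed for illustration)
PKAS = [2.2, 7.0, 11.5]
SPECIES = ["H3AsO4", "H2AsO4-", "HAsO4 2-", "AsO4 3-"]

def dominant_arsenate_species(pH):
    """Return the dominant As(V) species at a given pH, using triprotic alpha fractions."""
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** (-pk) for pk in PKAS)
    # Unnormalised weights of H3AsO4, H2AsO4-, HAsO4 2-, AsO4 3-
    weights = [h ** 3, h ** 2 * k1, h * k1 * k2, k1 * k2 * k3]
    return SPECIES[weights.index(max(weights))]

for pH in (1, 4, 9, 12):
    print(f"pH {pH:>2}: dominant As(V) species is {dominant_arsenate_species(pH)}")

Running it reproduces the pattern above: H3AsO4 below about pH 2, H2AsO4− up to about pH 7, HAsO42− up to about pH 11, and AsO43− above that.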
Oxidation and reduction affects the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments.
The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic.
Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. So arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface which leads to the desorption of bound arsenic.
Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2 and produce organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria.
Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where sulfate reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria with rate constants ranging from 0.02 to 0.3 day−1.
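For scale, a first-order rate constant k corresponds to a half-life of ln 2 / k, so the microbial rate constants quoted above translate into As(V) reduction half-lives of roughly
\[
  t_{1/2} = \frac{\ln 2}{k}: \quad \frac{0.693}{0.3\ \mathrm{day^{-1}}} \approx 2.3\ \mathrm{days}, \qquad \frac{0.693}{0.02\ \mathrm{day^{-1}}} \approx 35\ \mathrm{days},
\]
far shorter than the month-to-year timescales reported for abiotic oxidation.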
Wood preservation in the US
As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole.
Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater.
Mapping of industrial releases in the US
One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources.
Bioremediation
Physical, chemical, and biological methods have been used to remediate arsenic contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the toxic form of arsenic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation but the disposal of contaminated plant material needs to be considered.
Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination.
Arsenic removal
Coagulation and flocculation are closely related processes common in arsenate removal from water. Due to the net negative charge carried by arsenate ions, they settle slowly or not at all due to charge repulsion. In coagulation, a positively charged coagulant, such as an iron or aluminium salt (commonly FeCl3, Fe2(SO4)3, or Al2(SO4)3), neutralizes the negatively charged arsenate, enabling it to settle. Flocculation follows, in which a flocculant bridges smaller particles and allows the aggregate to precipitate out of the water. However, such methods may not be efficient on arsenite, as As(III) exists as uncharged arsenious acid, H3AsO3, at near-neutral pH.
The major drawbacks of coagulation and flocculation are the costly disposal of arsenate-concentrated sludge and possible secondary contamination of the environment. Moreover, coagulants such as iron may produce ion contamination that exceeds safety levels.
Toxicity and precautions
Arsenic and many of its compounds are especially potent poisons (e.g. arsine). Small amounts of arsenic can be detected by pharmacopoeial methods, which include reduction of arsenic to arsine with the help of zinc; the arsine can then be confirmed with mercuric chloride paper.
Classification
Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC.
The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens.
Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [; As(V)] and arsenite [; As(III)]".
Legal limits, food, and drink
In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3.
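The occupational limits above are time-weighted averages. As a hedged illustration of how such an average is computed over a standard 8-hour shift, take a hypothetical worker exposed to 0.02 mg/m3 for 2 hours and 0.005 mg/m3 for the remaining 6 hours (illustrative figures, not regulatory examples):
\[
  \mathrm{TWA} = \frac{\sum_i C_i t_i}{8\ \mathrm{h}} = \frac{(0.02)(2) + (0.005)(6)}{8}\ \mathrm{mg/m^3} \approx 0.009\ \mathrm{mg/m^3},
\]
which would fall just below the 0.01 mg/m3 PEL quoted above.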
In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic) the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard.
Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram, or 1000 ppb). Concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake in 2005. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic.
In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior.
Consumer Reports recommended:
That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production;
That the FDA establish a legal limit for food;
That industry change production practices to lower arsenic levels, especially in food for children; and
That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then drain it off (reducing inorganic arsenic by about one third, along with a slight reduction in vitamin content).
Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice.
A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice.
Reducing arsenic content in rice
In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption.
Occupational exposure limits
Ecotoxicity
Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In such areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly drained soils.
Toxicity in animals
Biological mechanism
Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes.
Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration, and ATP synthesis. Hydrogen peroxide production is also increased, which is speculated to generate reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to result from necrotic cell death rather than apoptosis, since energy reserves are too depleted for apoptosis to occur.
Exposure risks and remediation
Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry.
The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites". The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic.
Treatment
Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the USA's Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer, in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity.
See also
Aqua Tofana
Arsenic and Old Lace
Grainger challenge
Hypothetical types of biochemistry
References
Bibliography
Further reading
External links
WHO fact sheet on arsenic
Arsenic Cancer Causing Substances, U.S. National Cancer Institute.
CTD's Arsenic page and CTD's Arsenicals page from the Comparative Toxicogenomics Database
Contaminant Focus: Arsenic by the EPA.
Environmental Health Criteria for Arsenic and Arsenic Compounds, 2001 by the WHO.
National Institute for Occupational Safety and Health – Arsenic Page
Chemical elements
Metalloids
Semimetals
Hepatotoxins
Pnictogens
Endocrine disruptors
IARC Group 1 carcinogens
Trigonal minerals
Minerals in space group 166
Teratogens
Fetotoxicants
Suspected testicular toxicants
Native element minerals
Chemical elements with rhombohedral structure | Arsenic | [
"Physics",
"Chemistry",
"Materials_science"
] | 10,952 | [
"Matter",
"Chemical elements",
"Endocrine disruptors",
"Materials",
"Condensed matter physics",
"Teratogens",
"Atoms",
"Semimetals"
] |
898 | https://en.wikipedia.org/wiki/Antimony | Antimony is a chemical element; it has symbol Sb () and atomic number 51. A lustrous grey metal or metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. The earliest known description of this metalloid in the West was written in 1540 by Vannoccio Biringuccio.
China is the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony from stibnite are roasting followed by reduction with carbon, or direct reduction of stibnite with iron.
The most common applications for metallic antimony are in alloys with lead and tin, which have improved properties for solders, bullets, and plain bearings. It improves the rigidity of lead-alloy plates in lead–acid batteries. Antimony trioxide is a prominent additive for halogen-containing flame retardants. Antimony is used as a dopant in semiconductor devices.
Characteristics
Properties
Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature but, if heated, it reacts with oxygen to produce antimony trioxide, Sb2O3.
Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to mark hard objects. Coins of antimony were issued in China's Guizhou in 1931; durability was poor, and minting was soon discontinued because of its softness and toxicity. Antimony is resistant to attack by acids.
The only stable allotrope of antimony under standard conditions is metallic, brittle, silver-white, and shiny. It crystallises in a trigonal cell, isomorphic with bismuth and the gray allotrope of arsenic, and is formed when molten antimony is cooled slowly. Amorphous black antimony is formed upon rapid cooling of antimony vapor, and is only stable as a thin film (thickness in nanometres); thicker samples spontaneously transform into the metallic form. It oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The supposed yellow allotrope of antimony, generated only by oxidation of stibine (SbH3) at −90 °C, is also impure and not a true allotrope; above this temperature and in ambient light, it transforms into the more stable black allotrope. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride, but it always contains appreciable chlorine and is not really an antimony allotrope. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs.
Elemental antimony adopts a layered structure (space group R3̄m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony.
Isotopes
Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. Antimony is the lightest element to have an isotope with an alpha decay branch, excluding 8Be and other light nuclides with beta-delayed alpha emission.
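As a quick consistency check, the two stable-isotope abundances quoted above reproduce antimony's standard atomic weight of about 121.76. The isotopic masses used in the sketch below are assumed reference values not given in this article, so it is illustrative only.

```python
# Sketch: standard atomic weight of antimony from its stable-isotope abundances.
# Abundances are those quoted in the text; the atomic masses (in u) are assumed
# reference values.

isotopes = {
    "121Sb": (120.904, 0.5736),   # (atomic mass in u, natural abundance)
    "123Sb": (122.904, 0.4264),
}

atomic_weight = sum(mass * abundance for mass, abundance in isotopes.values())
print(f"Calculated atomic weight: {atomic_weight:.2f} u")   # about 121.76 u
```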
Occurrence
The abundance of antimony in the Earth's crust is estimated at 0.2 parts per million, comparable to thallium at 0.5 ppm and silver at 0.07 ppm. It is the 63rd most abundant element in the crust. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3) which is the predominant ore mineral.
Compounds
Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +5 oxidation state is more common.
Oxides and hydroxides
Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts.
Antimonous acid is unknown, but the conjugate base sodium antimonite (NaSbO2) forms upon fusing sodium oxide and Sb2O3. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate Sb2O5·nH2O, forming salts containing the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides.
The most important antimony ore is stibnite (Sb2S3). Other sulfide minerals include pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are also known.
Halides
Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry.
The trifluoride SbF3 is prepared by the reaction of Sb2O3 with HF:
Sb2O3 + 6 HF → 2 SbF3 + 3 H2O
It is Lewis acidic and readily accepts fluoride ions to form the complex anions SbF4− and SbF52−. Molten SbF3 is a weak electrical conductor. The trichloride SbCl3 is prepared by dissolving Sb2S3 in hydrochloric acid:
Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S
Arsenic sulfides are not readily attacked by the hydrochloric acid, so this method offers a route to As-free Sb.
The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7").
Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl.
Antimonides, hydrides, and organoantimony compounds
Compounds in this class generally are described as derivatives of Sb3−. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3:
Sb3− + 3 H+ → SbH3
Stibine can also be produced by treating salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly.
Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include triphenylstibine (Sb(C6H5)3) and pentaphenylantimony (Sb(C6H5)5).
History
Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented.
An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable."
The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable".
The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony.
The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating by a current of air. It is thought that this produced metallic antimony.
Antimony was frequently described in alchemical manuscripts, including the Summa Perfectionis of Pseudo-Geber, written around the 14th century. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has been often incorrectly credited with the discovery of metallic antimony. The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio.
The metal antimony was known to German chemist Andreas Libavius in 1615 who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface.
With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals.
The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden.
Etymology
The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of that is uncertain, and all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, would mean "monk-killer", which is explained by the fact that many early alchemists were monks, and some antimony compounds were poisonous.
Another popular etymology is the hypothetical Greek word ἀντίμόνος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". However, ancient Greek would more naturally express the pure negative as α- ("not"). Edmund Oscar von Lippmann conjectured a hypothetical Greek word ανθήμόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence.
The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek.
The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium.
The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony.
The Egyptians called antimony mśdmt or stm.
The Arabic word for the substance, as opposed to the cosmetic, can appear as ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. The Greek word στίμμι (stimmi) is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm.
Production
Process
The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron:
Sb2S3 + 3 Fe → 2 Sb + 3 FeS
The sulfide is converted to an oxide by roasting. The product is further purified by vaporizing the volatile antimony(III) oxide, which is recovered. This sublimate is often used directly for the main applications, impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction:
2 Sb2O3 + 3 C → 4 Sb + 3 CO2
The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces.
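The iron-reduction equation above also fixes the theoretical antimony yield per unit of stibnite. The back-of-the-envelope sketch below assumes 1 kg of pure Sb2S3 and standard molar masses; real ores and concentrates are far less pure, so this is an upper bound rather than a process figure.

```python
# Sketch: theoretical antimony yield from Sb2S3 + 3 Fe -> 2 Sb + 3 FeS,
# assuming 1 kg of pure stibnite feed. Molar masses are standard rounded values.

M_SB, M_S, M_FE = 121.76, 32.06, 55.85        # g/mol

m_stibnite = 1000.0                            # g of Sb2S3 (assumed pure)
M_stibnite = 2 * M_SB + 3 * M_S                # ~339.7 g/mol
mol_stibnite = m_stibnite / M_stibnite

m_sb = mol_stibnite * 2 * M_SB                 # 2 mol Sb per mol Sb2S3
m_fe = mol_stibnite * 3 * M_FE                 # 3 mol Fe consumed per mol Sb2S3

print(f"Theoretical Sb yield: {m_sb:.0f} g")   # about 717 g
print(f"Scrap iron required:  {m_fe:.0f} g")   # about 493 g
```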
Top producers and production volumes
In 2022, according to the US Geological Survey, China accounted for 54.5% of total antimony production, followed in second place by Russia with 18.2% and Tajikistan with 15.5%.
Chinese production of antimony is expected to decline in the future as mines and smelters are closed down by the government as part of pollution control. An environmental protection law that went into effect in January 2015, together with the revised "Emission Standards of Pollutants for Stannum, Antimony, and Mercury", has raised the hurdles for economic production.
Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted.
Reserves
Supply risk
For antimony-importing regions, such as Europe and the U.S., antimony is considered to be a critical mineral for industrial manufacturing that is at risk of supply chain disruption. With global production coming mainly from China (74%), Tajikistan (8%), and Russia (4%), these sources are critical to supply.
European Union: Antimony is considered a critical raw material for defense, automotive, construction and textiles. The E.U. sources are 100% imported, coming mainly from Turkey (62%), Bolivia (20%) and Guatemala (7%).
United Kingdom: The British Geological Survey's 2015 risk list ranks antimony second highest (after rare earth elements) on the relative supply risk index.
United States: Antimony is a mineral commodity considered critical to the economic and national security. In 2022, no antimony was mined in the U.S.
Applications
Approximately 48% of antimony is consumed in flame retardants, 33% in lead–acid batteries, and 8% in plastics.
Flame retardants
Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed.
Alloys
Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. When casting it increases fluidity of the melt and reduces shrinkage during cooling. For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. For sailboats, lead keels are used to provide righting moment, ranging from 600 lbs to over 200 tons for the largest sailing superyachts; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes.
Other applications
Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens; antimony ions interact with oxygen, suppressing the tendency of the latter to form bubbles. The third application is pigments.
In the 1990s antimony was increasingly being used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide (InSb) is used as a material for mid-infrared detectors.
The material Ge2Sb2Te5 is used in phase-change memory, a type of computer memory.
Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was once used as an anti-schistosomal drug from 1919 on. It was subsequently replaced by praziquantel. Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals.
Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis. Early treatments used antimony(III) species (trivalent antimonials), but in 1922 Upendranath Brahmachari invented a much safer antimony(V) drug, and since then so-called pentavalent antimonials have been the standard first-line treatment. However, Leishmania strains in Bihar and neighboring regions have developed resistance to antimony. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination.
Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources.
The powder derived from crushed antimony sulfide (kohl) has been used for millennia as an eye cosmetic. Historically it was applied to the eyes with a metal rod and with one's spittle, and was thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Muslim countries.
Precautions
Antimony and many of its compounds are toxic, and the effects of antimony poisoning are similar to arsenic poisoning. The toxicity of antimony is far lower than that of arsenic; this might be caused by the significant differences of uptake, metabolism and excretion between arsenic and antimony. The uptake of antimony(III) or antimony(V) in the gastrointestinal tract is at most 20%. Antimony(V) is not quantitatively reduced to antimony(III) in the cell (in fact antimony(III) is oxidised to antimony(V) instead).
Since methylation of antimony does not occur, excretion of antimony(V) in urine is the main route of elimination. As with arsenic, the most serious effect of acute antimony poisoning is cardiotoxicity and the resulting myocarditis; however, it can also manifest as Adams–Stokes syndrome, which arsenic poisoning does not. Intoxication by antimony equivalent to 90 mg of antimony potassium tartrate dissolved from enamel has been reported to cause only short-term effects, whereas an intoxication with 6 g of antimony potassium tartrate was reported to result in death after three days.
Inhalation of antimony dust is harmful and in certain cases may be fatal; in small doses, antimony causes headaches, dizziness, and depression. Larger doses, such as those from prolonged skin contact, may cause dermatitis or damage the kidneys and the liver, causing violent and frequent vomiting and leading to death in a few days.
Antimony is incompatible with strong oxidizing agents, strong acids, halogen acids, chlorine, or fluorine. It should be kept away from heat.
Antimony leaches from polyethylene terephthalate (PET) bottles into liquids. While levels observed for bottled water are below drinking water guidelines, fruit juice concentrates (for which no guidelines are established) produced in the UK were found to contain up to 44.7 μg/L of antimony, well above the EU limits for tap water of 5 μg/L. The guidelines are:
World Health Organization: 20 μg/L
Japan: 15 μg/L
United States Environmental Protection Agency, Health Canada and the Ontario Ministry of Environment: 6 μg/L
EU and German Federal Ministry of Environment: 5 μg/L
The tolerable daily intake (TDI) proposed by WHO is 6 μg antimony per kilogram of body weight. The immediately dangerous to life or health (IDLH) value for antimony is 50 mg/m3.
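For context, the WHO tolerable daily intake above can be compared with the intake implied by the highest leaching figure mentioned for fruit juice concentrates. The body weight and daily consumption in the sketch below are assumed illustrative values, not data from this article.

```python
# Sketch: comparing an assumed daily beverage intake with the WHO tolerable
# daily intake (TDI) for antimony of 6 ug per kg of body weight.

TDI_UG_PER_KG = 6.0
body_weight_kg = 60.0    # assumed adult body weight
daily_volume_l = 1.5     # assumed daily consumption
conc_ug_per_l = 44.7     # highest UK fruit juice concentrate value quoted above

tdi_ug = TDI_UG_PER_KG * body_weight_kg
intake_ug = conc_ug_per_l * daily_volume_l

print(f"TDI for a {body_weight_kg:.0f} kg adult: {tdi_ug:.0f} ug/day")
print(f"Estimated intake: {intake_ug:.1f} ug/day "
      f"({100 * intake_ug / tdi_ug:.0f}% of the TDI)")
```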
Toxicity
Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans.
Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy or from accidental ingestion. It is unclear if antimony can enter the body through the skin. The presence of low levels of antimony in saliva may also be associated with dental decay.
Notes
References
Cited sources
External links
Public Health Statement for Antimony
International Antimony Association vzw (i2a)
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Antimony
Antimony at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Antimony
Antimony Mineral data and specimen images
Chemical elements
Metalloids
Native element minerals
Nuclear materials
Pnictogens
Trigonal minerals
Minerals in space group 166
Materials that expand upon freezing
Chemical elements with rhombohedral structure | Antimony | [
"Physics"
] | 5,545 | [
"Chemical elements",
"Materials",
"Nuclear materials",
"Atoms",
"Matter"
] |
899 | https://en.wikipedia.org/wiki/Actinium | Actinium is a chemical element; it has symbol Ac and atomic number 89. It was first isolated by Friedrich Oskar Giesel in 1902, who gave it the name emanium; the element got its name by being wrongly identified with a substance André-Louis Debierne found in 1899 and called actinium. The actinide series, a set of 15 elements between actinium and lawrencium in the periodic table, are named for actinium. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be isolated.
A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores as the isotope 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy.
History
André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium. Friedrich Oskar Giesel found in 1902 a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradicting chemical properties he claimed for the element at different times.
Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff. He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians as the discoverer, lost interest in the element and left the topic. Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89.
The name actinium originates from the Ancient Greek aktis, aktinos (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde.
Properties
Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation.
The first element of the actinides, actinium gave the set its name, much as lanthanum had done for the lanthanides. The actinides are much more diverse than the lanthanides and therefore it was not until 1945 that the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, the introduction of the actinides, was generally accepted after Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett).
Actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn] 6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. Although the 5f orbitals are unoccupied in an actinium atom, it can be used as a valence orbital in actinium complexes and hence it is generally considered the first 5f element by authors working on it. Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules.
Chemical compounds
Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent.
Oxides
Actinium oxide (Ac2O3) can be obtained by heating the hydroxide or the oxalate in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals.
Halides
Actinium trifluoride can be produced either in solution or by a solid-state reaction. The former is carried out at room temperature by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide yields the oxyfluoride AcOF. Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product.
AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F
Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at elevated temperatures. Similarly to the oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia.
Reaction of aluminium bromide and actinium oxide yields actinium tribromide:
Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3
and treating it with ammonium hydroxide results in the oxybromide AcOBr.
Other compounds
Actinium hydride was obtained by reduction of actinium trichloride with potassium, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain.
Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors for a few minutes results in black actinium sulfide, Ac2S3. It may possibly also be produced by the action of a mixture of hydrogen sulfide and carbon disulfide on actinium oxide.
Isotopes
Naturally occurring actinium is principally composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-three radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days, and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives of less than 10 hours, and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known metastable states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac.
Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life, emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 203 u (203Ac) to 236 u (236Ac).
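The 21.772-year half-life corresponds to a very high activity per unit mass, which is why even small 227Ac samples demand shielded handling although, as noted above, the isotope is traced through its decay products rather than its own weak emissions. A rough estimate, using standard constants that are not given in this article:

```python
# Sketch: specific activity of 227Ac estimated from its 21.772-year half-life.
# Avogadro's number and the Ci-to-Bq conversion are standard constants.

import math

half_life_s = 21.772 * 365.25 * 24 * 3600     # half-life in seconds
decay_constant = math.log(2) / half_life_s    # per second
atoms_per_gram = 6.022e23 / 227.0             # approximate molar mass of 227Ac

specific_activity = decay_constant * atoms_per_gram   # Bq per gram
print(f"Specific activity: {specific_activity:.2e} Bq/g "
      f"(about {specific_activity / 3.7e10:.0f} Ci/g)")
```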
Occurrence and synthesis
Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U.
The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical. The most concentrated actinium sample prepared from raw material consisted of 7 micrograms of 227Ac in less than 0.1 milligrams of La2O3, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor.
^{226}_{88}Ra + ^{1}_{0}n -> ^{227}_{88}Ra ->[\beta^-][42.2 \ \ce{min}] ^{227}_{89}Ac
The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and nuclear fusion, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant.
225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac which however decays with a half-life of 29 hours and thus does not contaminate 225Ac.
Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum. Higher temperatures resulted in evaporation of the product, and lower ones led to an incomplete transformation. Lithium was chosen among other alkali metals because its fluoride is the most volatile.
Applications
Owing to its scarcity, high price and radioactivity, 227Ac currently has no significant industrial use, but 225Ac is currently being studied for use in cancer treatments such as targeted alpha therapies.
227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction:
^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma
The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations.
225Ac is applied in medicine to produce 213Bi in a reusable generator or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but the excretion from the body remained slow. Much better results were obtained with such chelating agents as HEHA or DOTA coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved to be effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers.
The medium half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope in modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows estimating the mixing rates. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior.
There are theoretical predictions that AcHx hydrides (in this case under very high pressure) are candidates for a near-room-temperature superconductor, as they are predicted to have a Tc significantly higher than that of H3S, possibly near 250 K.
Precautions
227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box. When actinium trichloride is administered intravenously to rats, about 33% of actinium is deposited into the bones and 50% into the liver. Its toxicity is comparable to, but slightly lower than, that of americium and plutonium. For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary.
See also
Actinium series
Notes
References
Bibliography
External links
Actinium at The Periodic Table of Videos (University of Nottingham)
NLM Hazardous Substances Databank – Actinium, Radioactive
Actinium in
Chemical elements
Chemical elements with face-centered cubic structure
Actinides | Actinium | [
"Physics"
] | 4,100 | [
"Chemical elements",
"Atoms",
"Matter"
] |
900 | https://en.wikipedia.org/wiki/Americium | Americium is a synthetic chemical element; it has symbol Am and atomic number 95. It is radioactive and a transuranic member of the actinide series in the periodic table, located under the lanthanide element europium and was thus named after the Americas by analogy.
Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer.
Americium is a relatively soft radioactive metal with a silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulates with time; this can cause a drift of some material properties over time, more noticeable in older samples.
History
Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series."
The new element was isolated from its oxides in a complex, multi-step process. First, a plutonium-239 nitrate solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).
Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was at first determined inaccurately and later corrected to 432.2 years.
The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h.
The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C.
Occurrence
The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of neutron capture and beta decay (238U → 239Pu → 240Pu → 241Am), though the quantities would be tiny and this has not been confirmed. Extraterrestrial long-lived 247Cm is probably also deposited on Earth and has 243Am as one of its intermediate decay products, but again this has not been confirmed.
Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike, (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until later, in 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland.
In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries per gram (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils.
Americium is produced mostly artificially, in small quantities for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their long-lived radioactivity is undesirable for disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, in which americium is first separated and then converted by neutron bombardment in special reactors into short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Americium is also one of the elements that have theoretically been detected in Przybylski's Star.
Synthesis and extraction
Isotope nucleosynthesis
Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price per gram of 241Am has remained almost unchanged, owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a considerably higher cost.
Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process:
^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu
The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am:
^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am
The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it beta-decays to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years.
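This ingrowth can be modelled with the standard two-member Bateman equations for a parent-daughter decay chain. The sketch below is a simplified illustration that ignores neutron flux and further decay products; the function name is arbitrary. It reproduces the two figures quoted above: roughly half of the 241Pu is gone after about 15 years, and the 241Am inventory peaks after roughly 70 years.

```python
import math

T_PU241 = 14.35   # Pu-241 half-life in years (from the text)
T_AM241 = 432.2   # Am-241 half-life in years (from the text)

lam_pu = math.log(2) / T_PU241
lam_am = math.log(2) / T_AM241

def am241_per_initial_pu241(t_years: float) -> float:
    """Two-member Bateman solution: Am-241 atoms at time t per initial Pu-241 atom."""
    return lam_pu / (lam_am - lam_pu) * (math.exp(-lam_pu * t_years) - math.exp(-lam_am * t_years))

# The Am-241 inventory peaks where its production and decay rates balance.
t_peak = math.log(lam_pu / lam_am) / (lam_pu - lam_am)

print(f"Pu-241 remaining after 15 years: {math.exp(-lam_pu * 15):.0%}")
print(f"Am-241 peaks after about {t_peak:.0f} years "
      f"at {am241_per_initial_pu241(t_peak):.0%} of the initial Pu-241 atoms")
```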
The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of the 241Am converts to 242Am and 10% to its nuclear isomer 242mAm.
Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux:
^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am
Metal generation
Most synthesis routes yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents, and a bis-triazinyl bipyridine complex was proposed in 2009 as a reagent that is highly selective for americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away.
Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten.
An alternative is the reduction of americium dioxide by metallic lanthanum or thorium.
Physical properties
In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but it slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3), but denser than europium (5.264 g/cm3), mostly because of its higher atomic mass. Americium is relatively soft and easily deformable, and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) or europium (826 °C), but lower than that of curium (1340 °C).
At ambient conditions, americium is present in its most stable α form, which has hexagonal crystal symmetry, space group P63/mmc, cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC, and is thus isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has face-centered cubic (fcc) symmetry, space group Fm-3m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. No further transitions are observed up to 52 GPa, except for the appearance of a monoclinic phase at pressures between 10 and 15 GPa. The literature is not consistent on the status of this phase, and it also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium.
As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structural defects is relatively low, as a broadening of X-ray diffraction peaks. This effect makes the temperature of americium samples, and some of their properties such as electrical resistivity, somewhat uncertain. For americium-241, the resistivity at 4.2 K increases with time from about 2 μOhm·cm to 10 μOhm·cm after 40 hours, and saturates at about 16 μOhm·cm after 140 hours. This effect is less pronounced at room temperature, due to annihilation of radiation defects; heating a sample that was kept at low temperature for hours back to room temperature also restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 μOhm·cm at liquid helium temperature to 69 μOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but different from plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room-temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium.
Americium is paramagnetic over a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic, with different values along the shorter a axis and the longer hexagonal c axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions has been measured, and from it the standard enthalpy of formation (ΔfH°) of the aqueous Am3+ ion and the standard potential of the Am3+/Am0 couple have been derived.
Chemical properties
Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states +2, +4, +5, +6 and +7 have also been studied; this is the widest range that has been observed among the actinide elements. In aqueous solution, Am3+ and Am4+ are yellow-reddish, while the oxo-species of americium in the +5, +6 and +7 states are yellow, brown and dark green, respectively. The absorption spectra have sharp peaks, due to f-f transitions, in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm.
Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas Am4+ ions are unstable in solution and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state.
The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the Am(V) dioxo cation is unstable with respect to disproportionation into Am(III) and Am(VI). The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states; in particular, some of their compounds are comparable to uranates, and the americium(VI) dioxo cation AmO22+ is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate.
Chemical compounds
Oxygen compounds
Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide has been prepared only in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium, which is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure.
The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L.
Halides
Halides of americium are known for the oxidation states +2, +3 and +4, where the +3 is most stable, especially in solutions.
Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. AmCl2 crystallizes with an orthorhombic lattice and AmBr2 with a tetragonal one. The dihalides can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I:
{Am} + \underset{mercury\ halide}{HgX2} ->[{} \atop 400 - 500 ^\circ \ce C] {AmX2} + {Hg}
Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions:
Am^3+ + 3F^- -> AmF3(v)
The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine:
2AmF3 + F2 -> 2AmF4
Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum similar to that of AmF4 but different from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and assigned to self-irradiation of americium by alpha particles.
Most americium(III) halides form hexagonal crystals, with slight variations in color and exact structure between the halogens. Thus, the chloride (AmCl3) is reddish, has a structure isotypic with uranium(III) chloride (space group P63/m), and melts at 715 °C. The fluoride is isotypic with LaF3 (space group P63/mmc) and the iodide with BiI3 (space group R-3). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium(III) chloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. The crystals are hygroscopic, yellow-reddish in color, and have a monoclinic crystal structure.
Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis:
AmCl3 + H2O -> AmOCl + 2HCl
Chalcogenides and pnictides
The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice.
Silicides and borides
Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elemental silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi and has orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd); it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or an inert atmosphere.
Organoamericium compounds
Analogous to uranocene, americium is predicted to form the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3.
Formation of complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and are therefore useful in its selective separation from lanthanides and other actinides.
Biological aspects
Americium is an artificial element of recent origin, and thus does not have a biological requirement. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. For example, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. In the laboratory, both americium and curium were found to support the growth of methylotrophs.
Fission
The isotope 242mAm (half-life 141 years) has the largest cross-section for absorption of thermal neutrons (5,700 barns), which results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such a small critical mass is favorable for portable nuclear weapons, but none based on 242mAm are known yet, probably because of its scarcity and high price. The critical masses of the two readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price still hinder the application of americium as a nuclear fuel in nuclear reactors.
There are proposals for very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals.
Isotopes
About 18 isotopes and 11 nuclear isomers are known for americium, with mass numbers 229, 230, and 232 through 247. There are two long-lived alpha-emitters: 243Am, with a half-life of 7,370 years, is the most stable isotope, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am, with a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with an odd number of neutrons have a relatively high rate of nuclear fission and a low critical mass.
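The quoted half-lives translate directly into specific activities, which is why even sub-gram samples of americium are intensely radioactive. The sketch below is a rough estimate that approximates each isotope's molar mass by its mass number; the constants and function name are illustrative.

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def specific_activity_bq_per_g(half_life_years: float, mass_number: int) -> float:
    """Specific activity A = lambda * N for one gram of a pure isotope."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    atoms_per_gram = AVOGADRO / mass_number   # molar mass approximated by the mass number
    return decay_constant * atoms_per_gram

for name, t_half, a in [("Am-241", 432.2, 241), ("Am-243", 7370, 243)]:
    activity = specific_activity_bq_per_g(t_half, a)
    print(f"{name}: {activity:.3g} Bq/g (~{activity / 3.7e10:.2f} Ci/g)")
```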
Americium-241 decays to 237Np by emitting alpha particles of five different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with discrete energies between 26.3 and 158.5 keV.
Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U.
Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U.
Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle.
Applications
Ionization-type smoke detector
Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation.
The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms.
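The neptunium fractions mentioned above follow from simple exponential decay of 241Am, treating every decayed atom as ending up as long-lived 237Np. A minimal sketch of that check:

```python
import math

HALF_LIFE_AM241_YR = 432.2   # from the text

def np_fraction(age_years: float) -> float:
    """Fraction of the original Am-241 atoms that have decayed to Np-237."""
    return 1.0 - math.exp(-math.log(2) * age_years / HALF_LIFE_AM241_YR)

for age in (19, 32):
    print(f"After {age} years: about {np_fraction(age):.1%} neptunium")
```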
Radionuclide
As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes.
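The 114.7 mW/g figure for 241Am can be roughly cross-checked from its half-life alone. The sketch below assumes a total energy release of about 5.64 MeV per decay (an approximate Q-value that is an assumption here, not a figure from the text) and recovers a value close to the quoted one.

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7
MEV_TO_JOULE = 1.602e-13

HALF_LIFE_YR = 432.2   # Am-241 half-life (from the text)
MASS_NUMBER = 241
Q_VALUE_MEV = 5.64     # assumed energy released per decay (approximate Q-value, not from the text)

decay_constant = math.log(2) / (HALF_LIFE_YR * SECONDS_PER_YEAR)   # per second
atoms_per_gram = AVOGADRO / MASS_NUMBER
power_w_per_g = decay_constant * atoms_per_gram * Q_VALUE_MEV * MEV_TO_JOULE

print(f"Estimated decay heat: {power_w_per_g * 1000:.0f} mW/g")
```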
Another proposed space-related application of americium is a fuel for space ships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. Small thickness avoids the problem of self-absorption of emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha-particles. The fission products of 242mAm can either directly propel the spaceship or they can heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator.
One more proposal which utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium, but on their charge; that is, the americium acts as the self-sustaining "cathode". A single 3.2 kg 242mAm charge of such a battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer.
In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function.
Neutron source
The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction:
^{241}_{95}Am -> ^{237}_{93}Np + ^{4}_{2}He + \gamma
^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma
The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations.
Production of other elements
Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In the nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm:
^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm
Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (as the 243Bk isotope) was first intentionally produced and identified by bombarding 241Am with alpha particles, in 1949, by the same Berkeley group, using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O.
Spectrometer
Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial uses. The 59.5409 keV gamma ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and negligible Compton continuum (at least three orders of magnitude lower intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function. This medical application is however obsolete.
Health concerns
As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth.
If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity.
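For a nuclide that is simultaneously excreted and decaying, the residence time in an organ is usually expressed as an effective half-life combining the biological and physical half-lives. The sketch below applies that standard formula to the figures quoted above; the formula itself is an illustration and is not stated in the text.

```python
PHYSICAL_HALF_LIFE_YR = 432.2   # Am-241 radioactive half-life (from the text)

def effective_half_life(biological_yr: float, physical_yr: float = PHYSICAL_HALF_LIFE_YR) -> float:
    """Combine biological clearance and radioactive decay: 1/T_eff = 1/T_bio + 1/T_phys."""
    return (biological_yr * physical_yr) / (biological_yr + physical_yr)

for organ, t_bio in [("bone", 50.0), ("liver", 20.0)]:
    print(f"{organ}: effective half-life of about {effective_half_life(t_bio):.1f} years")
```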
Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst case being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of unrelated pre-existing disease.
See also
Actinides in the environment
:Category:Americium compounds
Notes
References
Bibliography
Penneman, R. A. and Keenan T. K. The radiochemistry of americium and curium, University of California, Los Alamos, California, 1960
Further reading
Nuclides and Isotopes – 14th Edition, GE Nuclear Energy, 1989.
External links
Americium at The Periodic Table of Videos (University of Nottingham)
ATSDR – Public Health Statement: Americium
World Nuclear Association – Smoke Detectors and Americium
Chemical elements
Chemical elements with double hexagonal close-packed structure
Actinides
Carcinogens
Synthetic elements | Americium | [
"Physics",
"Chemistry",
"Environmental_science"
] | 8,399 | [
"Matter",
"Toxicology",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Carcinogens",
"Atoms",
"Radioactivity"
] |
901 | https://en.wikipedia.org/wiki/Astatine | Astatine is a chemical element; it has symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Consequently, a solid sample of the element has never been seen, because any macroscopic specimen would be immediately vaporized by the heat of its radioactivity.
The bulk properties of astatine are not known with certainty. Many of them have been estimated from its position on the periodic table as a heavier analog of fluorine, chlorine, bromine, and iodine, the four stable halogens. However, astatine also falls roughly along the dividing line between metals and nonmetals, and some metallic behavior has also been observed and predicted for it. Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine, but it also sometimes displays metallic characteristics and shows some similarities to silver.
The first synthesis of astatine was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley. They named it from the Ancient Greek word for 'unstable'. Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope, astatine-210, nor the medically useful astatine-211 occurs naturally; they are usually produced by bombarding bismuth-209 with alpha particles.
Characteristics
Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon. Most of its isotopes are very unstable, with half-lives of seconds or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than the longest-lived francium isotopes (205–211At) are in any case synthetic and do not occur in nature.
The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would immediately vaporize itself because of the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted.
Physical
Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow-green, bromine is red-brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal).
Astatine sublimes less readily than iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions.
The structure of solid astatine is unknown. As an analog of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure, it may well be a superconductor, like the similar high-pressure phase of iodine. Metallic astatine is expected to have a density of 8.91–8.95 g/cm3.
Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted; for example, its bond length and dissociation energy have been estimated, and its heat of vaporization (∆Hvap) is predicted to be 54.39 kJ/mol. Many values have been predicted for the melting and boiling points of astatine, but only for At2.
Chemical
The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10−10 mol·L−1. Some properties, such as anion formation, align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, and coprecipitating with metal sulfides in hydrochloric acid. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects, astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. It has been suggested that astatine can form a stable monatomic cation in aqueous solution.
Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol−1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionization energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionization energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008).
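The percentage comparisons in this paragraph can be reproduced directly from the quoted electron affinities. A small check, using only the kJ/mol values given above:

```python
# Electron affinities in kJ/mol, as quoted in the text.
ELECTRON_AFFINITY = {"F": 328, "Cl": 349, "Br": 325, "I": 295, "At": 233}

def percent_change(element: str, reference: str) -> float:
    """Signed percentage difference of `element` relative to `reference`."""
    ea, ref = ELECTRON_AFFINITY[element], ELECTRON_AFFINITY[reference]
    return (ea - ref) / ref * 100

for pair in [("Cl", "F"), ("Br", "Cl"), ("I", "Br"), ("At", "I")]:
    print(f"{pair[0]} vs {pair[1]}: {percent_change(*pair):+.1f}%")
```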
Compounds
Less reactive than iodine, astatine is the least reactive of the halogens; the chemical properties of tennessine, the next-heavier group 17 element, have not yet been investigated, however. Astatine compounds have been synthesized in nano-scale amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7.
Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides.
The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide.
Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms.
With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the last case) by sodium persulfate in a solution of perchloric acid. The species previously thought to be a simple oxyanion has since been determined to be a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well characterized astatate anion can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of astatate, such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate.
Astatine may form bonds to the other chalcogens; these include S7At+ and at least one other species with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium.
Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. An excess of iodide or bromide may lead to polyhalide anions, and in a chloride solution, mixed halide species may be produced via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl; similarly, related anionic chloro complexes may be produced. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride.
History
In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit eka – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries.
The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine, and astatine's radioactivity would have prevented him from handling it in the quantities he claimed. Moreover, astatine is not found in the thorium series, and the true identity of dakin is not known.
In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 by observing its X-ray emission lines. In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine-218, his means to detect it were too weak, by current standards, to enable correct identification; moreover, he could not perform chemical tests on the element. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work.
In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from , the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results.
Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as legitimately as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine has also been found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements to finally be recognized, citing, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered unnamed elements name those elements. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", coming from the Ancient Greek word meaning 'unstable', because of its propensity for radioactive decay, with the ending "-ine" found in the names of the four previously discovered halogens. The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element.
Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine ... it also exhibits metallic properties, more like its metallic neighbors Po and Bi."
Isotopes
There are 41 known isotopes of astatine, with mass numbers of 188 and 190–229. Theoretical modeling suggests that about 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist.
Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. Astatine-210 and most of the lighter isotopes exhibit beta plus decay (positron emission), astatine-217 and heavier isotopes except astatine-218 exhibit beta minus decay, while astatine-211 undergoes electron capture.
The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209.
Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-213m1; its half-life of 110 nanoseconds is shorter than 125 nanoseconds for astatine-213, the shortest-lived ground state.
Natural occurrence
Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 1025 grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on earth at any given moment, to be up to one ounce (about 28 grams).
Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes.
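The quoted inventory of about one trillion astatine-215 atoms corresponds to a vanishingly small mass, which a back-of-the-envelope estimate confirms:

```python
AVOGADRO = 6.022e23
ATOM_COUNT = 1e12      # ~one trillion At-215 atoms (from the text)
MASS_NUMBER = 215

mass_grams = ATOM_COUNT * MASS_NUMBER / AVOGADRO
print(f"Mass of {ATOM_COUNT:.0e} atoms of At-215: {mass_grams:.2g} g")
```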
Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed.
Synthesis
Formation
Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms, or 2.47 × 1014 atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine might be able to be generated via proton irradiation of thorium or uranium to yield radon-211, in turn decaying to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method.
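The activity, atom count, and mass quoted for a production run are mutually consistent, as a short check against the 7.2-hour half-life of astatine-211 shows:

```python
import math

AVOGADRO = 6.022e23
HALF_LIFE_S = 7.2 * 3600   # At-211 half-life (from the text)
ACTIVITY_BQ = 6.6e9        # quoted production run

decay_constant = math.log(2) / HALF_LIFE_S
atoms = ACTIVITY_BQ / decay_constant          # A = lambda * N  =>  N = A / lambda
mass_ng = atoms * 211 / AVOGADRO * 1e9

print(f"Atoms: {atoms:.3g}, mass: {mass_ng:.0f} ng")
```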
The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere, and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. Although higher energies can produce more astatine-211, it will produce unwanted astatine-210 that decays to toxic polonium-210 as well. Instead, the maximum energy of the particle accelerator is set to be below or slightly above the threshold of astatine-210 production, in order to maximize the production of astatine-211 while keeping the amount of astatine-210 at an acceptable level.
Separation methods
Since astatine is the main product of the synthesis, after its formation it must only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam.
Dry
The astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine.
Wet
The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as dibutyl ether, diisopropyl ether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as it is thought that wet extraction methods can provide more consistency. They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry.
Uses and precautions
Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210.
The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. An average alpha particle released by astatine-211 can travel up to 70 μm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offers advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with from one to ten astatine-211 atoms bound per cell.
Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue.
Animal studies show that astatine, similarly to iodine—although to a lesser extent, perhaps because of its slightly more metallic nature—is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At– to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided.
See also
Radiation protection
Notes
References
Bibliography
External links
Astatine at The Periodic Table of Videos (University of Nottingham)
Astatine: Halogen or Metal?
Chemical elements
Chemical elements with face-centered cubic structure
Halogens
Synthetic elements | Astatine | [
"Physics",
"Chemistry"
] | 7,216 | [
"Matter",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Atoms",
"Radioactivity"
] |
902 | https://en.wikipedia.org/wiki/Atom | Atoms are the basic particles of the chemical elements. An atom consists of a nucleus of protons and generally neutrons, surrounded by an electromagnetically bound swarm of electrons. The chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. Atoms with the same number of protons but a different number of neutrons are called isotopes of the same element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. Atoms are smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. They are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
More than 99.9994% of an atom's mass is in the nucleus. Protons have a positive electric charge and neutrons have no charge, so the nucleus is positively charged. The electrons are negatively charged, and this opposing charge is what binds them to the nucleus. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral as a whole. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to attach and detach from each other is responsible for most of the physical changes observed in nature. Chemistry is the science that studies these changes.
History of atomic theory
In philosophy
The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". But this ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton found evidence that matter really is composed of discrete units, and so applied the word atom to those units.
Dalton's law of multiple proportions
In the early 1800s, John Dalton compiled experimental data gathered by him and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in any group of chemical compounds which all contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This pattern suggested that each element combines with other elements in multiples of a basic unit of weight, with each element having a unit of unique weight. Dalton decided to call these units "atoms".
For example, there are two types of tin oxide: one is a grey powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. Dalton concluded that in the grey oxide there is one atom of oxygen for every atom of tin, and in the white oxide there are two atoms of oxygen for every atom of tin (SnO and SnO2).
Dalton also analyzed iron oxides. There is one type of iron oxide that is a black powder which is 78.1% iron and 21.9% oxygen; and there is another iron oxide that is a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. Dalton concluded that in these oxides, for every two atoms of iron, there are two or three atoms of oxygen respectively (Fe2O2 and Fe2O3).
As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2.
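The whole-number ratios in the tin and iron examples above follow directly from the quoted mass percentages. A minimal sketch of the arithmetic in Python (the percentages are taken from the text; the function name and rounding are illustrative only):

```python
# Grams of oxygen combined with 100 g of metal, from the quoted mass percentages.
def oxygen_per_100g_metal(percent_metal, percent_oxygen):
    return 100 * percent_oxygen / percent_metal

grey_tin   = oxygen_per_100g_metal(88.1, 11.9)   # ~13.5 g O per 100 g Sn
white_tin  = oxygen_per_100g_metal(78.7, 21.3)   # ~27.1 g O per 100 g Sn
black_iron = oxygen_per_100g_metal(78.1, 21.9)   # ~28.0 g O per 100 g Fe
red_iron   = oxygen_per_100g_metal(70.4, 29.6)   # ~42.0 g O per 100 g Fe

print(round(white_tin / grey_tin, 2))    # ~2.0  -> 1:2 ratio (SnO vs SnO2)
print(round(red_iron / black_iron, 2))   # ~1.5  -> 2:3 ratio (Fe2O2 vs Fe2O3)
```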
Discovery of the electron
In 1897, J. J. Thomson discovered that cathode rays can be deflected by electric and magnetic fields, which meant that cathode rays are not a form of light but made of electrically charged particles, and their charge was negative given the direction the particles were deflected in. He measured these particles to be 1,700 times lighter than hydrogen (the lightest atom). He called these new particles corpuscles but they were later renamed electrons since these are the particles that carry electricity. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. Thomson explained that an electric current is the passing of electrons from one atom to the next, and when there was no current the electrons embedded themselves in the atoms. This in turn meant that atoms were not indivisible as scientists thought. The atom was composed of electrons whose negative charge was balanced out by some source of positive charge to create an electrically neutral atom. Ions, Thomson explained, must be atoms which have an excess or shortage of electrons.
Discovery of the nucleus
The electrons in the atom logically had to be balanced out by a commensurate amount of positive charge, but Thomson had no idea where this positive charge came from, so he tentatively proposed that it was everywhere in the atom, the atom being in the shape of a sphere. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. Following from this, Thomson imagined that the balance of electrostatic forces would distribute the electrons throughout the sphere in a more or less even manner. Thomson's model is popularly known as the plum pudding model, though neither Thomson nor his colleagues used this analogy. Thomson's model was incomplete: it was unable to predict any other properties of the elements, such as emission spectra and valencies. It was soon rendered obsolete by the discovery of the atomic nucleus.
Between 1908 and 1913, Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They did this to measure the scattering patterns of the alpha particles. They spotted a small number of alpha particles being deflected by angles greater than 90°. This shouldn't have been possible according to the Thomson model of the atom, whose charges were too diffuse to produce a sufficiently strong electric field. The deflections should have all been negligible. Rutherford proposed that the positive charge of the atom is concentrated in a tiny volume at the center of the atom and that the electrons surround this nucleus in a diffuse cloud. This nucleus carried almost all of the atom's mass, the electrons being so very light. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field that could deflect the alpha particles so strongly.
Bohr model
A problem in classical mechanics is that an accelerating charged particle radiates electromagnetic radiation, causing the particle to lose kinetic energy. Circular motion counts as acceleration, which means that an electron orbiting a central charge should spiral down into that nucleus as it loses speed. In 1913, the physicist Niels Bohr proposed a new model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable and why elements absorb and emit electromagnetic radiation in discrete spectra. Bohr's model could only predict the emission spectra of hydrogen, not atoms with more than one electron.
Discovery of protons and neutrons
Back in 1815, William Prout observed that the atomic weights of many elements were multiples of hydrogen's atomic weight, which is in fact true for all of them if one takes isotopes into account. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion is equal to the negative charge of an electron, and these were then the smallest known charged particles. Thomson later found that the positive charge in an atom is a positive multiple of an electron's negative charge. In 1913, Henry Moseley discovered that the frequencies of X-ray emissions from an excited atom were a mathematical function of its atomic number and hydrogen's nuclear charge. In 1919 Rutherford bombarded nitrogen gas with alpha particles and detected hydrogen ions being emitted from the gas, and concluded that they were produced by alpha particles hitting and splitting the nuclei of the nitrogen atoms.
These observations led Rutherford to conclude that the hydrogen nucleus is a singular particle with a positive charge equal to the electron's negative charge. He named this particle "proton" in 1920. The number of protons in an atom (which Rutherford called the "atomic number") was found to be equal to the element's ordinal number on the periodic table and therefore provided a simple and clear-cut way of distinguishing the elements from each other. The atomic weight of each element is higher than its proton number, so Rutherford hypothesized that the surplus weight was carried by unknown particles with no electric charge and a mass equal to that of the proton.
In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick now claimed these particles as Rutherford's neutrons.
The current consensus model
In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed that all particles behave like waves to some extent, and in 1926 Erwin Schroedinger used this idea to develop the Schroedinger equation, which describes electrons as three-dimensional waveforms rather than points in space. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be found. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen.
Structure
Subatomic particles
Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron.
The electron is the least massive of these particles by four orders of magnitude, at about 9.11×10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details.
Protons have a positive charge and a mass of about 1.6726×10⁻²⁷ kg, some 1,836 times that of the electron. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton.
Neutrons have no electrical charge and have a mass of about 1.6749×10⁻²⁷ kg. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick.
In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles.
The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces.
Nucleus
All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other.
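That radius expression is an empirical rule of thumb; a short Python sketch using it (the ~1.07 fm coefficient is a commonly quoted fit, and published values range up to about 1.25 fm, so treat the outputs as order-of-magnitude estimates):

```python
# Empirical nuclear radius: r ≈ r0 * A**(1/3), with r0 ≈ 1.07 fm.
R0_FM = 1.07

def nuclear_radius_fm(nucleon_count):
    return R0_FM * nucleon_count ** (1 / 3)

for name, a in [("hydrogen-1", 1), ("carbon-12", 12), ("uranium-238", 238)]:
    print(f"{name}: ~{nuclear_radius_fm(a):.2f} fm")
# Even uranium-238 comes out below 7 fm, tens of thousands of times smaller
# than the ~10**5 fm radius of the atom as a whole.
```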
Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determine the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay.
The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud.
A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus.
The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.
If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E=mc2, where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate.
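As a worked illustration of that mass–energy bookkeeping, the mass lost when two protons and two neutrons bind into a helium-4 nucleus can be converted to energy with E = mc². The particle masses below are standard reference values quoted for illustration, not figures taken from this article:

```python
# Mass defect of helium-4 and its energy equivalent via E = m * c**2.
U_TO_KG = 1.66054e-27        # one dalton in kilograms
C = 2.99792458e8             # speed of light, m/s
EV = 1.602177e-19            # one electronvolt in joules

m_proton, m_neutron, m_he4 = 1.007276, 1.008665, 4.001506   # nuclear masses, daltons
mass_defect_u = 2 * m_proton + 2 * m_neutron - m_he4        # ~0.030 u "lost" on binding

energy_joules = mass_defect_u * U_TO_KG * C ** 2
print(f"{energy_joules / EV / 1e6:.1f} MeV")   # ~28.3 MeV, the binding energy of helium-4
```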
The fusion of two nuclei that create larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon begins to decrease. That means that a fusion process producing a nucleus that has an atomic number higher than about 26, and a mass number higher than about 60, is an endothermic process. Thus, more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star.
Electron cloud
The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations.
Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation.
Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines.
The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.
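The 13.6 eV figure for hydrogen can be reproduced from the Bohr-model ground-state energy, E₁ = mₑe⁴ / (8ε₀²h²). A minimal sketch with standard physical constants (hard-coded here rather than taken from a library, purely for illustration):

```python
# Ground-state binding (ionization) energy of hydrogen from the Bohr model.
M_E = 9.10938e-31        # electron mass, kg
E_CHARGE = 1.602177e-19  # elementary charge, C
EPS0 = 8.854188e-12      # vacuum permittivity, F/m
H = 6.62607e-34          # Planck constant, J*s

energy_joules = M_E * E_CHARGE ** 4 / (8 * EPS0 ** 2 * H ** 2)
print(f"{energy_joules / E_CHARGE:.1f} eV")   # ~13.6 eV — tiny next to MeV-scale nuclear binding
```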
Properties
Nuclear properties
By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible.
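The bookkeeping in this paragraph — the proton count fixes the element, the neutron count picks the isotope — can be captured in a tiny data structure. A hypothetical Python sketch (the class and names are illustrative, not part of the article):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Nuclide:
    protons: int    # atomic number Z: determines the element
    neutrons: int   # N: distinguishes isotopes of that element

    @property
    def mass_number(self) -> int:
        return self.protons + self.neutrons   # A = Z + N

# The three familiar hydrogen isotopes share Z = 1 and differ only in N.
protium, deuterium, tritium = Nuclide(1, 0), Nuclide(1, 1), Nuclide(1, 2)
print([n.mass_number for n in (protium, deuterium, tritium)])   # [1, 2, 3]
```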
About 339 nuclides occur naturally on Earth, of which 251 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 161 (bringing the total to 251) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 35 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides are known as primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).
For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.1 stable isotopes per element. Twenty-six "monoisotopic elements" have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes.
Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 251 known stable nuclides, only four have both an odd number of protons and an odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10, and nitrogen-14. (Tantalum-180m is odd-odd and observationally stable, but is predicted to decay with a very long half-life.) Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138, and lutetium-176. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects.
Mass
The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons).
The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10⁻²⁷ kg. Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 atom is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of about 207.98 Da.
As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³, the Avogadro constant). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg.
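The mole arithmetic in the last two paragraphs amounts to a pair of multiplications. A minimal sketch, with the Avogadro constant and the carbon-12 example hard-coded for illustration:

```python
# Converting between grams, moles, and atom counts for carbon-12.
AVOGADRO = 6.02214076e23    # atoms per mole
MOLAR_MASS_C12 = 12.0       # grams per mole, exact by the definition of the dalton

sample_grams = 12.0
atoms = sample_grams / MOLAR_MASS_C12 * AVOGADRO
print(f"{atoms:.3e} atoms")                   # ~6.022e23 atoms in 12 g of carbon-12
print(f"{MOLAR_MASS_C12 / AVOGADRO:.3e} g")   # ~1.99e-23 g per carbon-12 atom
```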
Shape and size
Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.
When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds.
Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice that number of hydrogen atoms. A single carat diamond, with a mass of 2×10⁻⁴ kg, contains about 10 sextillion (10²²) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple.
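The drop-of-water figure is easy to reproduce. Assuming a drop of roughly 0.05 mL — the drop size is an assumption here, not a value from the article:

```python
# Rough count of atoms in a ~0.05 mL drop of water.
AVOGADRO = 6.022e23
DROP_GRAMS = 0.05          # assumed drop mass (~0.05 mL of water)
WATER_MOLAR_MASS = 18.02   # g/mol

molecules = DROP_GRAMS / WATER_MOLAR_MASS * AVOGADRO
print(f"oxygen atoms:   {molecules:.1e}")      # ~1.7e21, i.e. about 2 sextillion
print(f"hydrogen atoms: {2 * molecules:.1e}")  # twice as many
```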
Radioactive decay
Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm.
The most common forms of radioactive decay are:
Alpha decay: this process is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number.
Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron.
Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay.
Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission.
Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth.
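That halving rule is the entire calculation in practice. A short sketch using the 7.2-hour half-life of astatine-211 quoted earlier in this document as the worked example:

```python
# Fraction of a radioactive sample remaining after time t, given half-life T: 0.5 ** (t / T).
def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

HALF_LIFE_AT211_H = 7.2
for t in (7.2, 14.4, 24.0):
    print(f"after {t:>4} h: {fraction_remaining(t, HALF_LIFE_AT211_H):.1%} remaining")
# 50.0%, 25.0%, and roughly 10% — one reason astatine-211 must be used quickly.
```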
Magnetic moment
Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin.
The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. Due to the nature of electrons to obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with even number of electrons.
In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field.
The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging.
Energy levels
The potential energy of an electron in an atom is negative relative to its value when the distance from the nucleus goes to infinity; its dependence on the electron's position reaches the minimum inside the nucleus, roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see the time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. a stationary state, while an electron transition to a higher level results in an excited state. The electron's energy increases along with the principal quantum number n because the (average) distance to the nucleus increases. Dependence of the energy on the azimuthal quantum number ℓ is caused not by the electrostatic potential of the nucleus, but by interaction between electrons.
For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model; this difference can be calculated precisely by the Schrödinger equation.
Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties.
The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors.
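For hydrogen, the line positions follow from the 13.6 eV ground-state energy via Eₙ = −13.6 eV / n². A sketch for the familiar red Balmer line (the constants are standard reference values, not figures drawn from this article):

```python
# Photon energy and wavelength for the hydrogen n=3 -> n=2 transition.
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84     # h * c expressed in eV * nm

def transition_energy_ev(n_upper, n_lower):
    return RYDBERG_EV * (1 / n_lower ** 2 - 1 / n_upper ** 2)

energy = transition_energy_ev(3, 2)
print(f"{energy:.2f} eV, wavelength ~{HC_EV_NM / energy:.0f} nm")   # ~1.89 eV, ~656 nm (red)
```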
When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined.
Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect.
If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band.
Valence and bonding behavior
Valency is the combining power of an element. It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.
The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases.
States
Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone.
At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior.
Identification
While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves toward and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then can periodicities that correspond to individual atoms be observed. The method alone is not chemically specific, and cannot identify the atomic species present at the surface.
Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis.
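The bending described here follows from balancing the magnetic force against the centripetal force, which gives r = mv / (qB). A sketch for two singly charged carbon isotopes — the ion speed and field strength below are arbitrary illustrative choices, not instrument values:

```python
# Radius of curvature of a singly charged ion in a magnetic field: r = m * v / (q * B).
U_TO_KG = 1.66054e-27   # dalton to kilograms
Q = 1.602177e-19        # charge of a singly ionized atom, C
V = 1.0e5               # assumed ion speed, m/s
B = 0.5                 # assumed magnetic field, T

for label, mass_u in [("carbon-12", 12.0), ("carbon-13", 13.003)]:
    radius_m = mass_u * U_TO_KG * V / (Q * B)
    print(f"{label}: r ≈ {100 * radius_m:.2f} cm")
# The heavier isotope follows a measurably larger radius, which is how the ion beams separate.
```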
The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry.
Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample.
Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth.
Origin and current state
Baryonic matter forms about 4% of the total energy density of the observable universe, with an average density of about 0.25 particles/m³ (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10³ atoms/m³. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium.
Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants—with the exception of their surface layers—an immense pressure makes electron shells impossible.
Formation
Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. Within about three minutes, Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron.
The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei.
Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details.
Isotopes such as lithium-6, as well as some beryllium and boron are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected.
Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements.
Earth
Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay.
There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore.
The Earth contains approximately 1.33×10⁵⁰ atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter.
Rare and theoretical forms
Superheavy elements
All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with Z > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects.
Exotic matter
Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. In 1996, the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva.
Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics.
See also
Notes
References
Bibliography
Further reading
External links
Atoms in Motion – The Feynman Lectures on Physics
Chemistry
Articles containing video clips | Atom | [
"Physics"
] | 10,285 | [
"Atoms",
"Matter"
] |
904 | https://en.wikipedia.org/wiki/Aluminium | Aluminium (or aluminum in North American English) is a chemical element; it has symbol Al and atomic number 13. Aluminium has a density lower than that of other common metals, about one-third that of steel. It has a great affinity towards oxygen, forming a protective layer of oxide on the surface when exposed to air. Aluminium visually resembles silver, both in its color and in its great ability to reflect light. It is soft, nonmagnetic, and ductile. It has one stable isotope, 27Al, which is highly abundant, making aluminium the twelfth-most common element in the universe. The radioactivity of 26Al leads to it being used in radiometric dating.
Chemically, aluminium is a post-transition metal in the boron group; as is common for the group, aluminium forms compounds primarily in the +3 oxidation state. The aluminium cation Al3+ is small and highly charged; as such, it has more polarizing power, and bonds formed by aluminium have a more covalent character. The strong affinity of aluminium for oxygen leads to the common occurrence of its oxides in nature. Aluminium is found on Earth primarily in rocks in the crust, where it is the third-most abundant element, after oxygen and silicon, rather than in the mantle, and virtually never as the free metal. It is obtained industrially by mining bauxite, a sedimentary rock rich in aluminium minerals.
The discovery of aluminium was announced in 1825 by Danish physicist Hans Christian Ørsted. The first industrial production of aluminium was initiated by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the mass production of aluminium led to its extensive use in industry and everyday life. In the First and Second World Wars, aluminium was a crucial strategic resource for aviation. In 1954, aluminium became the most produced non-ferrous metal, surpassing copper. In the 21st century, most aluminium was consumed in transportation, engineering, construction, and packaging in the United States, Western Europe, and Japan.
Despite its prevalence in the environment, no living organism is known to metabolize aluminium salts, but this aluminium is well tolerated by plants and animals. Because of the abundance of these salts, the potential for a biological role for them is of interest, and studies are ongoing.
Physical characteristics
Isotopes
Of aluminium isotopes, only 27Al is stable. This situation is common for elements with an odd atomic number. It is the only primordial aluminium isotope, i.e. the only one that has existed on Earth in its current form since the formation of the planet. It is therefore a mononuclidic element and its standard atomic weight is virtually the same as that of the isotope. This makes aluminium very useful in nuclear magnetic resonance (NMR), as its single stable isotope has a high NMR sensitivity. The standard atomic weight of aluminium is low in comparison with many other metals.
All other isotopes of aluminium are radioactive. The most stable of these is 26Al: while it was present along with stable 27Al in the interstellar medium from which the Solar System formed, having been produced by stellar nucleosynthesis as well, its half-life is only 717,000 years and therefore a detectable amount has not survived since the formation of the planet. However, minute traces of 26Al are produced from argon in the atmosphere by spallation caused by cosmic ray protons. The ratio of 26Al to 10Be has been used for radiodating of geological processes over 105 to 106 year time scales, in particular transport, deposition, sediment storage, burial times, and erosion. Most meteorite scientists believe that the energy released by the decay of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55 billion years ago.
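Single-nuclide radiodating of the kind sketched here reduces to inverting the half-life law, t = T½ · log₂(N₀/N). The snippet below uses 26Al's 717,000-year half-life for illustration; real 26Al/10Be burial dating compares the surviving amounts of two nuclides, so this shows only the underlying arithmetic:

```python
import math

# Elapsed time from the surviving fraction of a decaying nuclide: t = T_half * log2(N0 / N).
HALF_LIFE_AL26_YEARS = 717_000

def age_years(fraction_remaining):
    return HALF_LIFE_AL26_YEARS * math.log2(1 / fraction_remaining)

for fraction in (0.5, 0.25, 0.1):
    print(f"{fraction:.0%} of the 26Al left -> ~{age_years(fraction):,.0f} years")
# 717,000; 1,434,000; and ~2,400,000 years — on the order of the 10^5 to 10^6 year
# timescales mentioned above.
```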
The remaining isotopes of aluminium, with mass numbers ranging from 21 to 43, all have half-lives well under an hour. Three metastable states are known, all with half-lives under a minute.
Electron shell
An aluminium atom has 13 electrons, arranged in an electron configuration of [Ne] 3s2 3p1, with three electrons beyond a stable noble gas configuration. Accordingly, the combined first three ionization energies of aluminium are far lower than the fourth ionization energy alone. Such an electron configuration is shared with the other well-characterized members of its group, boron, gallium, indium, and thallium; it is also expected for nihonium. Aluminium can surrender its three outermost electrons in many chemical reactions (see below). The electronegativity of aluminium is 1.61 (Pauling scale).
A free aluminium atom has a radius of 143 pm. With the three outermost electrons removed, the radius shrinks to 39 pm for a 4-coordinated atom or 53.5 pm for a 6-coordinated atom. At standard temperature and pressure, aluminium atoms (when not affected by atoms of other elements) form a face-centered cubic crystal system bound by metallic bonding provided by atoms' outermost electrons; hence aluminium (at these conditions) is a metal. This crystal system is shared by many other metals, such as lead and copper; the size of a unit cell of aluminium is comparable to that of those other metals. The system, however, is not shared by the other members of its group: boron has ionization energies too high to allow metallization, thallium has a hexagonal close-packed structure, and gallium and indium have unusual structures that are not close-packed like those of aluminium and thallium. The few electrons that are available for metallic bonding in aluminium are a probable cause for it being soft with a low melting point and low electrical resistivity.
Bulk
Aluminium metal has an appearance ranging from silvery white to dull gray depending on its surface roughness. Aluminium mirrors are the most reflective of all metal mirrors for near ultraviolet and far infrared light. It is also one of the most reflective for light in the visible spectrum, nearly on par with silver in this respect, and the two therefore look similar. Aluminium is also good at reflecting solar radiation, although prolonged exposure to sunlight in air adds wear to the surface of the metal; this may be prevented if aluminium is anodized, which adds a protective layer of oxide on the surface.
The density of aluminium is 2.70 g/cm3, about 1/3 that of steel, much lower than other commonly encountered metals, making aluminium parts easily identifiable through their lightness. Aluminium's low density compared to most other metals arises from the fact that its nuclei are much lighter, while the difference in unit cell size does not compensate for this. The only lighter metals are the metals of groups 1 and 2, which apart from beryllium and magnesium are too reactive for structural use (and beryllium is very toxic). Aluminium is not as strong or stiff as steel, but the low density makes up for this in the aerospace industry and for many other applications where light weight and relatively high strength are crucial.
Pure aluminium is quite soft and lacking in strength. In most applications various aluminium alloys are used instead because of their higher strength and hardness. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminium is ductile, with a percent elongation of 50–70%, and malleable, allowing it to be easily drawn and extruded. It is also easily machined and cast.
Aluminium is an excellent thermal and electrical conductor, having around 60% the conductivity of copper, both thermal and electrical, while having only 30% of copper's density. Aluminium is capable of superconductivity, with a superconducting critical temperature of 1.2 kelvin and a critical magnetic field of about 100 gauss (10 milliteslas). It is paramagnetic and thus essentially unaffected by static magnetic fields. The high electrical conductivity, however, means that it is strongly affected by alternating magnetic fields through the induction of eddy currents.
Chemistry
Aluminium combines characteristics of pre- and post-transition metals. Since it has few available electrons for metallic bonding, like its heavier group 13 congeners, it has the characteristic physical properties of a post-transition metal, with longer-than-expected interatomic distances. Furthermore, as Al3+ is a small and highly charged cation, it is strongly polarizing and bonding in aluminium compounds tends towards covalency; this behavior is similar to that of beryllium (Be2+), and the two display an example of a diagonal relationship.
The underlying core under aluminium's valence shell is that of the preceding noble gas, whereas those of its heavier congeners gallium, indium, thallium, and nihonium also include a filled d-subshell and in some cases a filled f-subshell. Hence, the inner electrons of aluminium shield the valence electrons almost completely, unlike those of aluminium's heavier congeners. As such, aluminium is the most electropositive metal in its group, and its hydroxide is in fact more basic than that of gallium. Aluminium also bears minor similarities to the metalloid boron in the same group: AlX3 compounds are valence isoelectronic to BX3 compounds (they have the same valence electronic structure), and both behave as Lewis acids and readily form adducts. Additionally, one of the main motifs of boron chemistry is regular icosahedral structures, and aluminium forms an important part of many icosahedral quasicrystal alloys, including the Al–Zn–Mg class.
Aluminium has a high chemical affinity to oxygen, which renders it suitable for use as a reducing agent in the thermite reaction. A fine powder of aluminium reacts explosively on contact with liquid oxygen; under normal conditions, however, aluminium forms a thin oxide layer (~5 nm at room temperature) that protects the metal from further corrosion by oxygen, water, or dilute acid, a process termed passivation. Because of its general resistance to corrosion, aluminium is one of the few metals that retains silvery reflectance in finely powdered form, making it an important component of silver-colored paints. Aluminium is not attacked by oxidizing acids because of its passivation. This allows aluminium to be used to store reagents such as nitric acid, concentrated sulfuric acid, and some organic acids.
In hot concentrated hydrochloric acid, aluminium reacts with water with evolution of hydrogen, and in aqueous sodium hydroxide or potassium hydroxide at room temperature to form aluminates—protective passivation under these conditions is negligible. Aqua regia also dissolves aluminium. Aluminium is corroded by dissolved chlorides, such as common sodium chloride, which is why household plumbing is never made from aluminium. The oxide layer on aluminium is also destroyed by contact with mercury due to amalgamation or with salts of some electropositive metals. As such, the strongest aluminium alloys are less corrosion-resistant due to galvanic reactions with alloyed copper, and aluminium's corrosion resistance is greatly reduced by aqueous salts, particularly in the presence of dissimilar metals.
Aluminium reacts with most nonmetals upon heating, forming compounds such as aluminium nitride (AlN), aluminium sulfide (Al2S3), and the aluminium halides (AlX3). It also forms a wide range of intermetallic compounds involving metals from every group on the periodic table.
Inorganic compounds
The vast majority of compounds, including all aluminium-containing minerals and all commercially significant aluminium compounds, feature aluminium in the oxidation state 3+. The coordination number of such compounds varies, but generally Al3+ is either six- or four-coordinate. Almost all compounds of aluminium(III) are colorless.
In aqueous solution, Al3+ exists as the hexaaqua cation [Al(H2O)6]3+, which has an approximate acid dissociation constant Ka of 10^−5 (pKa ≈ 5). Such solutions are acidic as this cation can act as a proton donor and progressively hydrolyze until a precipitate of aluminium hydroxide, Al(OH)3, forms. This is useful for clarification of water, as the precipitate nucleates on suspended particles in the water, hence removing them. Increasing the pH even further leads to the hydroxide dissolving again as aluminate, [Al(H2O)2(OH)4]−, is formed.
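As a rough illustration of why such solutions are acidic, the following Python sketch estimates the pH of a dilute aluminium salt solution from the Ka quoted above; the 0.1 mol/L concentration and the weak-acid approximation are illustrative assumptions, not values from the article.

```python
import math

Ka = 1e-5   # approximate acid dissociation constant of [Al(H2O)6]3+ (from the text)
C = 0.1     # assumed total aluminium concentration in mol/L (illustrative)

# Weak-acid approximation: [H+] ~ sqrt(Ka * C), valid while [H+] << C.
h_plus = math.sqrt(Ka * C)
pH = -math.log10(h_plus)
print(f"[H+] ~ {h_plus:.1e} mol/L, pH ~ {pH:.1f}")  # pH ~ 3.0
```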
Aluminium hydroxide forms both salts and aluminates and dissolves in acid and alkali, as well as on fusion with acidic and basic oxides. This behavior of Al(OH)3 is termed amphoterism and is characteristic of weakly basic cations that form insoluble hydroxides and whose hydrated species can also donate their protons. One effect of this is that aluminium salts with weak acids are hydrolyzed in water to the aquated hydroxide and the corresponding nonmetal hydride: for example, aluminium sulfide yields hydrogen sulfide. However, some salts like aluminium carbonate exist in aqueous solution but are unstable as such; and only incomplete hydrolysis takes place for salts with strong acids, such as the halides, nitrate, and sulfate. For similar reasons, anhydrous aluminium salts cannot be made by heating their "hydrates": hydrated aluminium chloride is in fact not AlCl3·6H2O but [Al(H2O)6]Cl3, and the Al–O bonds are so strong that heating is not sufficient to break them and form Al–Cl bonds instead:
2 [Al(H2O)6]Cl3 → Al2O3 + 6 HCl + 9 H2O (on heating)
All four trihalides are well known. Unlike the structures of the three heavier trihalides, aluminium fluoride (AlF3) features six-coordinate aluminium, which explains its involatility and insolubility as well as high heat of formation. Each aluminium atom is surrounded by six fluorine atoms in a distorted octahedral arrangement, with each fluorine atom being shared between the corners of two octahedra. Such {AlF6} units also exist in complex fluorides such as cryolite, Na3AlF6. AlF3 melts at a much higher temperature than the other trihalides and is made by reaction of aluminium oxide with hydrogen fluoride gas at elevated temperature.
With heavier halides, the coordination numbers are lower. The other trihalides are dimeric or polymeric with tetrahedral four-coordinate aluminium centers. Aluminium trichloride (AlCl3) has a layered polymeric structure below its melting point but transforms on melting to Al2Cl6 dimers. At higher temperatures those increasingly dissociate into trigonal planar AlCl3 monomers similar to the structure of BCl3. Aluminium tribromide and aluminium triiodide form Al2X6 dimers in all three phases and hence do not show such significant changes of properties upon phase change. These materials are prepared by treating aluminium with the halogen. The aluminium trihalides form many addition compounds or complexes; their Lewis acidic nature makes them useful as catalysts for the Friedel–Crafts reactions. Aluminium trichloride has major industrial uses involving this reaction, such as in the manufacture of anthraquinones and styrene; it is also often used as the precursor for many other aluminium compounds and as a reagent for converting nonmetal fluorides into the corresponding chlorides (a transhalogenation reaction).
Aluminium forms one stable oxide with the chemical formula Al2O3, commonly called alumina. It can be found in nature in the mineral corundum, α-alumina; there is also a γ-alumina phase. Its crystalline form, corundum, is very hard (Mohs hardness 9), has a high melting point, has very low volatility, is chemically inert, and is a good electrical insulator; it is often used in abrasives (such as toothpaste), as a refractory material, and in ceramics, as well as being the starting material for the electrolytic production of aluminium. Sapphire and ruby are impure corundum contaminated with trace amounts of other metals. The two main oxide-hydroxides, AlO(OH), are boehmite and diaspore. There are three main trihydroxides: bayerite, gibbsite, and nordstrandite, which differ in their crystalline structure (polymorphs). Many other intermediate and related structures are also known. Most are produced from ores by a variety of wet processes using acid and base. Heating the hydroxides leads to formation of corundum. These materials are of central importance to the production of aluminium and are themselves extremely useful. Some mixed oxide phases are also very useful, such as spinel (MgAl2O4), Na-β-alumina (NaAl11O17), and tricalcium aluminate (Ca3Al2O6, an important mineral phase in Portland cement).
The only stable chalcogenides under normal conditions are aluminium sulfide (Al2S3), selenide (Al2Se3), and telluride (Al2Te3). All three are prepared by direct reaction of their elements at elevated temperatures and quickly hydrolyze completely in water to yield aluminium hydroxide and the respective hydrogen chalcogenide. As aluminium is a small atom relative to these chalcogens, these have four-coordinate tetrahedral aluminium with various polymorphs having structures related to wurtzite, with two-thirds of the possible metal sites occupied either in an orderly (α) or random (β) fashion; the sulfide also has a γ form related to γ-alumina, and an unusual high-temperature hexagonal form where half the aluminium atoms have tetrahedral four-coordination and the other half have trigonal bipyramidal five-coordination.
Four pnictides – aluminium nitride (AlN), aluminium phosphide (AlP), aluminium arsenide (AlAs), and aluminium antimonide (AlSb) – are known. They are all III-V semiconductors isoelectronic to silicon and germanium, all of which but AlN have the zinc blende structure. All four can be made by high-temperature (and possibly high-pressure) direct reaction of their component elements.
Aluminium alloys well with most other metals (with the exception of most alkali metals and group 13 metals), and over 150 intermetallics with other metals are known. Preparation involves heating the constituent metals together in a certain proportion, followed by gradual cooling and annealing. Bonding in them is predominantly metallic and the crystal structure primarily depends on the efficiency of packing.
There are few compounds with lower oxidation states. A few aluminium(I) compounds exist: AlF, AlCl, AlBr, and AlI exist in the gaseous phase when the respective trihalide is heated with aluminium, and at cryogenic temperatures. A stable derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4. Al2O and Al2S also exist but are very unstable. Very simple aluminium(II) compounds are invoked or observed in the reactions of Al metal with oxidants. For example, aluminium monoxide, AlO, has been detected in the gas phase after explosion and in stellar absorption spectra. More thoroughly investigated are compounds of the formula R4Al2 which contain an Al–Al bond and where R is a large organic ligand.
Organoaluminium compounds and related hydrides
A variety of compounds of empirical formula AlR3 and AlR1.5Cl1.5 exist. The aluminium trialkyls and triaryls are reactive, volatile, and colorless liquids or low-melting solids. They catch fire spontaneously in air and react with water, thus necessitating precautions when handling them. They often form dimers, unlike their boron analogues, but this tendency diminishes for branched-chain alkyls (e.g. Pri, Bui, Me3CCH2); for example, triisobutylaluminium exists as an equilibrium mixture of the monomer and dimer. These dimers, such as trimethylaluminium (Al2Me6), usually feature tetrahedral Al centers formed by dimerization with some alkyl group bridging between both aluminium atoms. They are hard acids and react readily with ligands, forming adducts. In industry, they are mostly used in alkene insertion reactions, as discovered by Karl Ziegler, most importantly in "growth reactions" that form long-chain unbranched primary alkenes and alcohols, and in the low-pressure polymerization of ethene and propene. There are also some heterocyclic and cluster organoaluminium compounds involving Al–N bonds.
The industrially most important aluminium hydride is lithium aluminium hydride (LiAlH4), which is used as a reducing agent in organic chemistry. It can be produced from lithium hydride and aluminium trichloride. The simplest hydride, aluminium hydride or alane, is not as important. It is a polymer with the formula (AlH3)n, in contrast to the corresponding boron hydride that is a dimer with the formula (BH3)2.
Natural occurrence
Space
Aluminium's per-particle abundance in the Solar System is 3.15 ppm (parts per million). It is the twelfth most abundant of all elements and third most abundant among the elements that have odd atomic numbers, after hydrogen and nitrogen. The only stable isotope of aluminium, 27Al, is the eighteenth most abundant nucleus in the universe. It is created almost entirely after fusion of carbon in massive stars that will later become Type II supernovas: this fusion creates 26Mg, which upon capturing free protons and neutrons, becomes aluminium. Some smaller quantities of 27Al are created in hydrogen burning shells of evolved stars, where 26Mg can capture free protons. Essentially all aluminium now in existence is 27Al. 26Al was present in the early Solar System with abundance of 0.005% relative to 27Al but its half-life of 728,000 years is too short for any original nuclei to survive; 26Al is therefore extinct. Unlike for 27Al, hydrogen burning is the primary source of 26Al, with the nuclide emerging after a nucleus of 25Mg catches a free proton. However, the trace quantities of 26Al that do exist are the most common gamma ray emitter in the interstellar gas; if the original 26Al were still present, gamma ray maps of the Milky Way would be brighter.
Earth
Overall, the Earth is about 1.59% aluminium by mass (seventh in abundance by mass). Aluminium occurs in greater proportion in the Earth's crust than in the universe at large. This is because aluminium easily forms the oxide and becomes bound into rocks and stays in the Earth's crust, while less reactive metals sink to the core. In the Earth's crust, aluminium is the most abundant metallic element (8.23% by mass) and the third most abundant of all elements (after oxygen and silicon). A large number of silicates in the Earth's crust contain aluminium. In contrast, the Earth's mantle is only 2.38% aluminium by mass. Aluminium also occurs in seawater at a concentration of 0.41 µg/kg.
Because of its strong affinity for oxygen, aluminium is almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most common group of minerals in the Earth's crust, are aluminosilicates. Aluminium also occurs in the minerals beryl, cryolite, garnet, spinel, and turquoise. Impurities in Al2O3, such as chromium and iron, yield the gemstones ruby and sapphire, respectively. Native aluminium metal is extremely rare and can only be found as a minor phase in low oxygen fugacity environments, such as the interiors of certain volcanoes. Native aluminium has been reported in cold seeps in the northeastern continental slope of the South China Sea. It is possible that these deposits resulted from bacterial reduction of tetrahydroxoaluminate Al(OH)4−.
Although aluminium is a common and widespread element, not all aluminium minerals are economically viable sources of the metal. Almost all metallic aluminium is produced from the ore bauxite (AlOx(OH)3–2x). Bauxite occurs as a weathering product of low iron and silica bedrock in tropical climatic conditions. In 2017, most bauxite was mined in Australia, China, Guinea, and India.
History
The history of aluminium has been shaped by usage of alum. The first written record of alum, made by Greek historian Herodotus, dates back to the 5th century BCE. The ancients are known to have used alum as a dyeing mordant and for city defense. After the Crusades, alum, an indispensable good in the European fabric industry, was a subject of international commerce; it was imported to Europe from the eastern Mediterranean until the mid-15th century.
The nature of alum remained unknown. Around 1530, Swiss physician Paracelsus suggested alum was a salt of an earth of alum. In 1595, German doctor and chemist Andreas Libavius experimentally confirmed this. In 1722, German chemist Friedrich Hoffmann announced his belief that the base of alum was a distinct earth. In 1754, German chemist Andreas Sigismund Marggraf synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash.
Attempts to produce aluminium date back to 1760. The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium (the Wöhler process) and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium.
As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Étienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than potassium, which Wöhler had used. Even then, aluminium was still not of great purity, and the aluminium produced differed in properties from sample to sample. Because of its electricity-conducting capacity, aluminium was used as the cap of the Washington Monument, completed in 1885, the tallest building in the world at the time. The non-corroding metal cap was intended to serve as a lightning rod peak.
The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of aluminium is based on the Bayer and Hall–Héroult processes.
As large-scale production caused aluminium prices to drop, the metal became widely used in jewelry, eyeglass frames, optical instruments, tableware, foil, and other everyday items in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light strong airframes; during World War II, demand by major governments for aviation was even higher.
By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and it was increasingly used in military engineering, for both airplanes and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined together, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and employed as a storage for drinks in 1958.
Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; 10,000,000 tons in 1971. In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013.
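A short calculation, using only the production figures cited above, gives a sense of the average growth rate over that period; the sketch below is illustrative and assumes nothing beyond those two data points.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# 6,800 metric tons in 1900 -> 50,000,000 metric tons in 2013 (figures from the text)
rate = cagr(6_800, 50_000_000, 2013 - 1900)
print(f"average growth of about {rate * 100:.1f}% per year")  # roughly 8% per year
```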
The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars). Extraction and processing costs were lowered by technological progress and economies of scale. However, the need to exploit lower-grade, poorer-quality deposits and fast-increasing input costs (above all, energy) raised the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy costs. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up costs for electricity.
Etymology
The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, the primary naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu- meaning "bitter" or "beer".
Origins
British chemist Humphry Davy, who performed a number of experiments aimed to isolate the metal, is credited as the person who named the element. The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. It appeared that the name was created from the English word alum and the Latin suffix -ium; but it was customary then to give elements names originating in Latin, so this name was not adopted universally. This name was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English name alum does not come directly from Latin, whereas alumine/alumina comes from the Latin word alumen (upon declension, alumen changes to alumin-).
One example was Essai sur la Nomenclature chimique (July 1811), written in French by a Swedish chemist, Jöns Jacob Berzelius, in which the name aluminium is given to the element that would be synthesized from alum. (Another article in the same journal issue also refers to the metal whose oxide is the basis of sapphire, i.e. the same metal, as aluminium.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The next year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since. Their usage is currently regional: aluminum dominates in the United States and Canada; aluminium is prevalent in the rest of the English-speaking world.
Spelling
In 1812, British scientist Thomas Young wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he thought had a "less classical sound". This name persisted: although the aluminum spelling was occasionally used in Britain, the American scientific language used aluminum from the start.
Ludwig Wilhelm Gilbert had proposed Thonerde-metall, after the German "Thonerde" for alumina, in his Annalen der Physik, but that name never caught on at all, even in Germany. Joseph W. Richards in 1891 found just one occurrence of argillium in Swedish, from the French "argille" for clay. The French themselves had used aluminium from the start. However, in England and Germany Davy's spelling aluminum was initially used, until German chemist Friedrich Wöhler published his account of the Wöhler process in 1827, in which he used the spelling aluminium; this caused that spelling's largely wholesale adoption in England and Germany, with the exception of a small number of what Richards characterized as "patriotic" English chemists, "averse to foreign innovations", who occasionally still used aluminum.
Most scientists throughout the world used aluminium in the 19th century, and it was entrenched in several other European languages, such as French, German, and Dutch.
In 1828, an American lexicographer, Noah Webster, entered only the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling gained usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903. It is unknown whether this spelling was introduced by mistake or intentionally, but Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the United States, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; in the next decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling.
The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, they recognized aluminum as an acceptable variant; the most recent 2005 edition of the IUPAC nomenclature of inorganic chemistry also acknowledges this spelling. IUPAC official publications use the aluminium spelling as primary, and they list both where it is appropriate.
Production and refinement
The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium.
Aluminium production is highly energy-consuming, and so the producers tend to locate smelters in places where electric power is both plentiful and inexpensive. Production of one kilogram of aluminium requires 7 kilograms of oil energy equivalent, as compared to 1.5 kilograms for steel and 2 kilograms for plastic. As of 2023, the world's largest producers of aluminium were China, Russia, India, Canada, and the United Arab Emirates, with China by far the top producer, holding a world share of over 55%.
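To put the oil-equivalent figures above into more familiar energy units, the following sketch applies the standard conversion of 1 kilogram of oil equivalent = 41.868 MJ; that conversion factor is the only value not taken from the text.

```python
KGOE_MJ = 41.868  # megajoules per kilogram of oil equivalent (standard conversion)

oil_equivalent_per_kg = {"aluminium": 7.0, "steel": 1.5, "plastic": 2.0}  # from the text

for material, kgoe in oil_equivalent_per_kg.items():
    mj = kgoe * KGOE_MJ
    kwh = mj / 3.6
    print(f"{material:9s}: ~{mj:.0f} MJ/kg (~{kwh:.0f} kWh/kg)")
# aluminium: ~293 MJ/kg (~81 kWh/kg); steel: ~63 MJ/kg; plastic: ~84 MJ/kg
```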
According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is unevenly distributed: much of it is in more-developed countries rather than less-developed countries.
Bayer process
Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then is ground. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite as soluble aluminate while converting impurities into relatively insoluble compounds:
Al(OH)3 + Na+ + OH− → Na+ + [Al(OH)4]−
After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, (if needed) purified, and recycled.
Hall–Héroult process
The conversion of alumina to aluminium is achieved by the Hall–Héroult process. In this energy-intensive process, a solution of alumina in a molten mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium sinks to the bottom of the solution and is tapped off, and is usually cast into large blocks called aluminium billets for further processing.
Anodes of the electrolysis cell are made of carbon, the material most resistant to fluoride corrosion, and are either baked in place during the process or prebaked. The former, also called Söderberg anodes, are less power-efficient, and fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the power, energy, and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity for them is not required because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years following a failure of the cathode.
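The consumption rates quoted above can be scaled up to a whole smelter; in the sketch below the 100,000-tonne annual output is a purely illustrative assumption, not a figure from the article.

```python
annual_output_kg = 100_000 * 1_000   # assumed smelter output: 100,000 t of aluminium per year

anode_rate = (0.4, 0.5)              # kg of carbon anode consumed per kg of aluminium (from the text)
cathode_rate = (0.02, 0.04)          # kg of cathode consumed per kg of aluminium (from the text)

anode_t = [r * annual_output_kg / 1_000 for r in anode_rate]
cathode_t = [r * annual_output_kg / 1_000 for r in cathode_rate]
print(f"anode carbon consumed: {anode_t[0]:,.0f}-{anode_t[1]:,.0f} t/year")
print(f"cathode consumed:      {cathode_t[0]:,.0f}-{cathode_t[1]:,.0f} t/year")
```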
The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process. This process involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%.
Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of electricity generated in the United States. Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible.
Recycling
Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%.
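Combining the recycling-energy figure above with the roughly 7 kg of oil equivalent per kilogram cited earlier for primary production gives a rough estimate of the savings; the 1,000 kg scrap batch in the sketch below is an illustrative assumption.

```python
KGOE_MJ = 41.868                                # MJ per kg of oil equivalent (standard conversion)
primary_mj_per_kg = 7.0 * KGOE_MJ               # ~293 MJ/kg from ore (figure cited earlier in the article)
recycled_mj_per_kg = 0.05 * primary_mj_per_kg   # recycling needs only ~5% of that energy (from the text)

scrap_kg = 1_000                                # assumed scrap batch (illustrative)
dross_loss = 0.15                               # up to 15% of input lost as dross (from the text)
recovered_kg = scrap_kg * (1 - dross_loss)

saved_mj = recovered_kg * (primary_mj_per_kg - recycled_mj_per_kg)
print(f"metal recovered: {recovered_kg:.0f} kg")
print(f"energy saved versus primary production: ~{saved_mj / 1000:.0f} GJ")
```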
White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases including, among others, acetylene, hydrogen sulfide and significant amounts of ammonia. Despite these difficulties, the waste is used as a filler in asphalt and concrete. Its potential for hydrogen production has also been considered and researched.
Applications
Metal
The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons).
Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin), with the levels of other metals in a few percent by weight. Aluminium, both wrought and cast, has been alloyed with manganese, silicon, magnesium, copper, and zinc, among others.
The major uses for aluminium are in:
Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density;
Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof;
Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important;
Electricity-related uses (conductor alloys, motors, and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion;
A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage;
Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength.
Compounds
The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines. Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent.
Several sulfates of aluminium have industrial and commercial application. Aluminium sulfate (in its hydrate form) is produced on an annual scale of several million metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, deodorizing of mineral oils, in leather tanning, and in production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in chemical and petrochemical industries, the dyeing industry, and in synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of solidification of cement.
Many aluminium compounds have niche applications, for example:
Aluminium acetate in solution is used as an astringent.
Aluminium phosphate is used in the manufacture of glass, ceramic, pulp and paper products, cosmetics, paints, varnishes, and in dental cement.
Aluminium hydroxide is used as an antacid, and mordant; it is used also in water purification, the manufacture of glass and ceramics, and in the waterproofing of fabrics.
Lithium aluminium hydride is a powerful reducing agent used in organic chemistry.
Organoaluminiums are used as Lewis acids and co-catalysts.
Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene.
Aqueous aluminium ions (such as aqueous aluminium sulfate) are used to treat against fish parasites such as Gyrodactylus salaris.
In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant. Until 2004, most adjuvanted vaccines were aluminium-adjuvanted.
Biology
Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which, scaled to a 70 kg person, corresponds to about 435 grams (roughly one pound).
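The figure above follows from a simple scaling of the LD50 to body mass; the sketch below makes the arithmetic explicit, with the 70 kg body mass as an illustrative assumption rather than a value from the article.

```python
ld50_mg_per_kg = 6207        # oral LD50 of aluminium sulfate in mice (from the text)
body_mass_kg = 70            # assumed adult body mass (illustrative)

equivalent_dose_g = ld50_mg_per_kg * body_mass_kg / 1_000
print(f"equivalent dose: ~{equivalent_dose_g:.0f} g")  # ~434 g, roughly one pound
```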
Toxicity
Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed will leave the body in feces; most of the small part of it that enters the bloodstream will be excreted via urine; nevertheless some aluminium does pass the blood-brain barrier and is lodged preferentially in the brains of Alzheimer's patients. Evidence published in 1989 indicates that, for Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus.
Effects
Aluminium, although rarely, can cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for excess gastric acidity control) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia.
During the 1988 Camelford water pollution incident people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded it was unlikely that this had caused long-term health problems.
Aluminium has been suspected of being a possible cause of Alzheimer's disease, but research into this for over 40 years has found no good evidence of a causal effect.
Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium.
Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard.
Exposure routes
Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more than aluminium from water. Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels of aluminium are mostly limited to miners, aluminium production workers, and dialysis patients.
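Scaling the European dietary-exposure range above to a single person, and comparing it with the 40 mg/day per kilogram threshold mentioned in the toxicity section, gives a sense of the margins involved; the 70 kg body mass in the sketch below is an illustrative assumption.

```python
body_mass_kg = 70                   # assumed body mass (illustrative)
weekly_mg_per_kg = (0.2, 1.5)       # typical European dietary exposure range (from the text)

daily_mg = [r * body_mass_kg / 7 for r in weekly_mg_per_kg]
threshold_mg_per_day = 40 * body_mass_kg  # review threshold of 40 mg/day per kg (from the text)

print(f"typical intake: {daily_mg[0]:.0f}-{daily_mg[1]:.0f} mg/day")       # ~2-15 mg/day
print(f"review threshold: {threshold_mg_per_day} mg/day")                  # far above typical intake
```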
Consumption of antacids, antiperspirants, vaccines, and cosmetics provide possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues.
Treatment
In case of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation therapy. However, this should be applied with caution, as it reduces not only aluminium body levels but also those of other metals such as copper or iron.
Environmental effects
High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment at coal-fired power plants or incinerators. Aluminium in the air is washed out by rain or normally settles out, but small particles of aluminium remain in the air for a long time.
Acidic precipitation is the main natural factor mobilizing aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the main factor in the presence of aluminium in salt water and freshwater is the industrial processes that also release aluminium into the air.
In water, aluminium acts as a toxic agent on gill-breathing animals such as fish when the water is acidic, in which aluminium may precipitate on gills, which causes loss of plasma- and hemolymph ions leading to osmoregulatory failure. Organic complexes of aluminium may be easily absorbed and interfere with metabolism in mammals and birds, even though this rarely happens in practice.
Aluminium is primary among the factors that reduce plant growth on acidic soils. Although it is generally harmless to plant growth in pH-neutral soils, in acid soils the concentration of toxic Al3+ cations increases and disturbs root growth and function. Wheat has developed a tolerance to aluminium, releasing organic compounds that bind to harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism.
Aluminium production poses its own challenges to the environment at each step of the production process. The major challenge is the greenhouse gas emissions. These gases result from electrical consumption of the smelters and the byproducts of processing. The most potent of these gases are perfluorocarbons from the smelting process. Released sulfur dioxide is one of the primary precursors of acid rain.
Biodegradation of metallic aluminium is extremely rare; most aluminium-corroding organisms do not directly attack or consume the aluminium, but instead produce corrosive wastes. The fungus Geotrichum candidum can consume the aluminium in compact discs. The bacterium Pseudomonas aeruginosa and the fungus Cladosporium resinae are commonly detected in aircraft fuel tanks that use kerosene-based fuels (not avgas), and laboratory cultures can degrade aluminium.
See also
Aluminium granules
Aluminium joining
Aluminium–air battery
Aluminized steel, for corrosion resistance and other properties
Aluminized screen, for display devices
Aluminized cloth, to reflect heat
Aluminized mylar, to reflect heat
Panel edge staining
Quantum clock
Notes
References
Bibliography
Further reading
Mimi Sheller, Aluminum Dreams: The Making of Light Modernity. Cambridge, Mass.: Massachusetts Institute of Technology Press, 2014.
External links
Aluminium at The Periodic Table of Videos (University of Nottingham)
Toxicological Profile for Aluminum (PDF) (September 2008) – 357-page report from the United States Department of Health and Human Services, Public Health Service, Agency for Toxic Substances and Disease Registry
Aluminum entry (last reviewed 30 October 2019) in the NIOSH Pocket Guide to Chemical Hazards published by the CDC's National Institute for Occupational Safety and Health
Current and historical prices (1998–present) for aluminum futures on the global commodities market
Chemical elements
Post-transition metals
Aluminium
Electrical conductors
Pyrotechnic fuels
Airship technology
Reducing agents
E-number additives
Native element minerals
Chemical elements with face-centered cubic structure | Aluminium | [
"Physics",
"Chemistry"
] | 11,070 | [
"Chemical elements",
"Redox",
"Aluminium alloys",
"Reducing agents",
"Materials",
"Alloys",
"Electrical conductors",
"Atoms",
"Matter"
] |
928 | https://en.wikipedia.org/wiki/Axiom | An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word axíōma, meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'.
The precise definition varies across fields of study. In classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question. In modern logic, an axiom is a premise or starting point for reasoning.
In mathematics, an axiom may be a "logical axiom" or a "non-logical axiom". Logical axioms are taken to be true within the system of logic they define and are often shown in symbolic form (e.g., (A and B) implies A), while non-logical axioms are substantive assertions about the elements of the domain of a specific mathematical theory, for example a + 0 = a in integer arithmetic.
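The logical axiom mentioned above can be checked mechanically; the short Python sketch below verifies, by exhaustive enumeration of truth values, that (A and B) implies A holds under every assignment.

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Check every combination of truth values for A and B.
assert all(implies(a and b, a) for a, b in product([False, True], repeat=2))
print("(A and B) implies A holds in all cases")
```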
Non-logical axioms may also be called "postulates", "assumptions" or "proper axioms". In most cases, a non-logical axiom is simply a formal logical expression used in deduction to build a mathematical theory, and might or might not be self-evident in nature (e.g., the parallel postulate in Euclidean geometry). To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there are typically many ways to axiomatize a given mathematical domain.
Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics.
Etymology
The word axiom comes from the Greek word axíōma, a verbal noun from the verb axioein, meaning "to deem worthy", but also "to require", which in turn comes from áxios, meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers and mathematicians, axioms were taken to be immediately evident propositions, foundational and common to many fields of investigation, and self-evidently true without any further argument or proof.
The root meaning of the word postulate is to "demand"; for instance, Euclid demands that one agree that some things can be done (e.g., any two points can be joined by a straight line).
Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property." Boethius translated 'postulate' as petitio and called the axioms notiones communes but in later manuscripts this usage was not always strictly kept.
Historical development
Early Greeks
The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference) was developed by the ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are thus the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, in the case of mathematics) must be proven with the aid of these basic assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid.
The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's posterior analytics is a definitive exposition of the classical view.
An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that:
When an equal amount is taken from equals, an equal amount results.
At the foundation of the various sciences lay certain additional hypotheses that were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences, the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Aristotle warns that the content of a science cannot be successfully communicated if the learner is in doubt about the truth of the postulates.
The classical approach is well-illustrated by Euclid's Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common notions" (very basic, self-evident assertions).
Postulates
It is possible to draw a straight line from any point to any other point.
It is possible to extend a line segment continuously in both directions.
It is possible to describe a circle with any center and any radius.
It is true that all right angles are equal to one another.
("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles.
Common notions
Things which are equal to the same thing are also equal to one another.
If equals are added to equals, the wholes are equal.
If equals are subtracted from equals, the remainders are equal.
Things which coincide with one another are equal to one another.
The whole is greater than the part.
Modern development
A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement.
Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. However, by throwing out Euclid's fifth postulate, one can get theories that have meaning in wider contexts (e.g., hyperbolic geometry). As such, one must simply be prepared to use labels such as "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that it is useful to regard postulates as purely formal statements, and not as facts based on experience.
When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all.
It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system.
Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development.
Another lesson learned in modern mathematics is to examine purported proofs carefully for hidden assumptions.
In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow – by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom.
It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms.
In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here, the emergence of Russell's paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent.
The formalist project suffered a setback a century ago, when Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example) to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory.
It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics.
Other sciences
Experimental sciences, as opposed to mathematics and logic, also have general founding assertions from which deductive reasoning can be built so as to express propositions that predict properties, either still general or much more specialized to a specific experimental context. For instance, Newton's laws in classical mechanics, Maxwell's equations in classical electromagnetism, Einstein's equation in general relativity, Mendel's laws of genetics, Darwin's law of natural selection, etc. These founding assertions are usually called principles or postulates so as to distinguish them from mathematical axioms.
As a matter of fact, the role of axioms in mathematics and of postulates in experimental sciences is different. In mathematics one neither "proves" nor "disproves" an axiom. A set of mathematical axioms gives a set of rules that fix a conceptual realm, in which the theorems logically follow. In contrast, in experimental sciences, a set of postulates must allow deducing results that match or do not match experimental results. If postulates do not allow deducing experimental predictions, they do not set a scientific conceptual framework and have to be completed or made more accurate. If the postulates allow deducing predictions of experimental results, the comparison with experiments makes it possible to falsify the theory that the postulates establish. A theory is considered valid as long as it has not been falsified.
The transition between mathematical axioms and scientific postulates is always slightly blurred, especially in physics. This is due to the heavy use of mathematical tools to support the physical theories. For instance, the introduction of Newton's laws rarely establishes as a prerequisite either the Euclidean geometry or the differential calculus that they presuppose. The blurring became more apparent when Albert Einstein first introduced special relativity, where the invariant quantity is no longer the Euclidean length l (defined as l² = x² + y² + z²) but the Minkowski spacetime interval s (defined as s² = c²t² − x² − y² − z²), and then general relativity, where flat Minkowskian geometry is replaced with pseudo-Riemannian geometry on curved manifolds.
In quantum physics, two sets of postulates have coexisted for some time, which provide a very nice example of falsification. The 'Copenhagen school' (Niels Bohr, Werner Heisenberg, Max Born) developed an operational approach with a complete mathematical formalism that involves the description of quantum systems by vectors ('states') in a separable Hilbert space, and physical quantities as linear operators that act in this Hilbert space. This approach is fully falsifiable and has so far produced the most accurate predictions in physics. But it has the unsatisfactory aspect of not allowing answers to questions one would naturally ask. For this reason, another 'hidden variables' approach was developed for some time by Albert Einstein, Erwin Schrödinger, and David Bohm. It was created so as to try to give a deterministic explanation to phenomena such as entanglement. This approach assumed that the Copenhagen school description was not complete, and postulated that some yet unknown variable was to be added to the theory so as to allow answering some of the questions it does not answer (the founding elements of which were discussed as the EPR paradox in 1935). Taking this idea seriously, John Bell derived in 1964 a prediction that would lead to different experimental results (Bell's inequalities) in the Copenhagen and the hidden-variable cases. The experiment was conducted first by Alain Aspect in the early 1980s, and the result excluded the simple hidden-variable approach (sophisticated hidden variables could still exist, but their properties would still be more disturbing than the problems they try to solve). This does not mean that the conceptual framework of quantum physics can be considered complete now, since some open questions still exist (the limit between the quantum and classical realms, what happens during a quantum measurement, what happens in a completely closed quantum system such as the universe itself, etc.).
Mathematical logic
In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively).
Logical axioms
These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense.
Examples
Propositional logic
In propositional logic, it is common to take as logical axioms all formulae of the following forms, where φ, ψ, and χ can be any formulae of the language and where the included primitive connectives are only "¬" for negation of the immediately following proposition and "→" for implication from antecedent to consequent propositions:
φ → (ψ → φ)
(φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
(¬φ → ¬ψ) → (ψ → φ)
Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if φ, ψ, and χ are propositional variables, then φ → (ψ → φ) and χ → (φ → χ) are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens.
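For illustration, a minimal Python sketch of this idea follows (not part of the standard treatment; the tuple encoding and the function names are arbitrary choices): formulae are nested tuples such as ('->', A, B) and ('not', A), instances of axiom schema 1 are recognised by pattern matching, and modus ponens is applied as the single inference rule.

def is_schema_1(formula):
    # Check whether `formula` has the shape A -> (B -> A).
    if not (isinstance(formula, tuple) and formula[0] == '->'):
        return False
    antecedent, consequent = formula[1], formula[2]
    return (isinstance(consequent, tuple) and consequent[0] == '->'
            and consequent[2] == antecedent)

def modus_ponens(premise, implication):
    # From A and A -> B, infer B; return None if the rule does not apply.
    if (isinstance(implication, tuple) and implication[0] == '->'
            and implication[1] == premise):
        return implication[2]
    return None

phi, psi = 'phi', 'psi'
instance = ('->', phi, ('->', psi, phi))   # phi -> (psi -> phi)
assert is_schema_1(instance)
assert modus_ponens(phi, instance) == ('->', psi, phi)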
Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed.
These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus.
First-order logic
Axiom of Equality. Let L be a first-order language. For each variable x, the formula x = x is universally valid.
This means that, for any variable symbol x, the formula x = x can be regarded as an axiom. Additionally, in this example, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by x = x (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactical usage of the symbol = has to be enforced, only regarding it as a string and only a string of symbols; and mathematical logic does indeed do that.
Another, more interesting example axiom scheme, is that which provides us with what is known as Universal Instantiation:
Axiom scheme for Universal Instantiation. Given a formula φ in a first-order language L, a variable x and a term t that is substitutable for x in φ, the formula ∀x φ → φ[t/x] is universally valid.
Where the symbol φ[t/x] stands for the formula φ with the term t substituted for x. (See Substitution of variables.) In informal terms, this example allows us to state that, if we know that a certain property P holds for every x and that t stands for a particular object in our structure, then we should be able to claim P(t). Again, we are claiming that the formula ∀x φ → φ[t/x] is valid, that is, we must be able to give a "proof" of this fact, or more properly speaking, a metaproof. These examples are metatheorems of our theory of mathematical logic since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization:
Axiom scheme for Existential Generalization. Given a formula φ in a first-order language L, a variable x and a term t that is substitutable for x in φ, the formula φ[t/x] → ∃x φ is universally valid.
Non-logical axioms
Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example, the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate.
Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that, in principle, every theory could be axiomatized in this way and formalized down to the bare language of logical formulas.
Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. For instance, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom, we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups.
Examples
This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms.
Basic theories, such as arithmetic, real analysis and complex analysis are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories are used, such as Morse–Kelley set theory or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe, but in fact most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic.
The study of topology in mathematics extends across point-set topology, algebraic topology, differential topology, and all the related paraphernalia, such as homology theory and homotopy theory. The development of abstract algebra brought with it group theory, rings, fields, and Galois theory.
This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry.
Arithmetic
The Peano axioms are the most widely used axiomatization of first-order arithmetic. They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem.
We have a language ⟨0, S⟩, where 0 is a constant symbol and S is a unary function symbol, and the following axioms:
∀x. ¬(S(x) = 0)
∀x. ∀y. (S(x) = S(y) → x = y)
(φ(0) ∧ ∀x (φ(x) → φ(S(x)))) → ∀x φ(x)
for any first-order formula φ with one free variable.
The standard structure is N = ⟨ℕ, 0, S⟩, where ℕ is the set of natural numbers, S is the successor function and 0 is naturally interpreted as the number 0.
Euclidean geometry
Probably the oldest, and most famous, list of axioms are the 4 + 1 Euclid's postulates of plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. One can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, and are known as Euclidean and hyperbolic geometries. If one also removes the second postulate ("a line can be extended indefinitely") then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees.
Real analysis
The objectives of the study are within the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires the use of second-order logic. The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis.
Role in mathematical logic
Deductive systems and completeness
A deductive system consists of a set of logical axioms, a set Σ of non-logical axioms, and a set of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas φ, Σ ⊨ φ implies Σ ⊢ φ;
that is, for any statement that is a logical consequence of Σ there actually exists a deduction of the statement from Σ. This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system.
Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement such that neither nor can be proved from the given set of axioms.
There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another.
Further discussion
Early mathematicians regarded axiomatic geometry as a model of physical space, implying that there could ultimately only be one such model. The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century, and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details, and modern algebra was born. In the modern view, axioms may be any set of formulas, as long as they are not known to be inconsistent.
See also
Axiomatic system
Dogma
First principle, axiom in science and philosophy
List of axioms
Model theory
Regulæ Juris
Theorem
Presupposition
Principle
Notes
References
Further reading
Mendelson, Elliot (1987). Introduction to mathematical logic. Belmont, California: Wadsworth & Brooks.
External links
Metamath axioms page
Concepts in logic | Axiom | [
"Mathematics"
] | 5,242 | [
"Mathematical logic",
"Mathematical axioms"
] |
988 | https://en.wikipedia.org/wiki/Applied%20ethics | Applied ethics is the practical aspect of moral considerations. It is ethics with respect to real-world actions and their moral considerations in private and public life, the professions, health, technology, law, and leadership. For example, bioethics is concerned with identifying the best approach to moral issues in the life sciences, such as euthanasia, the allocation of scarce health resources, or the use of human embryos in research. Environmental ethics is concerned with ecological issues such as the responsibility of government and corporations to clean up pollution. Business ethics includes the duties of whistleblowers to the public and to their employers.
History
Applied ethics has expanded the study of ethics beyond the realms of academic philosophical discourse. The field of applied ethics, as it appears today, emerged from debate surrounding rapid medical and technological advances in the early 1970s and is now established as a subdiscipline of moral philosophy. However, applied ethics is, by its very nature, a multi-professional subject because it requires specialist understanding of the potential ethical issues in fields like medicine, business or information technology. Nowadays, ethical codes of conduct exist in almost every profession.
An applied ethics approach to the examination of moral dilemmas can take many different forms but one of the most influential and most widely utilised approaches in bioethics and health care ethics is the four-principle approach developed by Tom Beauchamp and James Childress. The four-principle approach, commonly termed principlism, entails consideration and application of four prima facie ethical principles: autonomy, non-maleficence, beneficence, and justice.
Underpinning theory
Applied ethics is distinguished from normative ethics, which concerns standards for right and wrong behavior, and from meta-ethics, which concerns the nature of ethical properties, statements, attitudes, and judgments.
Whilst these three areas of ethics appear to be distinct, they are also interrelated. The use of an applied ethics approach often draws upon these normative ethical theories:
Consequentialist ethics, which hold that the rightness of acts depends only on their consequences. The paradigmatic consequentialist theory is utilitarianism, which classically holds that whether an act is morally right depends on whether it maximizes net aggregated psychological wellbeing. This theory's main developments came from Jeremy Bentham and John Stuart Mill who distinguished between act and rule utilitarianism. Notable later developments were made by Henry Sidgwick who introduced the significance of motive or intent, and R. M. Hare who introduced the significance of preference in utilitarian decision-making. Other forms of consequentialism include prioritarianism.
Deontological ethics, which hold that acts have an inherent rightness or wrongness regardless of their context or consequences. This approach is epitomized by Immanuel Kant's notion of the categorical imperative, which was the centre of Kant's ethical theory based on duty. Another key deontological theory is natural law, which was heavily developed by Thomas Aquinas and is an important part of the Catholic Church's teaching on morals. Threshold deontology holds that rules ought to govern up to a point despite adverse consequences; but when the consequences become so dire that they cross a stipulated threshold, consequentialism takes over.
Virtue ethics, derived from Aristotle's and Confucius' notions, which asserts that the right action will be that chosen by a suitably 'virtuous' agent.
Normative ethical theories can clash when trying to resolve real-world ethical dilemmas. One approach attempting to overcome the divide between consequentialism and deontology is case-based reasoning, also known as casuistry. Casuistry does not begin with theory, rather it starts with the immediate facts of a real and concrete case. While casuistry makes use of ethical theory, it does not view ethical theory as the most important feature of moral reasoning. Casuists, like Albert Jonsen and Stephen Toulmin (The Abuse of Casuistry, 1988), challenge the traditional paradigm of applied ethics. Instead of starting from theory and applying theory to a particular case, casuists start with the particular case itself and then ask what morally significant features (including both theory and practical considerations) ought to be considered for that particular case. In their observations of medical ethics committees, Jonsen and Toulmin note that a consensus on particularly problematic moral cases often emerges when participants focus on the facts of the case, rather than on ideology or theory. Thus, a Rabbi, a Catholic priest, and an agnostic might agree that, in this particular case, the best approach is to withhold extraordinary medical care, while disagreeing on the reasons that support their individual positions. By focusing on cases and not on theory, those engaged in moral debate increase the possibility of agreement.
Applied ethics was later distinguished from the nascent applied epistemology, which is also under the umbrella of applied philosophy. While the former was concerned with the practical application of moral considerations, the latter focuses on the application of epistemology in solving practical problems.
See also
References
Further reading
(monograph)
External links
Ethics | Applied ethics | [
"Biology"
] | 1,052 | [
"Behavior",
"Human behavior",
"Applied ethics"
] |
991 | https://en.wikipedia.org/wiki/Absolute%20value | In mathematics, the absolute value or modulus of a real number , is the non-negative value without regard to its sign. Namely, if is a positive number, and if is negative (in which case negating makes positive), and For example, the absolute value of 3 and the absolute value of −3 is The absolute value of a number may be thought of as its distance from zero.
Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.
Terminology and notation
In 1806, Jean-Robert Argand introduced the term module, meaning unit of measure in French, specifically for the complex absolute value, and it was borrowed into English in 1866 as the Latin equivalent modulus. The term absolute value has been used in this sense from at least 1806 in French and 1857 in English. The notation |x|, with a vertical bar on each side, was introduced by Karl Weierstrass in 1841. Other names for absolute value include numerical value and magnitude. In programming languages and computational software packages, the absolute value of x is generally represented by abs(x), or a similar expression.
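For example, in the Python programming language the built-in abs function covers several numeric types; the short snippet below is only an illustration.

from fractions import Fraction
print(abs(-3))               # 3
print(abs(-3.5))             # 3.5
print(abs(Fraction(-2, 3)))  # 2/3
print(abs(3 - 4j))           # 5.0, the complex modulus discussed below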
The vertical bar notation also appears in a number of other mathematical contexts: for example, when applied to a set, it denotes its cardinality; when applied to a matrix, it denotes its determinant. Vertical bars denote the absolute value only for algebraic objects for which the notion of an absolute value is defined, notably an element of a normed division algebra, for example a real number, a complex number, or a quaternion. A closely related but distinct notation is the use of vertical bars for either the Euclidean norm or the sup norm of a vector, although double vertical bars with subscripts (for example, ‖x‖2 and ‖x‖∞, respectively) are a more common and less ambiguous notation.
Definition and properties
Real numbers
For any real number x, the absolute value or modulus of x is denoted |x|, with a vertical bar on each side of the quantity, and is defined as |x| = x if x ≥ 0, and |x| = −x if x < 0.
The absolute value of x is thus always either a positive number or zero, but never negative. When x itself is negative (x < 0), then its absolute value is necessarily positive (|x| = −x > 0).
From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of two real numbers (their absolute difference) is the distance between them. The notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see "Distance" below).
Since the square root symbol represents the unique positive square root, when applied to a positive number, it follows that |x| = √(x²).
This is equivalent to the definition above, and may be used as an alternative definition of the absolute value of real numbers.
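The equivalence can be checked numerically; the small Python sketch below is illustrative only (the helper name abs_piecewise is an arbitrary choice).

import math
def abs_piecewise(x):
    # Piecewise definition: x for non-negative x, -x otherwise.
    return x if x >= 0 else -x
for x in (-7.5, -1.0, 0.0, 2.0, 42.0):
    assert abs_piecewise(x) == math.sqrt(x * x) == abs(x)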
The absolute value has the following four fundamental properties (a and b are real numbers), that are used for generalization of this notion to other domains:
|a| ≥ 0 (non-negativity)
|a| = 0 if and only if a = 0 (positive-definiteness)
|ab| = |a| |b| (multiplicativity)
|a + b| ≤ |a| + |b| (subadditivity, also known as the triangle inequality)
Non-negativity, positive definiteness, and multiplicativity are readily apparent from the definition. To see that subadditivity holds, first note that |a + b| = s(a + b), where s = ±1, with its sign chosen to make the result positive. Now, since −|x| ≤ x ≤ |x| for every real x, it follows that, whichever of ±1 is the value of s, one has s·x ≤ |x| for all real x. Consequently, |a + b| = s·(a + b) = s·a + s·b ≤ |a| + |b|, as desired.
Some additional useful properties are given below. These are either immediate consequences of the definition or implied by the four fundamental properties above.
Two other useful properties concerning inequalities are:
|a| ≤ b if and only if −b ≤ a ≤ b
|a| ≥ b if and only if a ≤ −b or a ≥ b
These relations may be used to solve inequalities involving absolute values. For example, |x − 3| ≤ 9 is equivalent to −9 ≤ x − 3 ≤ 9, which is equivalent to −6 ≤ x ≤ 12.
The absolute value, as "distance from zero", is used to define the absolute difference between arbitrary real numbers, the standard metric on the real numbers.
Complex numbers
Since the complex numbers are not ordered, the definition given at the top for the real absolute value cannot be directly applied to complex numbers. However, the geometric interpretation of the absolute value of a real number as its distance from 0 can be generalised. The absolute value of a complex number is defined by the Euclidean distance of its corresponding point in the complex plane from the origin. This can be computed using the Pythagorean theorem: for any complex number
z = a + bi,
where a and b are real numbers, the absolute value or modulus of z is denoted |z| and is defined by |z| = √(a² + b²),
the Pythagorean addition of a and b, where Re(z) = a and Im(z) = b denote the real and imaginary parts of z, respectively. When the imaginary part b is zero, this coincides with the definition of the absolute value of the real number a.
When a complex number z is expressed in its polar form as z = r·e^(iθ), with r ≥ 0 and θ real, its absolute value is |z| = r.
Since the product of any complex number z and its complex conjugate z̄ = a − bi, with the same absolute value, is always the non-negative real number a² + b², the absolute value of a complex number z is the square root of z·z̄, which is therefore called the absolute square or squared modulus of z: |z| = √(z·z̄).
This generalizes the alternative definition for reals: |x| = √(x·x).
The complex absolute value shares the four fundamental properties given above for the real absolute value. The identity |z²| = |z|² is a special case of multiplicativity that is often useful by itself.
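In Python, for instance, the same modulus is available through abs on complex numbers; the check below is purely illustrative.

import math
z = 3 - 4j
modulus = abs(z)                                  # sqrt(3**2 + (-4)**2) = 5.0
assert modulus == math.hypot(z.real, z.imag)
assert (z * z.conjugate()).real == modulus ** 2   # squared modulus equals z times its conjugate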
Absolute value function
The real absolute value function is continuous everywhere. It is differentiable everywhere except for x = 0. It is monotonically decreasing on the interval (−∞, 0] and monotonically increasing on the interval [0, +∞). Since a real number and its opposite have the same absolute value, it is an even function, and is hence not invertible. The real absolute value function is a piecewise linear, convex function.
For both real and complex numbers the absolute value function is idempotent (meaning that the absolute value of any absolute value is itself).
Relationship to the sign function
The absolute value function of a real number returns its value irrespective of its sign, whereas the sign (or signum) function returns a number's sign irrespective of its value. The following equations show the relationship between these two functions:
|x| = x · sgn(x),
or
sgn(x) · |x| = x,
and for x ≠ 0,
sgn(x) = x / |x|.
Relationship to the max and min functions
Let s and t be real numbers. Then
min(s, t) = (s + t − |s − t|) / 2
and
max(s, t) = (s + t + |s − t|) / 2.
Derivative
The real absolute value function has a derivative for every x ≠ 0, but is not differentiable at x = 0. Its derivative for x ≠ 0 is given by the step function: d|x|/dx = x / |x| = sgn(x), which equals −1 for x < 0 and +1 for x > 0.
The real absolute value function is an example of a continuous function that achieves a global minimum where the derivative does not exist.
The subdifferential of |x| at x = 0 is the interval [−1, 1].
The complex absolute value function is continuous everywhere but complex differentiable nowhere because it violates the Cauchy–Riemann equations.
The second derivative of |x| with respect to x is zero everywhere except zero, where it does not exist. As a generalised function, the second derivative may be taken as two times the Dirac delta function.
Antiderivative
The antiderivative (indefinite integral) of the real absolute value function is
∫ |x| dx = x|x| / 2 + C,
where C is an arbitrary constant of integration. This is not a complex antiderivative because complex antiderivatives can only exist for complex-differentiable (holomorphic) functions, which the complex absolute value function is not.
Derivatives of compositions
The following two formulae are special cases of the chain rule:
d/dx f(|x|) = (x / |x|) · f′(|x|)
if the absolute value is inside a function, and
d/dx |f(x)| = (f(x) / |f(x)|) · f′(x)
if another function is inside the absolute value. The derivative is always discontinuous at x = 0 in the first case and where f(x) = 0 in the second case.
Distance
The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin, along the real number line, for real numbers, or in the complex plane, for complex numbers, and more generally, the absolute value of the difference of two real or complex numbers is the distance between them.
The standard Euclidean distance between two points
a = (a1, a2, ..., an)
and
b = (b1, b2, ..., bn)
in Euclidean n-space is defined as:
d(a, b) = √((a1 − b1)² + (a2 − b2)² + ... + (an − bn)²).
This can be seen as a generalisation, since for a and b real, i.e. in a 1-space, according to the alternative definition of the absolute value,
|a − b| = √((a − b)²),
and for a = a1 + i·a2 and b = b1 + i·b2 complex numbers, i.e. in a 2-space,
|a − b| = |(a1 + i·a2) − (b1 + i·b2)| = |(a1 − b1) + i·(a2 − b2)| = √((a1 − b1)² + (a2 − b2)²).
The above shows that the "absolute value"-distance, for real and complex numbers, agrees with the standard Euclidean distance, which they inherit as a result of considering them as one and two-dimensional Euclidean spaces, respectively.
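As a small numerical illustration (the values are chosen arbitrarily), the modulus of the difference of two complex numbers reproduces the planar Euclidean distance:

import math
a, b = complex(1, 2), complex(4, 6)
d_abs = abs(a - b)                     # "absolute value" distance
d_euclid = math.sqrt((a.real - b.real) ** 2 + (a.imag - b.imag) ** 2)
assert math.isclose(d_abs, d_euclid)
print(d_abs)  # 5.0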
The properties of the absolute value of the difference of two real or complex numbers: non-negativity, identity of indiscernibles, symmetry and the triangle inequality given above, can be seen to motivate the more general notion of a distance function as follows:
A real-valued function d on a set X is called a metric (or a distance function) on X, if it satisfies the following four axioms:
d(a, b) ≥ 0 (Non-negativity)
d(a, b) = 0 if and only if a = b (Identity of indiscernibles)
d(a, b) = d(b, a) (Symmetry)
d(a, c) ≤ d(a, b) + d(b, c) (Triangle inequality)
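As an illustration (a sketch over a handful of sample values, not a proof), the absolute difference d(a, b) = |a − b| satisfies these axioms on the real line:

from itertools import product
def d(a, b):
    # Absolute difference, the standard metric on the real numbers.
    return abs(a - b)
points = [-2.0, 0.0, 1.5, 3.0]
for a, b, c in product(points, repeat=3):
    assert d(a, b) >= 0                      # non-negativity
    assert (d(a, b) == 0) == (a == b)        # identity of indiscernibles
    assert d(a, b) == d(b, a)                # symmetry
    assert d(a, c) <= d(a, b) + d(b, c)      # triangle inequality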
Generalizations
Ordered rings
The definition of absolute value given for real numbers above can be extended to any ordered ring. That is, if a is an element of an ordered ring R, then the absolute value of a, denoted by |a|, is defined to be:
|a| = a if a ≥ 0, and |a| = −a otherwise,
where −a is the additive inverse of a, 0 is the additive identity, and < and ≥ have the usual meaning with respect to the ordering in the ring.
Fields
The four fundamental properties of the absolute value for real numbers can be used to generalise the notion of absolute value to an arbitrary field, as follows.
A real-valued function v on a field F is called an absolute value (also a modulus, magnitude, value, or valuation) if it satisfies the following four axioms:
v(a) ≥ 0 (Non-negativity)
v(a) = 0 if and only if a = 0 (Positive-definiteness)
v(ab) = v(a) v(b) (Multiplicativity)
v(a + b) ≤ v(a) + v(b) (Subadditivity or the triangle inequality)
where 0 denotes the additive identity of F. It follows from positive-definiteness and multiplicativity that v(1) = 1, where 1 denotes the multiplicative identity of F. The real and complex absolute values defined above are examples of absolute values for an arbitrary field.
If v is an absolute value on F, then the function d on F × F, defined by d(a, b) = v(a − b), is a metric and the following are equivalent:
d satisfies the ultrametric inequality d(x, z) ≤ max(d(x, y), d(y, z)) for all x, y, z in F.
The set {v(n·1) : n a natural number} is bounded in R, where n·1 denotes 1 + 1 + ... + 1 (n terms).
v(n·1) ≤ 1 for every natural number n.
v(a) ≤ 1 implies v(1 + a) ≤ 1 for all a in F.
v(a + b) ≤ max(v(a), v(b)) for all a, b in F.
An absolute value which satisfies any (hence all) of the above conditions is said to be non-Archimedean, otherwise it is said to be Archimedean.
Vector spaces
Again the fundamental properties of the absolute value for real numbers can be used, with a slight modification, to generalise the notion to an arbitrary vector space.
A real-valued function on a vector space V over a field F, represented as ‖·‖, is called an absolute value, but more usually a norm, if it satisfies the following axioms:
For all a in F, and v, u in V,
‖v‖ ≥ 0 (Non-negativity)
‖v‖ = 0 if and only if v = 0 (Positive-definiteness)
‖a v‖ = |a| ‖v‖ (Absolute homogeneity or positive scalability)
‖v + u‖ ≤ ‖v‖ + ‖u‖ (Subadditivity or the triangle inequality)
The norm of a vector is also called its length or magnitude.
In the case of Euclidean space Rn, the function defined by
‖(x1, x2, ..., xn)‖ = √(x1² + x2² + ... + xn²)
is a norm called the Euclidean norm. When the real numbers R are considered as the one-dimensional vector space R1, the absolute value is a norm, and is the p-norm (see Lp space) for any p. In fact the absolute value is the "only" norm on R1, in the sense that, for every norm ‖·‖ on R1, ‖x‖ = ‖1‖ · |x|.
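A small illustration of the Euclidean norm using only the Python standard library (the helper name is arbitrary):

import math
def euclidean_norm(vector):
    # Square root of the sum of squared components.
    return math.sqrt(sum(x * x for x in vector))
print(euclidean_norm((3.0, 4.0, 12.0)))  # 13.0
print(euclidean_norm((-5.0,)))           # 5.0; in one dimension this is just abs()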
The complex absolute value is a special case of the norm in an inner product space, which is identical to the Euclidean norm when the complex plane is identified as the Euclidean plane .
Composition algebras
Every composition algebra A has an involution x → x* called its conjugation. The product in A of an element x and its conjugate x* is written N(x) = x x* and called the norm of x.
The real numbers , complex numbers , and quaternions are all composition algebras with norms given by definite quadratic forms. The absolute value in these division algebras is given by the square root of the composition algebra norm.
In general the norm of a composition algebra may be a quadratic form that is not definite and has null vectors. However, as in the case of division algebras, when an element x has a non-zero norm, then x has a multiplicative inverse given by x*/N(x).
See also
Least absolute values
Notes
References
Bartle; Sherbert; Introduction to real analysis (4th ed.), John Wiley & Sons, 2011 .
Nahin, Paul J.; An Imaginary Tale; Princeton University Press; (hardcover, 1998). .
Mac Lane, Saunders, Garrett Birkhoff, Algebra, American Mathematical Soc., 1999. .
Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. .
O'Connor, J.J. and Robertson, E.F.; "Jean Robert Argand".
Schechter, Eric; Handbook of Analysis and Its Foundations, pp. 259–263, "Absolute Values", Academic Press (1997) .
External links
Special functions
Real numbers
Norms (mathematics) | Absolute value | [
"Mathematics"
] | 2,727 | [
"Mathematical analysis",
"Special functions",
"Real numbers",
"Mathematical objects",
"Combinatorics",
"Norms (mathematics)",
"Numbers"
] |
993 | https://en.wikipedia.org/wiki/Analog%20signal | An analog signal (American English) or analogue signal (British and Commonwealth English) is any continuous-time signal representing some other quantity, i.e., analogous to another quantity. For example, in an analog audio signal, the instantaneous signal voltage varies continuously with the pressure of the sound waves.
In contrast, a digital signal represents the original time-varying quantity as a sampled sequence of quantized values. Digital sampling imposes some bandwidth and dynamic range constraints on the representation and adds quantization noise.
The term analog signal usually refers to electrical signals; however, mechanical, pneumatic, hydraulic, and other systems may also convey or be considered analog signals.
Representation
An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information.
Any information may be conveyed by an analog signal; such a signal may be a measured response to changes in a physical variable, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, sound striking the diaphragm of a microphone induces corresponding fluctuations in the current produced by a coil in an electromagnetic microphone or the voltage produced by a condenser microphone. The voltage or the current is said to be an analog of the sound.
Noise
An analog signal is subject to electronic noise and distortion introduced by communication channels, recording and signal processing operations, which can progressively degrade the signal-to-noise ratio (SNR). As the signal is transmitted, copied, or processed, the unavoidable noise introduced in the signal path will accumulate as a generation loss, progressively and irreversibly degrading the SNR, until in extreme cases, the signal can be overwhelmed. Noise can show up as hiss and intermodulation distortion in audio signals, or snow in video signals. Generation loss is irreversible as there is no reliable method to distinguish the noise from the signal.
Converting an analog signal to digital form introduces a low-level quantization noise into the signal due to finite resolution of digital systems. Once in digital form, the signal can be transmitted, stored, and processed without introducing additional noise or distortion using error detection and correction.
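As an illustration of that quantization noise (a sketch, not part of the original description; the sample count and test signal are arbitrary choices), uniformly quantizing a full-scale sine wave to n bits yields a signal-to-noise ratio of roughly 6.02·n + 1.76 dB:

import math
def quantization_snr_db(n_bits, num_samples=100_000):
    # Quantize a full-scale sine wave to n_bits and measure the resulting SNR.
    step = 2.0 / (2 ** n_bits)                  # full scale spans [-1, 1)
    signal_power = noise_power = 0.0
    for k in range(num_samples):
        x = math.sin(2 * math.pi * k / 997.0)
        q = round(x / step) * step              # uniform quantizer
        signal_power += x * x
        noise_power += (x - q) ** 2
    return 10 * math.log10(signal_power / noise_power)
print(quantization_snr_db(8))   # about 50 dB for 8 bits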
Noise accumulation in analog systems can be minimized by electromagnetic shielding, balanced lines, low-noise amplifiers and high-quality electrical components.
See also
Amplifier
Analog computer
Analog device
Analog signal processing
Magnetic tape
Preamplifier
References
Further reading
Analog circuits
Electronic design
Television terminology
Video signal | Analog signal | [
"Engineering"
] | 549 | [
"Electronic design",
"Analog circuits",
"Electronic engineering",
"Design"
] |
1,014 | https://en.wikipedia.org/wiki/Alcohol%20%28chemistry%29 | In chemistry, an alcohol (), is a type of organic compound that carries at least one hydroxyl () functional group bound to a saturated carbon atom. Alcohols range from the simple, like methanol and ethanol, to complex, like sugars and cholesterol. The presence of an OH group strongly modifies the properties of hydrocarbons, conferring hydrophilic (water-loving) properties. The OH group provides a site at which many reactions can occur.
History
The flammable nature of the exhalations of wine was already known to ancient natural philosophers such as Aristotle (384–322 BCE), Theophrastus (–287 BCE), and Pliny the Elder (23/24–79 CE). However, this did not immediately lead to the isolation of alcohol, even despite the development of more advanced distillation techniques in second- and third-century Roman Egypt. An important recognition, first found in one of the writings attributed to Jābir ibn Ḥayyān (ninth century CE), was that by adding salt to boiling wine, which increases the wine's relative volatility, the flammability of the resulting vapors may be enhanced. The distillation of wine is attested in Arabic works attributed to al-Kindī (–873 CE) and to al-Fārābī (–950), and in the 28th book of al-Zahrāwī's (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). In the twelfth century, recipes for the production of aqua ardens ("burning water", i.e., alcohol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century, it had become a widely known substance among Western European chemists.
The works of Taddeo Alderotti (1223–1296) describe a method for concentrating alcohol involving repeated fractional distillation through a water-cooled still, by which an alcohol purity of 90% could be obtained. The medicinal properties of ethanol were studied by Arnald of Villanova (1240–1311 CE) and John of Rupescissa (–1366), the latter of whom regarded it as a life-preserving substance able to prevent all diseases (the aqua vitae or "water of life", also called by John the quintessence of wine).
Nomenclature
Etymology
The word "alcohol" derives from the Arabic kohl (), a powder used as an eyeliner. The first part of the word () is the Arabic definite article, equivalent to the in English. The second part of the word () has several antecedents in Semitic languages, ultimately deriving from the Akkadian (), meaning stibnite or antimony.
Like its antecedents in Arabic and older languages, the term alcohol was originally used for the very fine powder produced by the sublimation of the natural mineral stibnite to form antimony trisulfide (Sb2S3). It was considered to be the essence or "spirit" of this mineral. It was used as an antiseptic, eyeliner, and cosmetic. Later the meaning of alcohol was extended to distilled substances in general, and then narrowed again to ethanol, when "spirits" was a synonym for hard liquor.
Paracelsus and Libavius both used the term alcohol to denote a fine powder, the latter speaking of an alcohol derived from antimony. At the same time Paracelsus uses the word for a volatile liquid; alcool or alcool vini occurs often in his writings.
Bartholomew Traheron, in his 1543 translation of John of Vigo, introduces the word as a term used by "barbarous" authors for "fine powder." Vigo wrote: "the barbarous auctours use alcohol, or (as I fynde it sometymes wryten) alcofoll, for moost fine poudre."
The 1657 Lexicon Chymicum, by William Johnson glosses the word as "antimonium sive stibium." By extension, the word came to refer to any fluid obtained by distillation, including "alcohol of wine," the distilled essence of wine. Libavius in Alchymia (1594) refers to "". Johnson (1657) glosses alcohol vini as "." The word's meaning became restricted to "spirit of wine" (the chemical known today as ethanol) in the 18th century and was extended to the class of substances so-called as "alcohols" in modern chemistry after 1850.
The term ethanol was invented in 1892, blending "ethane" with the "-ol" ending of "alcohol", which was generalized as a libfix.
The term alcohol originally referred to the primary alcohol ethanol (ethyl alcohol), which is used as a drug and is the main alcohol present in alcoholic drinks.
The suffix -ol appears in the International Union of Pure and Applied Chemistry (IUPAC) chemical name of all substances where the hydroxyl group is the functional group with the highest priority. When a higher priority group is present in the compound, the prefix hydroxy- is used in its IUPAC name. The suffix -ol in non-IUPAC names (such as paracetamol or cholesterol) also typically indicates that the substance is an alcohol. However, some compounds that contain hydroxyl functional groups have trivial names that do not include the suffix -ol or the prefix hydroxy-, e.g. the sugars glucose and sucrose.
Systematic names
IUPAC nomenclature is used in scientific publications, and in writings where precise identification of the substance is important. In naming simple alcohols, the name of the alkane chain loses the terminal e and adds the suffix -ol, e.g., as in "ethanol" from the alkane chain name "ethane". When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the -ol: propan-1-ol for CH3CH2CH2OH, propan-2-ol for CH3CH(OH)CH3. If a higher priority group is present (such as an aldehyde, ketone, or carboxylic acid), then the prefix hydroxy- is used, e.g., as in 1-hydroxy-2-propanone (CH3C(O)CH2OH). Compounds having more than one hydroxy group are called polyols. They are named using suffixes -diol, -triol, etc., following a list of the position numbers of the hydroxyl groups, as in propane-1,2-diol for CH3CH(OH)CH2OH (propylene glycol).
In cases where the hydroxy group is bonded to an sp2 carbon on an aromatic ring, the molecule is classified separately as a phenol and is named using the IUPAC rules for naming phenols. Phenols have distinct properties and are not classified as alcohols.
Common names
In other less formal contexts, an alcohol is often called with the name of the corresponding alkyl group followed by the word "alcohol", e.g., methyl alcohol, ethyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol, depending on whether the hydroxyl group is bonded to the end or middle carbon on the straight propane chain. As described under systematic naming, if another group on the molecule takes priority, the alcohol moiety is often indicated using the "hydroxy-" prefix.
In archaic nomenclature, alcohols can be named as derivatives of methanol using "-carbinol" as the ending. For instance, (CH3)3COH can be named trimethylcarbinol.
Primary, secondary, and tertiary
Alcohols are then classified into primary, secondary (sec-, s-), and tertiary (tert-, t-), based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl functional group. The respective numeric shorthands 1°, 2°, and 3° are sometimes used in informal settings. The primary alcohols have general formulas RCH2OH. The simplest primary alcohol is methanol (CH3OH), for which R = H, and the next is ethanol, for which R = CH3, the methyl group. Secondary alcohols are those of the form RR'CHOH, the simplest of which is 2-propanol (CH3CH(OH)CH3). For the tertiary alcohols, the general form is RR'R"COH. The simplest example is tert-butanol (2-methylpropan-2-ol), for which each of R, R', and R" is CH3. In these shorthands, R, R', and R" represent substituents, alkyl or other attached, generally organic groups.
Examples
Applications
Alcohols have a long history of myriad uses. For simple mono-alcohols, which is the focus on this article, the following are most important industrial alcohols:
methanol, mainly for the production of formaldehyde and as a fuel additive
ethanol, mainly for alcoholic beverages, fuel additive, solvent, and to sterilize hospital instruments.
1-propanol, 1-butanol, and isobutyl alcohol for use as a solvent and precursor to solvents
C6–C11 alcohols used for plasticizers, e.g. in polyvinylchloride
fatty alcohol (C12–C18), precursors to detergents
Methanol is the most common industrial alcohol, with about 12 million tons/y produced in 1980. The combined capacity of the other alcohols is about the same, distributed roughly equally.
Toxicity
With respect to acute toxicity, simple alcohols have low acute toxicities. Doses of several milliliters are tolerated. For pentanols, hexanols, octanols, and longer alcohols, LD50 values range from 2–5 g/kg (rats, oral). Ethanol is less acutely toxic. All alcohols are mild skin irritants.
Methanol and ethylene glycol are more toxic than other simple alcohols. Their metabolism is affected by the presence of ethanol, which has a higher affinity for liver alcohol dehydrogenase. In this way, methanol will be excreted intact in urine.
Physical properties
In general, the hydroxyl group makes alcohols polar. Those groups can form hydrogen bonds to one another and to most other compounds. Owing to the presence of the polar OH group, alcohols are more water-soluble than simple hydrocarbons. Methanol, ethanol, and propanol are miscible in water. 1-Butanol, with a four-carbon chain, is moderately soluble.
Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. The boiling point of the alcohol ethanol is 78.29 °C, compared to 69 °C for the hydrocarbon hexane, and 34.6 °C for diethyl ether.
Occurrence in nature
Alcohols occur widely in nature, as derivatives of glucose such as cellulose and hemicellulose, and in phenols and their derivatives such as lignin. Starting from biomass, 180 billion tons/y of complex carbohydrates (sugar polymers) are produced commercially (as of 2014). Many other alcohols are pervasive in organisms, as manifested in other sugars such as fructose and sucrose, in polyols such as glycerol, and in some amino acids such as serine. Simple alcohols like methanol, ethanol, and propanol occur in modest quantities in nature, and are industrially synthesized in large quantities for use as chemical precursors, fuels, and solvents.
Production
Hydroxylation
Many alcohols are produced by hydroxylation, i.e., the installation of a hydroxy group using oxygen or a related oxidant. Hydroxylation is the means by which the body processes many poisons, converting lipophilic compounds into hydrophilic derivatives that are more readily excreted. Enzymes called hydroxylases and oxidases facilitate these conversions.
Many industrial alcohols, such as cyclohexanol for the production of nylon, are produced by hydroxylation.
Ziegler and oxo processes
In the Ziegler process, linear alcohols are produced from ethylene and triethylaluminium followed by oxidation and hydrolysis. An idealized synthesis of 1-octanol is shown:
Al(C2H5)3 + 9 C2H4 -> Al(C8H17)3
Al(C8H17)3 + 3O + 3 H2O -> 3 HOC8H17 + Al(OH)3
The process generates a range of alcohols that are separated by distillation.
Many higher alcohols are produced by hydroformylation of alkenes followed by hydrogenation. When applied to a terminal alkene, as is common, one typically obtains a linear alcohol:
RCH=CH2 + H2 + CO -> RCH2CH2CHO
RCH2CH2CHO + 3 H2 -> RCH2CH2CH2OH
Such processes give fatty alcohols, which are useful for detergents.
Hydration reactions
Some low molecular weight alcohols of industrial importance are produced by the addition of water to alkenes. Ethanol, isopropanol, 2-butanol, and tert-butanol are produced by this general method. Two implementations are employed, the direct and indirect methods. The direct method avoids the formation of stable intermediates, typically using acid catalysts. In the indirect method, the alkene is converted to the sulfate ester, which is subsequently hydrolyzed. The direct hydration uses ethylene (ethylene hydration) or other alkenes from cracking of fractions of distilled crude oil.
Hydration is also used industrially to produce the diol ethylene glycol from ethylene oxide.
Fermentation
Ethanol is obtained by fermentation of glucose (which is often obtained from starch) in the presence of yeast. Carbon dioxide is cogenerated. Like ethanol, butanol can be produced by fermentation processes. Saccharomyces yeast are known to produce these higher alcohols at temperatures above . The bacterium Clostridium acetobutylicum can feed on cellulose (also an alcohol) to produce butanol on an industrial scale.
Substitution
Primary alkyl halides react with aqueous NaOH or KOH to give alcohols in nucleophilic aliphatic substitution. Secondary and especially tertiary alkyl halides will give the elimination (alkene) product instead. Grignard reagents react with carbonyl groups to give secondary and tertiary alcohols. Related reactions are the Barbier reaction and the Nozaki-Hiyama reaction.
Reduction
Aldehydes or ketones are reduced with sodium borohydride or lithium aluminium hydride (after an acidic workup). Another reduction using aluminium isopropoxide is the Meerwein-Ponndorf-Verley reduction. Noyori asymmetric hydrogenation is the asymmetric reduction of β-keto-esters.
Hydrolysis
Alkenes engage in an acid catalyzed hydration reaction using concentrated sulfuric acid as a catalyst that gives usually secondary or tertiary alcohols. Formation of a secondary alcohol via alkene reduction and hydration is shown:
The hydroboration-oxidation and oxymercuration-reduction of alkenes are more reliable in organic synthesis. Alkenes react with N-bromosuccinimide and water in halohydrin formation reaction. Amines can be converted to diazonium salts, which are then hydrolyzed.
Reactions
Deprotonation
With aqueous pKa values of around 16–19, alcohols are, in general, slightly weaker acids than water. With strong bases such as sodium hydride or sodium they form salts called alkoxides, with the general formula ROM (where R is an alkyl group and M is a metal).
2 R-OH + 2 NaH -> 2 R-O-Na + 2 H2
2 R-OH + 2 Na -> 2 R-O-Na + H2
The acidity of alcohols is strongly affected by solvation. In the gas phase, alcohols are more acidic than in water. In DMSO, alcohols (and water) have a pKa of around 29–32. As a consequence, alkoxides (and hydroxide) are powerful bases and nucleophiles (e.g., for the Williamson ether synthesis) in this solvent. In particular, or in DMSO can be used to generate significant equilibrium concentrations of acetylide ions through the deprotonation of alkynes (see Favorskii reaction).
Nucleophilic substitution
Tertiary alcohols react with hydrochloric acid to produce tertiary alkyl chloride. Primary and secondary alcohols are converted to the corresponding chlorides using thionyl chloride and various phosphorus chloride reagents.
Primary and secondary alcohols, likewise, convert to alkyl bromides using phosphorus tribromide, for example:
3 R-OH + PBr3 -> 3 RBr + H3PO3
In the Barton-McCombie deoxygenation an alcohol is deoxygenated to an alkane with tributyltin hydride or a trimethylborane-water complex in a radical substitution reaction.
Dehydration
Meanwhile, the oxygen atom has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid. For example, with methanol:
Upon treatment with strong acids, alcohols undergo the E1 elimination reaction to produce alkenes. The reaction, in general, obeys Zaitsev's Rule, which states that the most stable (usually the most substituted) alkene is formed. Tertiary alcohols are eliminated easily at just above room temperature, but primary alcohols require a higher temperature.
This is a diagram of acid catalyzed dehydration of ethanol to produce ethylene:
A more controlled elimination reaction requires the formation of the xanthate ester.
Protonolysis
Tertiary alcohols react with strong acids to generate carbocations. The reaction is related to their dehydration, e.g. isobutylene from tert-butyl alcohol. A special kind of dehydration reaction involves triphenylmethanol and especially its amine-substituted derivatives. When treated with acid, these alcohols lose water to give stable carbocations, which are commercial dyes.
Esterification
Alcohol and carboxylic acids react in the so-called Fischer esterification. The reaction usually requires a catalyst, such as concentrated sulfuric acid:
R-OH + R'-CO2H -> R'-CO2R + H2O
Other types of ester are prepared in a similar manner−for example, tosyl (tosylate) esters are made by reaction of the alcohol with 4-toluenesulfonyl chloride in pyridine.
Oxidation
Primary alcohols (RCH2OH) can be oxidized either to aldehydes (RCHO) or to carboxylic acids (RCO2H). The oxidation of secondary alcohols (RR'CHOH) normally terminates at the ketone (RR'C=O) stage. Tertiary alcohols (RR'R"COH) are resistant to oxidation.
The direct oxidation of primary alcohols to carboxylic acids normally proceeds via the corresponding aldehyde, which is transformed via an aldehyde hydrate (RCH(OH)2) by reaction with water before it can be further oxidized to the carboxylic acid.
Reagents useful for the transformation of primary alcohols to aldehydes are normally also suitable for the oxidation of secondary alcohols to ketones. These include Collins reagent and Dess-Martin periodinane. The direct oxidation of primary alcohols to carboxylic acids can be carried out using potassium permanganate or the Jones reagent.
See also
Beer chemistry
Enol
Ethanol fuel
Fatty alcohol
Index of alcohol-related articles
List of alcohols
Lucas test
Polyol
Rubbing alcohol
Sugar alcohol
Transesterification
Wine chemistry
Notes
Citations
General references
Antiseptics
Functional groups
Organic chemistry
Addiction | Alcohol (chemistry) | [
"Chemistry"
] | 4,295 | [
"Functional groups",
"nan"
] |
1,021 | https://en.wikipedia.org/wiki/Aspect%20ratio | The aspect ratio of a geometric shape is the ratio of its sizes in different dimensions. For example, the aspect ratio of a rectangle is the ratio of its longer side to its shorter side—the ratio of width to height, when the rectangle is oriented as a "landscape".
The aspect ratio is most often expressed as two integer numbers separated by a colon (x:y), less commonly as a simple or decimal fraction. The values x and y do not represent actual widths and heights but, rather, the proportion between width and height. As an example, 8:5, 16:10, 1.6:1, and 1.6 are all ways of representing the same aspect ratio.
In objects of more than two dimensions, such as hyperrectangles, the aspect ratio can still be defined as the ratio of the longest side to the shortest side.
Applications and uses
The term is most commonly used with reference to:
Graphic / image
Image aspect ratio
Display aspect ratio
Paper size
Standard photographic print sizes
Motion picture film formats
Standard ad size
Pixel aspect ratio
Photolithography: the aspect ratio of an etched, or deposited structure is the ratio of the height of its vertical side wall to its width.
HARMST High Aspect Ratios allow the construction of tall microstructures without slant
Tire code
Tire sizing
Turbocharger impeller sizing
Wing aspect ratio of an aircraft or bird
Astigmatism of an optical lens
Nanorod dimensions
Shape factor (image analysis and microscopy)
Finite Element Analysis
Flag design; see List of aspect ratios of national flags
Aspect ratios of simple shapes
Rectangles
For a rectangle, the aspect ratio denotes the ratio of the width to the height of the rectangle. A square has the smallest possible aspect ratio of 1:1.
Examples:
4:3 = 1.333...: Some (not all) 20th century computer monitors (VGA, XGA, etc.), standard-definition television
√2:1 = 1.414...: international paper sizes (ISO 216)
3:2 = 1.5: 35mm still camera film, iPhone (until iPhone 5) displays
16:10 = 1.6: commonly used widescreen computer displays (WXGA)
Φ:1 = 1.618...: golden ratio, close to 16:10
5:3 = 1.666...: super 16 mm, a standard film gauge in many European countries
16:9 = 1.777...: widescreen TV and most laptops
2:1 = 2: dominoes
64:27 = 2.370...: ultra-widescreen, 21:9
32:9 = 3.555...: super ultra-widescreen
Ellipses
For an ellipse, the aspect ratio denotes the ratio of the major axis to the minor axis. An ellipse with an aspect ratio of 1:1 is a circle.
Aspect ratios of general shapes
In geometry, there are several alternative definitions to aspect ratios of general compact sets in a d-dimensional space:
The diameter-width aspect ratio (DWAR) of a compact set is the ratio of its diameter to its width. A circle has the minimal DWAR which is 1. A square has a DWAR of √2.
The cube-volume aspect ratio (CVAR) of a compact set is the d-th root of the ratio of the d-volume of the smallest enclosing axes-parallel d-cube, to the set's own d-volume. A square has the minimal CVAR which is 1. A circle has a CVAR of 2/√π. An axis-parallel rectangle of width W and height H, where W>H, has a CVAR of √(W/H); a short computational sketch for the rectangle case follows below.
If the dimension d is fixed, then all reasonable definitions of aspect ratio are equivalent to within constant factors.
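A minimal Python sketch of the two definitions above for the special case of an axis-parallel rectangle (the function names are illustrative assumptions, not established terminology):

from math import hypot, sqrt

def rectangle_dwar(w: float, h: float) -> float:
    # Diameter-width aspect ratio: the diagonal (largest distance between two
    # points of the rectangle) divided by the shorter side (its width).
    return hypot(w, h) / min(w, h)

def rectangle_cvar(w: float, h: float) -> float:
    # Cube-volume aspect ratio for d = 2: square root of the area of the
    # smallest enclosing axis-parallel square over the rectangle's own area.
    side = max(w, h)
    return sqrt((side * side) / (w * h))

print(rectangle_dwar(1, 1))   # sqrt(2), about 1.414, for a square
print(rectangle_cvar(4, 1))   # sqrt(4/1) = 2.0 for a 4 x 1 rectangle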
Notations
Aspect ratios are mathematically expressed as x:y (pronounced "x-to-y").
Cinematographic aspect ratios are usually denoted as a (rounded) decimal multiple of width vs unit height, while photographic and videographic aspect ratios are usually defined and denoted by whole number ratios of width to height. In digital images there is a subtle distinction between the display aspect ratio (the image as displayed) and the storage aspect ratio (the ratio of pixel dimensions); see Distinctions.
See also
Axial ratio
Ratio
Equidimensional ratios in 3D
List of film formats
Squeeze mapping
Scale (ratio)
Vertical orientation
References
Ratios | Aspect ratio | [
"Mathematics"
] | 884 | [
"Arithmetic",
"Ratios"
] |
1,130 | https://en.wikipedia.org/wiki/Avicenna | Ibn Sina (c. 980 – 22 June 1037), commonly known in the West as Avicenna, was a preeminent philosopher and physician of the Muslim world, flourishing during the Islamic Golden Age and serving in the courts of various Iranian rulers. He is often described as the father of early modern medicine. His philosophy was of the Peripatetic school derived from Aristotelianism.
His most famous works are The Book of Healing, a philosophical and scientific encyclopedia, and The Canon of Medicine, a medical encyclopedia which became a standard medical text at many medieval European universities and remained in use as late as 1650. Besides philosophy and medicine, Avicenna's corpus includes writings on astronomy, alchemy, geography and geology, psychology, Islamic theology, logic, mathematics, physics, and works of poetry.
Avicenna wrote most of his philosophical and scientific works in Arabic, but also wrote several key works in Persian, while his poetic works were written in both languages. Of the 450 works he is believed to have written, around 240 have survived, including 150 on philosophy and 40 on medicine.
Name
Avicenna is a Latin corruption of the Arabic patronym Ibn Sīnā, meaning "Son of Sina". However, Avicenna was not the son but the great-great-grandson of a man named Sina. His formal Arabic name was Abū ʿAlī al-Ḥusayn bin ʿAbdullāh ibn al-Ḥasan bin ʿAlī bin Sīnā al-Balkhi al-Bukhari.
Circumstances
Avicenna created an extensive corpus of works during what is commonly known as the Islamic Golden Age, in which the translations of Byzantine, Greco-Roman, Persian, and Indian texts were studied extensively. Greco-Roman (Middle Platonic, Neoplatonic, and Aristotelian) texts translated by the Kindi school were commented, redacted and developed substantially by Islamic intellectuals, who also built upon Persian and Indian mathematical systems, astronomy, algebra, trigonometry and medicine.
The Samanid Empire in the eastern part of Persia, Greater Khorasan, and Central Asia, as well as the Buyid dynasty in the western part of Persia and Iraq, provided a thriving atmosphere for scholarly and cultural development. Under the Samanids, Bukhara rivaled Baghdad for cultural capital of the Muslim world. There, Avicenna had access to the great libraries of Balkh, Khwarazm, Gorgan, Rey, Isfahan and Hamadan.
Various texts (such as the 'Ahd with Bahmanyar) show that Avicenna debated philosophical points with the greatest scholars of the time. Nizami Aruzi described how before ibn Sina left Khwarazm, he had met al-Biruni (a scientist and astronomer), Abu Nasr Mansur (a renowned mathematician), Abu Sahl 'Isa ibn Yahya al-Masihi (a respected philosopher) and ibn al-Khammar (a great physician). The study of the Quran and the Hadith also thrived, and Islamic philosophy, fiqh "jurisprudence", and kalam "speculative theology" were all further developed by ibn Sina and his opponents at this time.
Biography
Early life and education
Avicenna was born around 980 in the village of Afshana in Transoxiana to a Persian family. The village was near the Samanid capital of Bukhara, which was his mother's hometown. His father Abd Allah was a native of the city of Balkh in Bactria. An official of the Samanid bureaucracy, he had served as the governor of a village of the royal estate of Harmaytan near Bukhara during the reign of Nuh II. Avicenna also had a younger brother. A few years later, the family settled in Bukhara, a center of learning, which attracted many scholars. It was there that Avicenna received his education, which early on was seemingly administered by his father.
Although both Avicenna's father and brother had converted to Isma'ilism, he himself did not follow the faith. He was instead a Hanafi Sunni, the same school followed by the Samanids.
Avicenna was first schooled in the Quran and literature, and by the age of 10, he had memorized the entire Quran. He was later sent by his father to an Indian greengrocer, who taught him arithmetic. Afterwards, he was schooled in fiqh by the Hanafi jurist Ismail al-Zahid. Sometime later, his father invited the physician and philosopher al-Natili to their house to educate ibn Sina. Together, they studied the Isagoge of Porphyry (died 305) and possibly the Categories of Aristotle (died 322 BCE) as well. After Avicenna had read the Almagest of Ptolemy (died 170) and Euclid's Elements, al-Natili told him to continue his research independently. By the time Avicenna was eighteen, he was well-educated in Greek sciences. Although ibn Sina only mentions al-Natili as his teacher in his autobiography, he most likely had other teachers as well, such as the physicians Qumri and Abu Sahl 'Isa ibn Yahya al-Masihi.
Career
In Bukhara and Gurganj
At the age of seventeen, Avicenna was made a physician of Nuh II. By the time Avicenna was at least 21 years old, his father died. He was subsequently given an administrative post, possibly succeeding his father as the governor of Harmaytan. Avicenna later moved to Gurganj, the capital of Khwarazm, which he reports that he did due to "necessity". The date he went to the place is uncertain, as he reports that he served the Khwarazmshah, the ruler of Khwarazm, the Ma'munid ruler Abu al-Hasan Ali. The latter ruled from 997 to 1009, which indicates that Avicenna moved sometime during that period.
He may have moved in 999, the year in which the Samanid Empire fell after the Kara-Khanid Khanate captured Bukhara and imprisoned the Samanid emir Abd al-Malik II. Due to his high position and strong connection with the Samanids, ibn Sina may have found himself in an unfavorable position after the fall of his suzerain.
It was through the minister of Gurganj, Abu'l-Husayn as-Sahi, a patron of Greek sciences, that Avicenna entered into the service of Abu al-Hasan Ali. Under the Ma'munids, Gurganj became a centre of learning, attracting many prominent figures, such as ibn Sina and his former teacher Abu Sahl al-Masihi, the mathematician Abu Nasr Mansur, the physician ibn al-Khammar, and the philologist al-Tha'alibi.
In Gorgan
Avicenna later moved due to "necessity" once more (in 1012), this time to the west. There he travelled through the Khurasani cities of Nasa, Abivard, Tus, Samangan and Jajarm. He was planning to visit the ruler of the city of Gorgan, the Ziyarid Qabus, a cultivated patron of writing, whose court attracted many distinguished poets and scholars. However, when Avicenna eventually arrived, he discovered that the ruler had been dead since the winter of 1013. Avicenna then left Gorgan for Dihistan, but returned after becoming ill. There he met Abu 'Ubayd al-Juzjani (died 1070) who became his pupil and companion. Avicenna stayed briefly in Gorgan, reportedly serving Qabus's son and successor Manuchihr and residing in the house of a patron.
In Ray and Hamadan
Avicenna later went to the city of Ray, where he entered into the service of the Buyid amir Majd al-Dawla and his mother Sayyida Shirin, the de facto ruler of the realm. There he served as the physician at the court, treating Majd al-Dawla, who was suffering from melancholia. Avicenna reportedly later served as the "business manager" of Sayyida Shirin in Qazvin and Hamadan, though details regarding this tenure are unclear. During this period, Avicenna finished writing The Canon of Medicine and started writing The Book of Healing.
In 1015, during Avicenna's stay in Hamadan, he participated in a public debate, as was customary for newly arrived scholars in western Iran at that time. The purpose of the debate was to examine one's reputation against a prominent resident. The person whom Avicenna debated against was Abu'l-Qasim al-Kirmani, a member of the school of philosophers of Baghdad. The debate became heated, resulting in ibn Sina accusing Abu'l-Qasim of lack of basic knowledge in logic, while Abu'l-Qasim accused ibn Sina of impoliteness.
After the debate, Avicenna sent a letter to the Baghdad Peripatetics, asking if Abu'l-Qasim's claim that he shared the same opinion as them was true. Abu'l-Qasim later retaliated by writing a letter to an unknown person in which he made accusations so serious that ibn Sina wrote to Abu Sa'd, the deputy of Majd al-Dawla, to investigate the matter. The accusation made towards Avicenna may have been the same as he had received earlier, in which he was accused by the people of Hamadan of copying the stylistic structures of the Quran in his Sermons on Divine Unity. The seriousness of this charge, in the words of the historian Peter Adamson, "cannot be underestimated in the larger Muslim culture".
Not long afterwards, Avicenna shifted his allegiance to the rising Buyid amir Shams al-Dawla, the younger brother of Majd al-Dawla, which Adamson suggests was due to Abu'l-Qasim also working under Sayyida Shirin. Avicenna had been called upon by Shams al-Dawla to treat him, but after the latter's campaign in the same year against his former ally, the Annazid ruler Abu Shawk, he forced Avicenna to become his vizier.
Although Avicenna would sometimes clash with Shams al-Dawla's troops, he remained vizier until the latter died of colic in 1021. Avicenna was asked to stay as vizier by Shams al-Dawla's son and successor Sama' al-Dawla, but he instead went into hiding with his patron, Abu Ghalib al-Attar, to wait for better opportunities to emerge. It was during this period that Avicenna was secretly in contact with Ala al-Dawla Muhammad, the Kakuyid ruler of Isfahan and uncle of Sayyida Shirin.
It was during his stay at Attar's home that Avicenna completed The Book of Healing, writing 50 pages a day. The Buyid court in Hamadan, particularly the Kurdish vizier Taj al-Mulk, suspected Avicenna of correspondence with Ala al-Dawla, and as a result, had the house of Attar ransacked and ibn Sina imprisoned in the fortress of Fardajan, outside Hamadan. Juzjani blames one of ibn Sina's informers for his capture. He was imprisoned for four months until Ala al-Dawla captured Hamadan, ending Sama al-Dawla's reign.
In Isfahan
Avicenna was subsequently released, and went to Isfahan, where he was well received by Ala al-Dawla. In the words of Juzjani, the Kakuyid ruler gave Avicenna "the respect and esteem which someone like him deserved". Adamson also says that Avicenna's service under Ala al-Dawla "proved to be the most stable period of his life". Avicenna served as the advisor, if not vizier of Ala al-Dawla, accompanying him in many of his military expeditions and travels. Avicenna dedicated two Persian works to him, a philosophical treatise named Danish-nama-yi Ala'i ("Book of Science for Ala"), and a medical treatise about the pulse.
During the brief occupation of Isfahan by the Ghaznavids in January 1030, Avicenna and Ala al-Dawla relocated to the southwestern Iranian region of Khuzistan, where they stayed until the death of the Ghaznavid ruler Mahmud, which occurred two months later. It was seemingly when Avicenna returned to Isfahan that he started writing his Pointers and Reminders. In 1037, while Avicenna was accompanying Ala al-Dawla to a battle near Isfahan, he contracted a severe colic, having suffered from colic throughout his life. He died shortly afterwards in Hamadan, where he was buried.
Philosophy
Avicenna wrote extensively on early Islamic philosophy, especially the subjects logic, ethics and metaphysics, including treatises named Logic and Metaphysics. Most of his works were written in Arabic, then the language of science in the Muslim world, and some in Early New Persian. Of linguistic significance even to this day are a few books that he wrote in Persian, particularly the Danishnama. Avicenna's commentaries on Aristotle often criticized the philosopher, encouraging a lively debate in the spirit of ijtihad.
Avicenna's Neoplatonic scheme of emanations became fundamental in kalam in the 12th century.
The Book of Healing became available in Europe in a partial Latin translation some fifty years after its composition under the title Sufficientia, and some authors have identified a "Latin Avicennism" as flourishing for some time paralleling the more influential Latin Averroism, but it was suppressed by the Parisian decrees of 1210 and 1215.
Avicenna's psychology and theory of knowledge influenced the theologian William of Auvergne and Albertus Magnus, while his metaphysics influenced the thought of Thomas Aquinas.
Metaphysical doctrine
Early Islamic philosophy and Islamic metaphysics, imbued as it is with kalam, distinguishes between essence and existence more clearly than Aristotelianism. Whereas existence is the domain of the contingent and the accidental, essence endures within a being beyond the accidental. The philosophy of Avicenna, particularly that part relating to metaphysics, owes much to al-Farabi. The search for a definitive Islamic philosophy separate from Occasionalism can be seen in what is left of his work.
Following al-Farabi's lead, Avicenna initiated a full-fledged inquiry into the question of being, in which he distinguished between essence () and existence (). He argued that the fact of existence cannot be inferred from or accounted for by the essence of existing things, and that form and matter by themselves cannot interact and originate the movement of the universe or the progressive actualization of existing things. Existence must, therefore, be due to an agent-cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must be an existing thing and coexist with its effect.
Impossibility, contingency, necessity
Avicenna's consideration of the essence-attributes question may be elucidated in terms of his ontological analysis of the modalities of being; namely impossibility, contingency and necessity. Avicenna argued that the impossible being is that which cannot exist, while the contingent in itself (mumkin bi-dhatihi) has the potentiality to be or not to be without entailing a contradiction. When actualized, the contingent becomes a 'necessary existent due to what is other than itself' (wajib al-wujud bi-ghayrihi). Thus, contingency-in-itself is potential beingness that could eventually be actualized by an external cause other than itself. The metaphysical structures of necessity and contingency are different. Necessary being due to itself (wajib al-wujud bi-dhatihi) is true in itself, while the contingent being is 'false in itself' and 'true due to something else other than itself'. The necessary is the source of its own being without borrowed existence. It is what always exists.
Differentia
The Necessary exists 'due-to-Its-Self', and has no quiddity/essence other than existence. Furthermore, It is 'One' (wahid ahad) since there cannot be more than one 'Necessary-Existent-due-to-Itself' without differentia (fasl) to distinguish them from each other. Yet, to require differentia entails that they exist 'due-to-themselves' as well as 'due to what is other than themselves'; and this is contradictory. If no differentia distinguishes them from each other, then, in no sense are these 'Existents' not the same. Avicenna adds that the 'Necessary-Existent-due-to-Itself' has no genus (jins), nor a definition (hadd), nor a counterpart (nadd), nor an opposite (did), and is detached (bari) from matter (madda), quality (kayf), quantity (kam), place (ayn), situation (wad) and time (waqt).
Reception
Avicenna's theology on metaphysical issues (ilāhiyyāt) has been criticized by some Islamic scholars, among them al-Ghazali, ibn Taymiyya, and ibn Qayyim al-Jawziyya. While discussing the views of the theists among the Greek philosophers, namely Socrates, Plato and Aristotle in Al-Munqidh min ad-Dalal "Deliverance from Error", al-Ghazali noted:
Argument for God's existence
Avicenna made an argument for the existence of God which would become known as the "Proof of the Truthful". Avicenna argued that there must be a necessary existent (wajib al-wujud), an entity that cannot not exist, and through a series of arguments he identified it with the God of Islam. The present-day historian of philosophy Peter Adamson has called this argument one of the most influential medieval arguments for God's existence, and Avicenna's biggest contribution to the history of philosophy.
Al-Biruni correspondence
Correspondence between ibn Sina (with his student Ahmad ibn ʿAli al-Maʿsumi) and al-Biruni has survived, in which they debated Aristotelian natural philosophy and the Peripatetic school. Al-Biruni began by asking eighteen questions, ten of which were criticisms of Aristotle's On the Heavens.
Theology
Ibn Sina was a devout Muslim and sought to reconcile rational philosophy with Islamic theology. He aimed to prove the existence of God and His creation of the world scientifically and through reason and logic. His views on Islamic theology and philosophy were enormously influential, forming part of the core of the curriculum at Islamic religious schools until the 19th century.
Avicenna wrote several short treatises dealing with Islamic theology. These included treatises on the prophets and messengers in Islam, whom he viewed as "inspired philosophers", and also on various scientific and philosophical interpretations of the Quran, such as how Quranic cosmology corresponds to his philosophical system. In general, these treatises linked his philosophical writings to Islamic religious ideas; for example, the body's afterlife.
There are occasional brief hints and allusions in his longer works, however, that Avicenna considered philosophy as the only sensible way to distinguish real prophecy from illusion. He did not state this more clearly because of the political implications of such a theory if prophecy could be questioned, and also because most of the time he was writing shorter works which concentrated on explaining his theories on philosophy and theology clearly, without digressing to consider epistemological matters which could only be properly considered by other philosophers.
Later interpretations of Avicenna's philosophy split into three different schools; those (such as al-Tusi) who continued to apply his philosophy as a system to interpret later political events and scientific advances; those (such as al-Razi) who considered Avicenna's theological works in isolation from his wider philosophical concerns; and those (such as al-Ghazali) who selectively used parts of his philosophy to support their own attempts to gain greater spiritual insights through a variety of mystical means. It was the theological interpretation championed by those such as al-Razi which eventually came to predominate in the madrasahs.
Avicenna memorized the Quran by the age of ten, and as an adult, wrote five treatises commenting on surahs of the Quran. One of these texts included the Proof of Prophecies, in which he comments on several Quranic verses and holds the Quran in high esteem. Avicenna argued that the Islamic prophets should be considered higher than philosophers.
Avicenna is generally understood to have been aligned with the Hanafi school of Sunni thought. Avicenna studied Hanafi law, many of his notable teachers were Hanafi jurists, and he served under the Hanafi court of Ali ibn Mamun. Avicenna said at an early age that he remained "unconvinced" by Ismaili missionary attempts to convert him.
Medieval historian Ẓahīr al-dīn al-Bayhaqī (d. 1169) believed Avicenna to be a follower of the Brethren of Purity.
Thought experiments
While he was imprisoned in the castle of Fardajan near Hamadhan, Avicenna wrote his famous "floating man"—literally falling man—a thought experiment to demonstrate human self-awareness and the substantiality and immateriality of the soul. Avicenna believed his "Floating Man" thought experiment demonstrated that the soul is a substance, and claimed humans cannot doubt their own consciousness, even in a situation that prevents all sensory data input. The thought experiment told its readers to imagine themselves created all at once while suspended in the air, isolated from all sensations, which includes no sensory contact with even their own bodies. He argued that, in this scenario, one would still have self-consciousness. Because it is conceivable that a person, suspended in air while cut off from sense experience, would still be capable of determining his own existence, the thought experiment points to the conclusions that the soul is a perfection, independent of the body, and an immaterial substance. The conceivability of this "Floating Man" indicates that the soul is perceived intellectually, which entails the soul's separateness from the body. Avicenna referred to the living human intelligence, particularly the active intellect, which he believed to be the hypostasis by which God communicates truth to the human mind and imparts order and intelligibility to nature. Following is an English translation of the argument:
However, Avicenna posited the brain as the place where reason interacts with sensation. Sensation prepares the soul to receive rational concepts from the universal Agent Intellect. The first knowledge of the flying person would be "I am," affirming his or her essence. That essence could not be the body, obviously, as the flying person has no sensation. Thus, the knowledge that "I am" is the core of a human being: the soul exists and is self-aware. Avicenna thus concluded that the idea of the self is not logically dependent on any physical thing, and that the soul should not be seen in relative terms, but as a primary given, a substance. The body is unnecessary; in relation to it, the soul is its perfection. In itself, the soul is an immaterial substance.
Principal works
The Canon of Medicine
Avicenna authored a five-volume medical encyclopedia, The Canon of Medicine. It was used as the standard medical textbook in the Islamic world and Europe up to the 18th century. The Canon still plays an important role in Unani medicine.
Liber Primus Naturalium
Avicenna considered whether events like rare diseases or disorders have natural causes. He used the example of polydactyly to explain his perception that causal reasons exist for all medical events. This view of medical phenomena anticipated developments in the Enlightenment by seven centuries.
The Book of Healing
Earth sciences
Avicenna wrote on Earth sciences such as geology in The Book of Healing. While discussing the formation of mountains, he explained:
Philosophy of science
In the Al-Burhan (On Demonstration) section of The Book of Healing, Avicenna discussed the philosophy of science and described an early scientific method of inquiry. He discussed Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper methodology for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist would arrive at "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explained that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty". Avicenna then added two further methods for arriving at the first principles: the ancient Aristotelian method of induction (istiqra), and the method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he developed a "method of experimentation as a means for scientific inquiry."
Logic
An early formal system of temporal logic was studied by Avicenna. Although he did not develop a real theory of temporal propositions, he did study the relationship between temporalis and the implication. Avicenna's work was further developed by Najm al-Dīn al-Qazwīnī al-Kātibī and became the dominant system of Islamic logic until modern times. Avicennian logic also influenced several early European logicians such as Albertus Magnus and William of Ockham. Avicenna endorsed the law of non-contradiction proposed by Aristotle, that a fact could not be both true and false at the same time and in the same sense of the terminology used. He stated, "Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned."
Physics
In mechanics, Avicenna, in The Book of Healing, developed a theory of motion, in which he made a distinction between the inclination (tendency to motion) and force of a projectile, and concluded that motion was a result of an inclination (mayl) transferred to the projectile by the thrower, and that projectile motion in a vacuum would not cease. He viewed inclination as a permanent force whose effect is dissipated by external forces such as air resistance.
The theory of motion presented by Avicenna was probably influenced by the 6th-century Alexandrian scholar John Philoponus. Avicenna's is a less sophisticated variant of the theory of impetus developed by Buridan in the 14th century. It is unclear if Buridan was influenced by Avicenna, or by Philoponus directly.
In optics, Avicenna was among those who argued that light had a speed, observing that "if the perception of light is due to the emission of some sort of particles by a luminous source, the speed of light must be finite." He also provided a wrong explanation of the rainbow phenomenon. Carl Benjamin Boyer described Avicenna's ("Ibn Sīnā") theory on the rainbow as follows:
In 1253, a Latin text entitled Speculum Tripartitum stated the following regarding Avicenna's theory on heat:
Psychology
Avicenna's legacy in classical psychology is primarily embodied in the Kitab al-nafs parts of his Kitab al-shifa (The Book of Healing) and Kitab al-najat (The Book of Deliverance). These were known in Latin under the title De Anima (treatises "on the soul"). Notably, Avicenna develops what is called the Flying Man argument in the Psychology of The Cure I.1.7 as defence of the argument that the soul is without quantitative extension, which has an affinity with Descartes's cogito argument (or what phenomenology designates as a form of an "epoche").
Avicenna's psychology requires that connection between the body and soul be strong enough to ensure the soul's individuation, but weak enough to allow for its immortality. Avicenna grounds his psychology on physiology, which means his account of the soul is one that deals almost entirely with the natural science of the body and its abilities of perception. Thus, the philosopher's connection between the soul and body is explained almost entirely by his understanding of perception; in this way, bodily perception interrelates with the immaterial human intellect. In sense perception, the perceiver senses the form of the object; first, by perceiving features of the object by our external senses. This sensory information is supplied to the internal senses, which merge all the pieces into a whole, unified conscious experience. This process of perception and abstraction is the nexus of the soul and body, for the material body may only perceive material objects, while the immaterial soul may only receive the immaterial, universal forms. The way the soul and body interact in the final abstraction of the universal from the concrete particular is the key to their relationship and interaction, which takes place in the physical body.
The soul completes the action of intellection by accepting forms that have been abstracted from matter. This process requires a concrete particular (material) to be abstracted into the universal intelligible (immaterial). The material and immaterial interact through the Active Intellect, which is a "divine light" containing the intelligible forms. The Active Intellect reveals the universals concealed in material objects much like the sun makes colour available to our eyes.
Other contributions
Astronomy and astrology
Avicenna wrote an attack on astrology titled Missive on the Champions of the Rule of the Stars, in which he cited passages from the Quran to dispute the power of astrology to foretell the future. He believed that each classical planet had some influence on the Earth but argued against the astrological practices of his time.
Avicenna's astronomical writings had some influence on later writers, although in general his work could be considered less developed than that of ibn al-Haytham or al-Biruni. One important feature of his writing is that he considers mathematical astronomy a separate discipline from astrology. He criticized Aristotle's view of the stars receiving their light from the Sun, stating that the stars are self-luminous, and believed that the planets are also self-luminous. He claimed to have observed the transit of Venus. This is possible as there was a transit on 24 May 1032, but ibn Sina did not give the date of his observation and modern scholars have questioned whether he could have observed the transit from his location at that time; he may have mistaken a sunspot for Venus. He used his transit observation to help establish that Venus was, at least sometimes, below the Sun in the geocentric model, i.e. the sphere of Venus comes before the sphere of the Sun when moving out from the Earth.
He also wrote the Summary of the Almagest based on Ptolemy's Almagest with an appended treatise "to bring that which is stated in the Almagest and what is understood from Natural Science into conformity". For example, ibn Sina considers the motion of the solar apsis, which Ptolemy had taken to be fixed.
Chemistry
Avicenna was first to derive the attar of flowers from distillation and used steam distillation to produce essential oils such as rose essence, which he used as aromatherapeutic treatments for heart conditions.
Unlike al-Razi, Avicenna explicitly disputed the theory of the transmutation of substances commonly believed by alchemists:
Four works on alchemy attributed to Avicenna were translated into Latin as:
Of these, the de Anima was the most influential, having influenced later medieval chemists and alchemists such as Vincent of Beauvais. However, Anawati argues (following Ruska) that the de Anima is a fake by a Spanish author. Similarly, the Declaratio is believed not to be actually by Avicenna. The third work (The Book of Minerals) is agreed to be Avicenna's writing, adapted from the Kitab al-Shifa (Book of the Remedy). Avicenna classified minerals into stones, fusible substances, sulfurs and salts, building on the ideas of Aristotle and Jabir. The epistola de Re recta is somewhat less sceptical of alchemy; Anawati argues that it is by Avicenna, but written earlier in his career when he had not yet firmly decided that transmutation was impossible.
Poetry
Almost half of Avicenna's works are versified. His poems appear in both Arabic and Persian. As an example, Edward Granville Browne claims that the following Persian verses are incorrectly attributed to Omar Khayyám, and were originally written by Ibn Sīnā:
Legacy
Classical Islamic civilization
Robert Wisnovsky, a scholar of Avicenna attached to McGill University, says that "Avicenna was the central figure in the long history of the rational sciences in Islam, particularly in the fields of metaphysics, logic and medicine" but that his works didn't only have an influence in these "secular" fields of knowledge alone, as "these works, or portions of them, were read, taught, copied, commented upon, quoted, paraphrased and cited by thousands of post-Avicennian scholars—not only philosophers, logicians, physicians and specialists in the mathematical or exact sciences, but also by those who specialized in the disciplines of ʿilm al-kalām (rational theology, but understood to include natural philosophy, epistemology and philosophy of mind) and usūl al-fiqh (jurisprudence, but understood to include philosophy of law, dialectic, and philosophy of language)."
Medieval and Renaissance Europe
As early as the 14th century, when Dante Alighieri depicted him in Limbo alongside the virtuous non-Christian thinkers in his Divine Comedy, such as Virgil, Averroes, Homer, Horace, Ovid, Lucan, Socrates, Plato and Saladin, Avicenna has been recognized by both East and West as one of the great figures in intellectual history. Johannes Kepler cites Avicenna's opinion when discussing the causes of planetary motions in Chapter 2 of Astronomia Nova.
George Sarton, the author of The History of Science, described Avicenna as "one of the greatest thinkers and medical scholars in history" and called him "the most famous scientist of Islam and one of the most famous of all races, places, and times". He was one of the Islamic world's leading writers in the field of medicine.
Along with Rhazes, Abulcasis, Ibn al-Nafis and al-Ibadi, Avicenna is considered an important compiler of early Muslim medicine. He is remembered in the Western history of medicine as a major historical figure who made important contributions to medicine and the European Renaissance. His medical texts were unusual in that where controversy existed between Galen and Aristotle's views on medical matters (such as anatomy), he preferred to side with Aristotle, where necessary updating Aristotle's position to take into account post-Aristotelian advances in anatomical knowledge. Aristotle's dominant intellectual influence among medieval European scholars meant that Avicenna's linking of Galen's medical writings with Aristotle's philosophical writings in the Canon of Medicine (along with its comprehensive and logical organisation of knowledge) significantly increased Avicenna's importance in medieval Europe in comparison to other Islamic writers on medicine. His influence following translation of the Canon was such that from the early fourteenth to the mid-sixteenth centuries he was ranked with Hippocrates and Galen as one of the acknowledged authorities, ("prince of physicians").
Modern reception
Institutions in a variety of countries have been named after Avicenna in honour of his scientific accomplishments, including the Avicenna Mausoleum and Museum, Bu-Ali Sina University, Avicenna Research Institute and Ibn Sina Academy of Medieval Medicine and Sciences. There is also a crater on the Moon named Avicenna.
The Avicenna Prize, established in 2003, is awarded every two years by UNESCO and rewards individuals and groups for their achievements in the field of ethics in science.
The Avicenna Directories (2008–15; now the World Directory of Medical Schools) list universities and schools where doctors, public health practitioners, pharmacists and others, are educated. The original project team stated:
In June 2009, Iran donated a "Persian Scholars Pavilion" to the United Nations Office in Vienna. It now sits in the Vienna International Center.
In popular culture
The 1982 Soviet film Youth of Genius recounts Avicenna's younger years. The film is set in Bukhara at the turn of the millennium.
In Louis L'Amour's 1985 historical novel The Walking Drum, Kerbouchard studies and discusses Avicenna's The Canon of Medicine.
In his book The Physician (1988) Noah Gordon tells the story of a young English medical apprentice who disguises himself as a Jew to travel from England to Persia and learn from Avicenna, the great master of his time. The novel was adapted into a feature film, The Physician, in 2013. Avicenna was played by Ben Kingsley.
List of works
The treatises of Avicenna influenced later Muslim thinkers in many areas including theology, philology, mathematics, astronomy, physics and music. His works numbered almost 450 volumes on a wide range of subjects, of which around 240 have survived. In particular, 150 volumes of his surviving works concentrate on philosophy and 40 of them concentrate on medicine. His most famous works are The Book of Healing, and The Canon of Medicine.
Avicenna wrote at least one treatise on alchemy, but several others have been falsely attributed to him. His Logic, Metaphysics, Physics, and De Caelo, are treatises giving a synoptic view of Aristotelian doctrine, though Metaphysics demonstrates a significant departure from the brand of Neoplatonism known as Aristotelianism in Avicenna's world; Arabic philosophers have hinted at the idea that Avicenna was attempting to "re-Aristotelianise" Muslim philosophy in its entirety, unlike his predecessors, who accepted the conflation of Platonic, Aristotelian, Neo- and Middle-Platonic works transmitted into the Muslim world.
The Logic and Metaphysics have been extensively reprinted, the latter, e.g., at Venice in 1493, 1495 and 1546. Some of his shorter essays on medicine, logic, etc., take a poetical form (the poem on logic was published by Schmoelders in 1836). Two encyclopedic treatises, dealing with philosophy, are often mentioned. The larger, Al-Shifa' (Sanatio), exists nearly complete in manuscript in the Bodleian Library and elsewhere; part of it on the De Anima appeared at Pavia (1490) as the Liber Sextus Naturalium, and the long account of Avicenna's philosophy given by Muhammad al-Shahrastani seems to be mainly an analysis, and in many places a reproduction, of the Al-Shifa'. A shorter form of the work is known as the An-najat (Liberatio). The Latin editions of part of these works have been modified by the corrections which the monastic editors confess that they applied. There is also a hikmat-al-mashriqqiyya (in Latin, Philosophia Orientalis), mentioned by Roger Bacon, the majority of which is now lost, and which according to Averroes was pantheistic in tone.
Avicenna's works further include:
Sirat al-shaykh al-ra'is (The Life of Avicenna), ed. and trans. WE. Gohlman, Albany, NY: State University of New York Press, 1974. (The only critical edition of Avicenna's autobiography, supplemented with material from a biography by his student Abu 'Ubayd al-Juzjani. A more recent translation of the Autobiography appears in D. Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden: Brill, 1988; second edition 2014.)
Al-isharat wa al-tanbihat (Remarks and Admonitions), ed. S. Dunya, Cairo, 1960; parts translated by S.C. Inati, Remarks and Admonitions, Part One: Logic, Toronto, Ont.: Pontifical Institute for Mediaeval Studies, 1984, and Ibn Sina and Mysticism, Remarks and Admonitions: Part 4, London: Kegan Paul International, 1996.
Al-Qanun fi'l-tibb (The Canon of Medicine), ed. I. a-Qashsh, Cairo, 1987. (Encyclopedia of medicine.) manuscript, Latin translation, Flores Avicenne, Michael de Capella, 1508, Modern text. Ahmed Shawkat Al-Shatti, Jibran Jabbur.
Risalah fi sirr al-qadar (Essay on the Secret of Destiny), trans. G. Hourani in Reason and Tradition in Islamic Ethics, Cambridge: Cambridge University Press, 1985.
Danishnama "The Book of Scientific Knowledge", ed. and trans. P. Morewedge, The Metaphysics of Avicenna, London: Routledge and Kegan Paul, 1973.
The Book of Healing, Avicenna's major work on philosophy. He probably began to compose al-Shifa' in 1014, and completed it in 1020. Critical editions of the Arabic text have been published in Cairo, 1952–83, originally under the supervision of I. Madkour.
Kitab al-Najat "The Book of Salvation", trans. F. Rahman, Avicenna's Psychology: An English Translation of Kitab al-Najat, Book II, Chapter VI with Historical-philosophical Notes and Textual Improvements on the Cairo Edition, Oxford: Oxford University Press, 1952. (The psychology of al-Shifa'.) (Digital version of the Arabic text)
Risala fi'l-Ishq "A Treatise on Love". Translated by Emil L. Fackenheim.
Persian works
Avicenna's most important Persian work is the Danishnama ("Book of Knowledge"). Avicenna created a new scientific vocabulary that had not previously existed in Persian. The Danishnama covers such topics as logic, metaphysics, music theory and other sciences of his time. It was translated into English by Parwiz Morewedge in 1977. The book is also important in the history of Persian scientific works.
Andar Dānish-i Rag ("On the Science of the Pulse") contains nine chapters on the science of the pulse and is a condensed synopsis.
Persian poetry from Avicenna is recorded in various manuscripts and later anthologies such as Nozhat al-Majales.
See also
Al-Qumri (possibly Avicenna's teacher)
Abdol Hamid Khosro Shahi (Iranian theologian)
Mummia (Persian medicine)
Eastern philosophy
Iranian philosophy
Islamic philosophy
Contemporary Islamic philosophy
Science in the medieval Islamic world
List of scientists in medieval Islamic world
Sufi philosophy
Science and technology in Iran
Ancient Iranian medicine
List of pre-modern Iranian scientists and scholars
Namesakes of Ibn Sina
Ibn Sina Academy of Medieval Medicine and Sciences in Aligarh
Avicenna Bay in Antarctica
Avicenna (crater) on the far side of the Moon
Avicenna Cultural and Scientific Foundation
Avicenne Hospital in Paris, France
Avicenna International College in Budapest, Hungary
Avicenna Mausoleum (complex dedicated to Avicenna) in Hamadan, Iran
Avicenna Research Institute in Tehran, Iran
Avicenna Tajik State Medical University in Dushanbe, Tajikistan
Bu-Ali Sina University in Hamedan, Iran
Ibn Sina Peak – named after the Scientist, on the Kyrgyzstan–Tajikistan border
Ibn Sina Foundation in Houston, Texas
Ibn Sina Hospital, Baghdad, Iraq
Ibn Sina Hospital, Istanbul, Turkey
Ibn Sina Medical College Hospital, Dhaka, Bangladesh
Ibn Sina University Hospital of Rabat-Salé at Mohammed V University in Rabat, Morocco
Ibne Sina Hospital, Multan, Punjab, Pakistan
International Ibn Sina Clinic, Dushanbe, Tajikistan
References
Citations
Sources
Further reading
Encyclopedic articles
(PDF version)
Avicenna entry by Sajjad H. Rizvi in the Internet Encyclopedia of Philosophy
Primary literature
For an old list of other extant works, C. Brockelmann's Geschichte der arabischen Litteratur (Weimar 1898), vol. i. pp. 452–458. (XV. W.; G. W. T.)
For a current list of his works see A. Bertolacci (2006) and D. Gutas (2014) in the section "Philosophy".
Avicenne: Réfutation de l'astrologie. Edition et traduction du texte arabe, introduction, notes et lexique par Yahya Michot. Préface d'Elizabeth Teissier (Beirut-Paris: Albouraq, 2006) .
William E. Gohlman (ed.), The Life of Ibn Sina. A Critical Edition and Annotated Translation, Albany, State University of New York Press, 1974.
For Ibn Sina's life, see Ibn Khallikan's Biographical Dictionary, translated by de Slane (1842); F. Wüstenfeld's Geschichte der arabischen Aerzte und Naturforscher (Göttingen, 1840).
Madelung, Wilferd and Toby Mayer (ed. and tr.), Struggling with the Philosopher: A Refutation of Avicenna's Metaphysics. A New Arabic Edition and English Translation of Shahrastani's Kitab al-Musara'a.
Secondary literature
This is, on the whole, an informed and good account of the life and accomplishments of one of the greatest influences on the development of thought both Eastern and Western. ... It is not as philosophically thorough as the works of D. Saliba, A.M. Goichon, or L. Gardet, but it is probably the best essay in English on this important thinker of the Middle Ages. (Julius R. Weinberg, The Philosophical Review, Vol. 69, No. 2, Apr. 1960, pp. 255–259)
This is a distinguished work which stands out from, and above, many of the books and articles which have been written in this century on Avicenna (Ibn Sīnā) (980–1037). It has two main features on which its distinction as a major contribution to Avicennan studies may be said to rest: the first is its clarity and readability; the second is the comparative approach adopted by the author. ... (Ian Richard Netton, Journal of the Royal Asiatic Society, Third Series, Vol. 4, No. 2, July 1994, pp. 263–264)
Y.T. Langermann (ed.), Avicenna and his Legacy. A Golden Age of Science and Philosophy, Brepols Publishers, 2010,
For a new understanding of his early career, based on a newly discovered text, see also: Michot, Yahya, Ibn Sînâ: Lettre au vizir Abû Sa'd. Editio princeps d'après le manuscrit de Bursa, traduction de l'arabe, introduction, notes et lexique (Beirut-Paris: Albouraq, 2000) .
This German publication is both one of the most comprehensive general introductions to the life and works of the philosopher and physician Avicenna (Ibn Sīnā, d. 1037) and an extensive and careful survey of his contribution to the history of science. Its author is a renowned expert in Greek and Arabic medicine who has paid considerable attention to Avicenna in his recent studies. ... (Amos Bertolacci, Isis, Vol. 96, No. 4, December 2005, p. 649)
Shaikh al Rais Ibn Sina (Special number) 1958–59, Ed. Hakim Syed Zillur Rahman, Tibbia College Magazine, Aligarh Muslim University, Aligarh, India.
Medicine
Browne, Edward G. Islamic Medicine. Fitzpatrick Lectures Delivered at the Royal College of Physicians in 1919–1920, reprint: New Delhi: Goodword Books, 2001.
Pormann, Peter & Savage-Smith, Emilie. Medieval Islamic Medicine, Washington: Georgetown University Press, 2007.
Prioreschi, Plinio. Byzantine and Islamic Medicine, A History of Medicine, Vol. 4, Omaha: Horatius Press, 2001.
Syed Ziaur Rahman. Pharmacology of Avicennian Cardiac Drugs (Metaanalysis of researches and studies in Avicennian Cardiac Drugs along with English translation of Risalah al Adwiya al Qalbiyah), Ibn Sina Academy of Medieval Medicine and Sciences, Aligarh, India, 2020
Philosophy
Amos Bertolacci, The Reception of Aristotle's Metaphysics in Avicenna's Kitab al-Sifa'. A Milestone of Western Metaphysical Thought, Leiden: Brill 2006, (Appendix C contains an Overview of the Main Works by Avicenna on Metaphysics in Chronological Order).
Dimitri Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden, Brill 2014, second revised and expanded edition (first edition: 1988), including an inventory of Avicenna' Authentic Works.
Andreas Lammer: The Elements of Avicenna's Physics. Greek Sources and Arabic Innovations. Scientia graeco-arabica 20. Berlin / Boston: Walter de Gruyter, 2018.
Jon McGinnis and David C. Reisman (eds.) Interpreting Avicenna: Science and Philosophy in Medieval Islam: Proceedings of the Second Conference of the Avicenna Study Group, Leiden: Brill, 2004.
Michot, Jean R., La destinée de l'homme selon Avicenne, Louvain: Aedibus Peeters, 1986, .
Nader El-Bizri, The Phenomenological Quest between Avicenna and Heidegger, Binghamton, N.Y.: Global Publications SUNY, 2000 (reprinted by SUNY Press in 2014 with a new Preface).
Nader El-Bizri, "Avicenna and Essentialism," Review of Metaphysics, Vol. 54 (June 2001), pp. 753–778.
Nader El-Bizri, "Avicenna's De Anima between Aristotle and Husserl," in The Passions of the Soul in the Metamorphosis of Becoming, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2003, pp. 67–89.
Nader El-Bizri, "Being and Necessity: A Phenomenological Investigation of Avicenna's Metaphysics and Cosmology," in Islamic Philosophy and Occidental Phenomenology on the Perennial Issue of Microcosm and Macrocosm, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2006, pp. 243–261.
Nader El-Bizri, 'Ibn Sīnā's Ontology and the Question of Being', Ishrāq: Islamic Philosophy Yearbook 2 (2011), 222–237
Nader El-Bizri, 'Philosophising at the Margins of 'Sh'i Studies': Reflections on Ibn Sīnā's Ontology', in The Study of Sh'i Islam. History, Theology and Law, eds. F. Daftary and G. Miskinzoda (London: I.B. Tauris, 2014), pp. 585–597.
Reisman, David C. (ed.), Before and After Avicenna: Proceedings of the First Conference of the Avicenna Study Group, Leiden: Brill, 2003.
External links
Avicenna (Ibn-Sina) on the Subject and the Object of Metaphysics with a list of translations of the logical and philosophical works and an annotated bibliography
980s births
Year of birth unknown
1037 deaths
11th-century astronomers
11th-century Persian-language poets
11th-century philosophers
11th-century Iranian physicians
Alchemists of the medieval Islamic world
Aristotelian philosophers
Buyid viziers
Classical humanists
Muslim critics of atheism
Epistemologists
Iranian music theorists
Islamic philosophers
Transoxanian Islamic scholars
Logicians
People from Bukhara Region
Medieval Iranian pharmacologists
Music theorists of the medieval Islamic world
Ontologists
People from Khorasan
Medieval Iranian physicists
Philosophers of logic
Philosophers of mind
Philosophers of psychology
Philosophers of religion
Philosophers of science
Unani medicine
Iranian logicians
Iranian ethicists
Samanid officials
Philosophers of mathematics
Court physicians
Iranian courtiers | Avicenna | [
"Mathematics"
] | 11,223 | [] |
1,134 | https://en.wikipedia.org/wiki/Analysis | Analysis (plural: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development.
The word comes from the Ancient Greek (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses.
As a formal concept, the method has variously been ascribed to René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name).
The converse of analysis is synthesis: putting the pieces back together again in a new or different whole.
Science and technology
Chemistry
The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device.
Types of analysis
Qualitative analysis determines which components are present in a given sample or compound.
Example: precipitation reactions
Quantitative analysis determines the quantity of each component present in a given sample or compound.
Example: finding a concentration with a UV spectrophotometer
Isotopes
Chemists can use isotope analysis to assist analysts with issues in anthropology, archeology, food chemistry, forensics, geology, and a host of other questions of physical science. Analysts can discern the origins of natural and man-made isotopes in the study of environmental radioactivity.
Computer science
Requirements analysis – encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.
Competitive analysis (online algorithm) – shows how online algorithms perform and demonstrates the power of randomization in algorithms
Lexical analysis – the process of processing an input sequence of characters and producing as output a sequence of symbols
Object-oriented analysis and design – à la Booch
Program analysis (computer science) – the process of automatically analysing the behavior of computer programs
Semantic analysis (computer science) – a pass by a compiler that adds semantical information to the parse tree and performs certain checks
Static code analysis – the analysis of computer software that is performed without actually executing programs built from that
Structured systems analysis and design methodology – à la Yourdon
Syntax analysis – a process in compilers that recognizes the structure of programming languages, also known as parsing
Worst-case execution time – determines the longest time that a piece of software can take to run
Engineering
Analysts in the field of engineering look at requirements, structures, mechanisms, systems and dimensions. Electrical engineers analyse systems in electronics. Life cycles and system failures are broken down and studied by engineers. Engineering analysis also considers the different factors incorporated within a design.
Mathematics
Modern mathematical analysis is the study of infinite processes. It is the branch of mathematics that includes calculus. It can be applied in the study of classical concepts of mathematics, such as real numbers, complex variables, trigonometric functions, and algorithms, or of non-classical concepts like constructivism, harmonics, infinity, and vectors.
Florian Cajori explains in A History of Mathematics (1893) the difference between modern and ancient mathematical analysis, as distinct from logical analysis, as follows:
The terms synthesis and analysis are used in mathematics in a more special sense than in logic. In ancient mathematics they had a different meaning from what they now have. The oldest definition of mathematical analysis as opposed to synthesis is that given in [appended to] Euclid, XIII. 5, which in all probability was framed by Eudoxus: "Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it."
The analytic method is not conclusive, unless all operations involved in it are known to be reversible. To remove all doubt, the Greeks, as a rule, added to the analytic process a synthetic one, consisting of a reversion of all operations occurring in the analysis. Thus the aim of analysis was to aid in the discovery of synthetic proofs or solutions.
James Gow uses a similar argument as Cajori, with the following clarification, in his A Short History of Greek Mathematics (1884):
The synthetic proof proceeds by shewing that the proposed new truth involves certain admitted truths. An analytic proof begins by an assumption, upon which a synthetic reasoning is founded. The Greeks distinguished theoretic from problematic analysis. A theoretic analysis is of the following kind. To prove that A is B, assume first that A is B. If so, then, since B is C and C is D and D is E, therefore A is E. If this be known a falsity, A is not B. But if this be a known truth and all the intermediate propositions be convertible, then the reverse process, A is E, E is D, D is C, C is B, therefore A is B, constitutes a synthetic proof of the original theorem. Problematic analysis is applied in all cases where it is proposed to construct a figure which is assumed to satisfy a given condition. The problem is then converted into some theorem which is involved in the condition and which is proved synthetically, and the steps of this synthetic proof taken backwards are a synthetic solution of the problem.
Psychotherapy
Psychoanalysis – seeks to elucidate connections among unconscious components of patients' mental processes
Transactional analysis
Transactional analysis is used by therapists to try to gain a better understanding of the unconscious. It focuses on understanding and intervening in human behavior.
Signal processing
Finite element analysis – a computer simulation technique used in engineering analysis
Independent component analysis
Link quality analysis – the analysis of signal quality
Path quality analysis
Fourier analysis
Statistics
In statistics, the term analysis may refer to any method used for data analysis. Among the many such methods, some are listed below; a minimal code sketch of one of them (principal component analysis) follows the list:
Analysis of variance (ANOVA) – a collection of statistical models and their associated procedures which compare means by splitting the overall observed variance into different parts
Boolean analysis – a method to find deterministic dependencies between variables in a sample, mostly used in exploratory data analysis
Cluster analysis – techniques for finding groups (called clusters), based on some measure of proximity or similarity
Factor analysis – a method to construct models describing a data set of observed variables in terms of a smaller set of unobserved variables (called factors)
Meta-analysis – combines the results of several studies that address a set of related research hypotheses
Multivariate analysis – analysis of data involving several variables, such as by factor analysis, regression analysis, or principal component analysis
Principal component analysis – transformation of a sample of correlated variables into uncorrelated variables (called principal components), mostly used in exploratory data analysis
Regression analysis – techniques for analysing the relationships between several predictive variables and one or more outcomes in the data
Scale analysis (statistics) – methods to analyse survey data by scoring responses on a numeric scale
Sensitivity analysis – the study of how the variation in the output of a model depends on variations in the inputs
Sequential analysis – evaluation of sampled data as it is collected, until the criterion of a stopping rule is met
Spatial analysis – the study of entities using geometric or geographic properties
Time-series analysis – methods that attempt to understand a sequence of data points spaced apart at uniform time intervals
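For illustration, here is a minimal sketch of one of the methods above, principal component analysis, using only NumPy; the small data matrix is invented purely for demonstration.

```python
# Minimal principal component analysis sketch; the data matrix is hypothetical.
import numpy as np

X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0],
              [2.3, 2.7]])

Xc = X - X.mean(axis=0)                  # centre each variable
cov = np.cov(Xc, rowvar=False)           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigendecomposition (ascending order)
order = np.argsort(eigvals)[::-1]        # sort components by explained variance
components = eigvecs[:, order]
scores = Xc @ components                 # project data onto the principal components

print("explained variance:", eigvals[order])
print("first principal component:", components[:, 0])
```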
Business
Financial statement analysis – the analysis of the accounts and the economic prospects of a firm
Financial analysis – refers to an assessment of the viability, stability, and profitability of a business, sub-business or project
Gap analysis – involves the comparison of actual performance with potential or desired performance of an organization
Business analysis – involves identifying the needs and determining the solutions to business problems
Price analysis – involves the breakdown of a price to a unit figure
Market analysis – the analysis of a market, which consists of suppliers and customers, with price determined by the interaction of supply and demand
Sum-of-the-parts analysis – method of valuation of a multi-divisional company
Opportunity analysis – examines customer trends within the industry and how customer demand and experience determine purchasing behavior
Economics
Agroecosystem analysis
Input–output model if applied to a region, is called Regional Impact Multiplier System
Government
Intelligence
The field of intelligence employs analysts to break down and understand a wide array of questions. Intelligence agencies may use heuristics, inductive and deductive reasoning, social network analysis, dynamic network analysis, link analysis, and brainstorming to sort through problems they face. Military intelligence may explore issues through the use of game theory, Red Teaming, and wargaming. Signals intelligence applies cryptanalysis and frequency analysis to break codes and ciphers. Business intelligence applies theories of competitive intelligence analysis and competitor analysis to resolve questions in the marketplace. Law enforcement intelligence applies a number of theories in crime analysis.
Policy
Policy analysis – The use of statistical data to predict the effects of policy decisions made by governments and agencies
Policy analysis includes a systematic process to find the most efficient and effective option to address the current situation.
Qualitative analysis – The use of anecdotal evidence to predict the effects of policy decisions or, more generally, influence policy decisions
Humanities and social sciences
Linguistics
Linguistics explores individual languages and language in general. It breaks language down and analyses its component parts: theory, sounds and their meaning, utterance usage, word origins, the history of words, the meaning of words and word combinations, sentence construction, basic construction beyond the sentence level, stylistics, and conversation. It examines the above using statistics and modeling, and semantics. It analyses language in context of anthropology, biology, evolution, geography, history, neurology, psychology, and sociology. It also takes the applied approach, looking at individual language development and clinical issues.
Literature
Literary criticism is the analysis of literature. The focus can be as diverse as the analysis of Homer or Freud. While not all literary-critical methods are primarily analytical in nature, the main approach to the teaching of literature in the west since the mid-twentieth century, literary formal analysis or close reading, is. This method, rooted in the academic movement labelled The New Criticism, approaches texts – chiefly short poems such as sonnets, which by virtue of their small size and significant complexity lend themselves well to this type of analysis – as units of discourse that can be understood in themselves, without reference to biographical or historical frameworks. This method of analysis breaks up the text linguistically in a study of prosody (the formal analysis of meter) and phonic effects such as alliteration and rhyme, and cognitively in examination of the interplay of syntactic structures, figurative language, and other elements of the poem that work to produce its larger effects.
Music
Musical analysis – a process attempting to answer the question "How does this music work?"
Musical analysis is the study of how composers combine notes to create music. Those studying music will find that analyses differ from composer to composer and depend on the culture and history of the music studied. An analysis of music is meant to make the music easier to understand.
Schenkerian analysis
Schenkerian analysis is an approach to music analysis that focuses on producing a graphic representation of a piece; it encompasses both an analytical procedure and a notational style. Simply put, it analyzes tonal music, including all chords and tones within a composition.
Philosophy
Philosophical analysis – a general term for the techniques used by philosophers
Philosophical analysis refers to the clarification of words and of the meaning they entail. Philosophical analysis dives deeper into the meaning of words and seeks to clarify that meaning by contrasting the various definitions. It is the study of reality, the justification of claims, and the analysis of various concepts. Branches of philosophy include logic, justification, metaphysics, values and ethics. If a question can be answered empirically, that is, by using the senses, then it is not considered philosophical. Non-philosophical questions also include questions about events that happened in the past, and questions that science or mathematics can answer.
Analysis is the name of a prominent journal in philosophy.
Other
Aura analysis – a pseudoscientific technique in which supporters of the method claim that the body's aura, or energy field, is analysed
Bowling analysis – Analysis of the performance of cricket players
Lithic analysis – the analysis of stone tools using basic scientific techniques
Lithic analysis is most often used by archeologists to determine which types of tools were used in a given time period, based on the artifacts discovered.
Protocol analysis – a means for extracting persons' thoughts while they are performing a task
See also
Formal analysis
Metabolism in biology
Methodology
Scientific method
Synthesis (disambiguation) – list of terms related to synthesis, the converse of analysis
References
External links
Abstraction
Critical thinking skills
Emergence
Empiricism
Epistemological theories
Intelligence
Mathematical modeling
Metaphysics of mind
Methodology
Ontology
Philosophy of logic
Rationalism
Reasoning
Research methods
Scientific method
Theory of mind | Analysis | [
"Mathematics"
] | 2,790 | [
"Applied mathematics",
"Mathematical modeling"
] |
1,158 | https://en.wikipedia.org/wiki/Algebraic%20number | An algebraic number is a number that is a root of a non-zero polynomial in one variable with integer (or, equivalently, rational) coefficients. For example, the golden ratio, (1 + √5)/2, is an algebraic number, because it is a root of the polynomial x² − x − 1. That is, it is a value for x for which the polynomial evaluates to zero. As another example, the complex number is algebraic because it is a root of .
All integers and rational numbers are algebraic, as are all roots of integers. Real and complex numbers that are not algebraic, such as π and e, are called transcendental numbers.
The set of algebraic (complex) numbers is countably infinite and has measure zero in the Lebesgue measure as a subset of the uncountable complex numbers. In that sense, almost all complex numbers are transcendental. Similarly, the set of algebraic (real) numbers is countably infinite and has Lebesgue measure zero as a subset of the real numbers, and in that sense almost all real numbers are transcendental.
Examples
All rational numbers are algebraic. Any rational number, expressed as the quotient a/b of an integer a and a (non-zero) natural number b, satisfies the above definition, because a/b is the root of a non-zero polynomial, namely bx − a.
Quadratic irrational numbers, irrational solutions of a quadratic polynomial ax² + bx + c with integer coefficients a, b, and c, are algebraic numbers. If the quadratic polynomial is monic (a = 1), the roots are further qualified as quadratic integers.
Gaussian integers, complex numbers a + bi for which both a and b are integers, are also quadratic integers. This is because a + bi and a − bi are the two roots of the quadratic x² − 2ax + a² + b².
A constructible number can be constructed from a given unit length using a straightedge and compass. It includes all quadratic irrational roots, all rational numbers, and all numbers that can be formed from these using the basic arithmetic operations and the extraction of square roots. (By designating cardinal directions for +1, −1, +i, and −i, complex numbers such as are considered constructible.)
Any expression formed from algebraic numbers using any combination of the basic arithmetic operations and extraction of th roots gives another algebraic number.
Polynomial roots that cannot be expressed in terms of the basic arithmetic operations and extraction of th roots (such as the roots of ). That happens with many but not all polynomials of degree 5 or higher.
Values of trigonometric functions of rational multiples of (except when undefined): for example, , , and satisfy . This polynomial is irreducible over the rationals and so the three cosines are conjugate algebraic numbers. Likewise, , , , and satisfy the irreducible polynomial , and so are conjugate algebraic integers. This is the equivalent of angles which, when measured in degrees, have rational numbers.
Some but not all irrational numbers are algebraic:
The numbers and are algebraic since they are roots of polynomials and , respectively.
The golden ratio is algebraic since it is a root of the polynomial .
The numbers π and e are not algebraic numbers (see the Lindemann–Weierstrass theorem).
Properties
If a polynomial with rational coefficients is multiplied through by the least common denominator, the resulting polynomial with integer coefficients has the same roots. This shows that an algebraic number can be equivalently defined as a root of a polynomial with either integer or rational coefficients.
Given an algebraic number, there is a unique monic polynomial with rational coefficients of least degree that has the number as a root. This polynomial is called its minimal polynomial. If its minimal polynomial has degree , then the algebraic number is said to be of degree . For example, all rational numbers have degree 1, and an algebraic number of degree 2 is a quadratic irrational.
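For illustration, a short sketch using SymPy (assuming its minimal_polynomial helper) computes the minimal polynomial, and hence the degree, of a few algebraic numbers:

```python
# Sketch: minimal polynomials of some algebraic numbers via SymPy.
from sympy import sqrt, Rational, symbols, minimal_polynomial

x = symbols('x')

# A rational number has degree 1 ...
print(minimal_polynomial(Rational(3, 4), x))      # 4*x - 3

# ... a quadratic irrational has degree 2 ...
print(minimal_polynomial(sqrt(2), x))             # x**2 - 2

# ... and sqrt(2) + sqrt(3) turns out to have degree 4.
print(minimal_polynomial(sqrt(2) + sqrt(3), x))   # x**4 - 10*x**2 + 1
```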
The algebraic numbers are dense in the reals. This follows from the fact they contain the rational numbers, which are dense in the reals themselves.
The set of algebraic numbers is countable, and therefore its Lebesgue measure as a subset of the complex numbers is 0 (essentially, the algebraic numbers take up no space in the complex numbers). That is to say, "almost all" real and complex numbers are transcendental.
All algebraic numbers are computable and therefore definable and arithmetical.
For real numbers and , the complex number is algebraic if and only if both and are algebraic.
Degree of simple extensions of the rationals as a criterion to algebraicity
For any α, the simple extension of the rationals by α, denoted by Q(α), is of finite degree if and only if α is an algebraic number.
The condition of finite degree means that there is a finite set in such that ; that is, every member in can be written as for some rational numbers (note that the set is fixed).
Indeed, since the are themselves members of , each can be expressed as sums of products of rational numbers and powers of , and therefore this condition is equivalent to the requirement that for some finite , .
The latter condition is equivalent to , itself a member of , being expressible as for some rationals , so or, equivalently, is a root of ; that is, an algebraic number with a minimal polynomial of degree not larger than .
It can similarly be proven that for any finite set of algebraic numbers , ... , the field extension has a finite degree.
Field
The sum, difference, product, and quotient (if the denominator is nonzero) of two algebraic numbers is again algebraic:
For any two algebraic numbers , , this follows directly from the fact that the simple extension , for being either , , or (for ) , is a linear subspace of the finite-degree field extension , and therefore has a finite degree itself, from which it follows (as shown above) that is algebraic.
An alternative way of showing this is constructively, by using the resultant.
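As a hedged sketch of that constructive approach, again using SymPy: the resultant of p(y) and q(x − y) with respect to y is an integer polynomial having the sum of a root of p and a root of q among its roots.

```python
# Sketch: the resultant construction showing that sqrt(2) + sqrt(3) is algebraic.
from sympy import symbols, resultant, expand

x, y = symbols('x y')

p = y**2 - 2            # has root sqrt(2)
q = (x - y)**2 - 3      # as a polynomial in y, has root x - sqrt(3)

r = expand(resultant(p, q, y))
print(r)                # x**4 - 10*x**2 + 1, which vanishes at sqrt(2) + sqrt(3)
```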
Algebraic numbers thus form a field (sometimes denoted by , but that usually denotes the adele ring).
Algebraic closure
Every root of a polynomial equation whose coefficients are algebraic numbers is again algebraic. That can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically closed field containing the rationals and so it is called the algebraic closure of the rationals.
That the field of algebraic numbers is algebraically closed can be proven as follows: Let be a root of a polynomial with coefficients that are algebraic numbers , , ... . The field extension then has a finite degree with respect to . The simple extension then has a finite degree with respect to (since all powers of can be expressed by powers of up to ). Therefore, also has a finite degree with respect to . Since is a linear subspace of , it must also have a finite degree with respect to , so must be an algebraic number.
Related fields
Numbers defined by radicals
Any number that can be obtained from the integers using a finite number of additions, subtractions, multiplications, divisions, and taking (possibly complex) nth roots where n is a positive integer is algebraic. The converse, however, is not true: there are algebraic numbers that cannot be obtained in this manner. These numbers are roots of polynomials of degree 5 or higher, a result of Galois theory (see Quintic equations and the Abel–Ruffini theorem). For example, the equation:
x⁵ − x − 1 = 0
has a unique real root, x ≈ 1.1673, that cannot be expressed in terms of only radicals and arithmetic operations.
Closed-form number
Algebraic numbers are all numbers that can be defined explicitly or implicitly in terms of polynomials, starting from the rational numbers. One may generalize this to "closed-form numbers", which may be defined in various ways. Most broadly, all numbers that can be defined explicitly or implicitly in terms of polynomials, exponentials, and logarithms are called "elementary numbers", and these include the algebraic numbers, plus some transcendental numbers. Most narrowly, one may consider numbers explicitly defined in terms of polynomials, exponentials, and logarithms – this does not include all algebraic numbers, but does include some simple transcendental numbers such as e or ln 2.
Algebraic integers
An algebraic integer is an algebraic number that is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). Examples of algebraic integers are and Therefore, the algebraic integers constitute a proper superset of the integers, as the latter are the roots of monic polynomials for all . In this sense, algebraic integers are to algebraic numbers what integers are to rational numbers.
The sum, difference and product of algebraic integers are again algebraic integers, which means that the algebraic integers form a ring. The name algebraic integer comes from the fact that the only rational numbers that are algebraic integers are the integers, and because the algebraic integers in any number field are in many ways analogous to the integers. If is a number field, its ring of integers is the subring of algebraic integers in , and is frequently denoted as . These are the prototypical examples of Dedekind domains.
Special classes
Algebraic solution
Gaussian integer
Eisenstein integer
Quadratic irrational number
Fundamental unit
Root of unity
Gaussian period
Pisot–Vijayaraghavan number
Salem number
Notes
References | Algebraic number | [
"Mathematics"
] | 1,876 | [
"Algebraic numbers",
"Mathematical objects",
"Numbers"
] |
1,160 | https://en.wikipedia.org/wiki/Automorphism | In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Definition
In an algebraic structure such as a group, a ring, or vector space, an automorphism is simply a bijective homomorphism of an object into itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.)
More generally, for an object in some category, an automorphism is a morphism of the object to itself that has an inverse morphism; that is, a morphism is an automorphism if there is a morphism such that where is the identity morphism of . For algebraic structures, the two definitions are equivalent; in this case, the identity morphism is simply the identity function, and is often called the trivial automorphism.
Automorphism group
The automorphisms of an object form a group under composition of morphisms, which is called the automorphism group of . This results straightforwardly from the definition of a category.
The automorphism group of an object in a category is often denoted , or simply Aut(X) if the category is clear from context.
Examples
In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group.
In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). (The algebraic structure of all endomorphisms of V is itself an algebra over the same base field as V, whose invertible elements precisely consist of GL(V).)
A field automorphism is a bijective ring homomorphism from a field to itself.
The field of the rational numbers has no other automorphism than the identity, since an automorphism must fix the additive identity and the multiplicative identity ; the sum of a finite number of must be fixed, as well as the additive inverses of these sums (that is, the automorphism fixes all integers); finally, since every rational number is the quotient of two integers, all rational numbers must be fixed by any automorphism.
The field of the real numbers has no automorphisms other than the identity. Indeed, the rational numbers must be fixed by every automorphism, per above; an automorphism must preserve inequalities since is equivalent to and the latter property is preserved by every automorphism; finally every real number must be fixed since it is the least upper bound of a sequence of rational numbers.
The field of the complex numbers has a unique nontrivial automorphism that fixes the real numbers. It is the complex conjugation, which maps to The axiom of choice implies the existence of uncountably many automorphisms that do not fix the real numbers.
The study of automorphisms of algebraic field extensions is the starting point and the main object of Galois theory.
The automorphism group of the quaternions (H) as a ring are the inner automorphisms, by the Skolem–Noether theorem: maps of the form . This group is isomorphic to SO(3), the group of rotations in 3-dimensional space.
The automorphism group of the octonions (O) is the exceptional Lie group G2.
In graph theory an automorphism of a graph is a permutation of the nodes that preserves edges and non-edges. In particular, if two nodes are joined by an edge, so are their images under the permutation.
In geometry, an automorphism may be called a motion of the space. Specialized terminology is also used:
In metric geometry an automorphism is a self-isometry. The automorphism group is also called the isometry group.
In the category of Riemann surfaces, an automorphism is a biholomorphic map (also called a conformal map), from a surface to itself. For example, the automorphisms of the Riemann sphere are Möbius transformations.
An automorphism of a differentiable manifold M is a diffeomorphism from M to itself. The automorphism group is sometimes denoted Diff(M).
In topology, morphisms between topological spaces are called continuous maps, and an automorphism of a topological space is a homeomorphism of the space to itself, or self-homeomorphism (see homeomorphism group). In this example it is not sufficient for a morphism to be bijective to be an isomorphism.
History
One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing:
so that is a new fifth root of unity, connected with the former fifth root by relations of perfect reciprocity.
Inner and outer automorphisms
In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms.
In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation given by g ↦ aga−1 (or a−1ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G).
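As an illustrative sketch (with the permutations of {0, 1, 2} chosen as a convenient example group), the following Python code checks directly that conjugation by a fixed element is a bijective homomorphism:

```python
# Sketch: conjugation is an automorphism, checked by brute force on S3.
from itertools import permutations

def compose(p, q):          # (p * q)(i) = p(q(i)), permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
a = (1, 2, 0)               # the element we conjugate by

def conj(g):                # inner automorphism g -> a g a^{-1}
    return compose(compose(a, g), inverse(a))

# Homomorphism property: conj(g h) == conj(g) conj(h) for all g, h in S3.
assert all(conj(compose(g, h)) == compose(conj(g), conj(h))
           for g in S3 for h in S3)
# Bijectivity: conjugation permutes the group elements.
assert sorted(conj(g) for g in S3) == sorted(S3)
print("conjugation by", a, "is an automorphism of S3")
```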
The other automorphisms are called outer automorphisms. The quotient group is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms.
The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different.
See also
Antiautomorphism
Automorphism (in Sudoku puzzles)
Characteristic subgroup
Endomorphism ring
Frobenius automorphism
Morphism
Order automorphism (in order theory).
Relation-preserving automorphism
Fractional Fourier transform
References
External links
Automorphism at Encyclopaedia of Mathematics
Morphisms
Abstract algebra
Symmetry | Automorphism | [
"Physics",
"Mathematics"
] | 1,582 | [
"Functions and mappings",
"Mathematical structures",
"Algebra",
"Mathematical objects",
"Category theory",
"Mathematical relations",
"Geometry",
"Abstract algebra",
"Symmetry",
"Morphisms"
] |
1,164 | https://en.wikipedia.org/wiki/Artificial%20intelligence | Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."
Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. General intelligence—the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals. To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.
Artificial intelligence was founded as an academic discipline in 1956, and the field went through multiple cycles of optimism throughout its history, followed by periods of disappointment and loss of funding, known as AI winters. Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. This growth accelerated further after 2017 with the transformer architecture, and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
Goals
The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.
Reasoning and problem-solving
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem.
Knowledge representation
Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining "interesting" and actionable inferences from large databases), and other areas.
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge. Knowledge bases need to represent things such as objects, properties, categories, and relations between objects; situations, events, states, and time; causes and effects; knowledge about knowledge (what we know about what other people know); default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); and many other aspects and domains of knowledge.
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally). There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications.
Planning and decision-making
An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.
In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences. Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned.
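As a hedged sketch, the following value-iteration loop computes such a policy for a tiny, made-up Markov decision process (the states, transition probabilities and rewards are hypothetical):

```python
# Sketch: value iteration on a toy Markov decision process.
states = ["cool", "warm"]
actions = ["idle", "work"]
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    "cool": {"idle": [(1.0, "cool", 0.0)],
             "work": [(0.8, "cool", 2.0), (0.2, "warm", 2.0)]},
    "warm": {"idle": [(0.5, "cool", 0.0), (0.5, "warm", 0.0)],
             "work": [(1.0, "warm", 1.0)]},
}
gamma = 0.9                       # discount factor

V = {s: 0.0 for s in states}      # initial value estimates
for _ in range(100):              # repeated Bellman backups
    V = {s: max(sum(p * (r + gamma * V[s2])
                    for p, s2, r in transitions[s][a])
                for a in actions)
         for s in states}

policy = {s: max(actions,
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transitions[s][a]))
          for s in states}
print(V)
print(policy)
```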
Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.
Learning
Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning.
There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. Supervised learning requires labeling the training data with the expected answers, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).
In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good". Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning.
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.
Natural language processing
Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.
Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem). Margaret Masterman believed that it was meaning and not grammar that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure.
Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning), transformers (a deep learning architecture using an attention mechanism), and others. In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text, and by 2023, these models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications.
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.
The field includes speech recognition, image classification, facial recognition, object recognition, object tracking, and robotic perception.
Social intelligence
Affective computing is a field that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.
However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.
General intelligence
A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.
Techniques
AI research uses a wide variety of techniques to accomplish the goals above.
Search and optimization
AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI: state space search and local search.
State space search
State space search searches through a tree of possible states to try to find a goal state. For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.
Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. "Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal.
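For illustration, a minimal sketch of heuristic state-space search: A* on a small invented grid, with the Manhattan distance as the rule of thumb guiding the search.

```python
# Sketch: A* search on a made-up grid map.
import heapq

grid = ["....#",
        ".##.#",
        "....#",
        ".#...",
        "....."]          # '.' = free cell, '#' = obstacle
start, goal = (0, 0), (4, 4)

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == '.':
            yield (nr, nc)

def heuristic(cell):      # Manhattan distance to the goal
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

frontier = [(heuristic(start), 0, start)]
best_cost = {start: 0}
while frontier:
    f, g, cell = heapq.heappop(frontier)
    if cell == goal:
        print("path cost:", g)
        break
    for nxt in neighbours(cell):
        new_g = g + 1
        if new_g < best_cost.get(nxt, float("inf")):
            best_cost[nxt] = new_g
            heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, nxt))
```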
Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position.
Local search
Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally.
Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks, through the backpropagation algorithm.
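A minimal gradient-descent sketch (the data points and learning rate are invented): fitting a straight line by repeatedly adjusting two parameters to reduce the squared error.

```python
# Sketch: gradient descent minimising a mean-squared-error loss.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 2.9, 5.2, 7.1])      # roughly y = 2x + 1, with noise

w, b = 0.0, 0.0                          # parameters to learn
lr = 0.05                                # learning rate
for step in range(2000):
    pred = w * xs + b
    err = pred - ys
    grad_w = 2 * np.mean(err * xs)       # d(loss)/dw
    grad_b = 2 * np.mean(err)            # d(loss)/db
    w -= lr * grad_w                     # move against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))          # close to the underlying slope and intercept
```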
Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation.
Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).
Logic
Formal logic is used for reasoning and knowledge representation.
Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies") and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys").
Deductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises). Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules.
Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem. In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.
Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages.
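For illustration, a minimal forward-chaining sketch over Horn clauses in Python; the rules and facts form an invented toy knowledge base, and the loop applies rules until no new facts can be derived.

```python
# Sketch: forward chaining over Horn clauses (toy knowledge base).
rules = [
    ({"mammal", "lays_eggs"}, "monotreme"),     # body (premises) -> head
    ({"has_fur", "warm_blooded"}, "mammal"),
    ({"monotreme"}, "australian"),
]
facts = {"has_fur", "warm_blooded", "lays_eggs"}

changed = True
while changed:                      # keep applying rules until nothing new is derived
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print(sorted(facts))                # includes 'mammal', 'monotreme', 'australian'
```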
Fuzzy logic assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.
Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning. Other specialized versions of logic have been developed to describe many complex domains.
Probabilistic methods for uncertain reasoning
Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
Bayesian networks are a tool that can be used for reasoning (using the Bayesian inference algorithm), learning (using the expectation–maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks).
Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
Classifiers and statistical learning methods
The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.
There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm. K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.
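For illustration, a minimal k-nearest-neighbour classifier on a hypothetical two-dimensional data set:

```python
# Sketch: k-nearest-neighbour classification by majority vote.
from collections import Counter
import math

training = [((1.0, 1.1), "A"), ((1.2, 0.9), "A"), ((0.8, 1.0), "A"),
            ((3.0, 3.2), "B"), ((3.1, 2.9), "B"), ((2.8, 3.0), "B")]

def classify(point, k=3):
    nearest = sorted(training, key=lambda item: math.dist(point, item[0]))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

print(classify((1.1, 1.0)))   # 'A'
print(classify((2.9, 3.1)))   # 'B'
```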
The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability.
Neural networks are also used as classifiers.
Artificial neural networks
An artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function and once the weight crosses its specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least 2 hidden layers.
Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.
In feedforward neural networks the signal passes in only one direction. Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short term memory is the most successful network architecture for recurrent networks. Perceptrons use only a single layer of neurons; deep learning uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other—this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.
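As a hedged sketch, here is a tiny feedforward network with one hidden layer, trained by backpropagation on the XOR function; the layer sizes, learning rate and iteration count are arbitrary choices for the example, and results can vary with the random initialisation.

```python
# Sketch: a one-hidden-layer network trained with backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out;  b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h;    b1 -= 1.0 * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```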
Deep learning
Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.
Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, and others. The reason that deep learning performs so well in so many applications is not known as of 2023. The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.
GPT
Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow people to ask a question or request a task in simple text.
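As a drastically simplified illustration of next-token prediction (a word-level bigram counter rather than a transformer over subword tokens), the following sketch "pretrains" on a tiny invented corpus and then generates text by repeatedly sampling a likely next token:

```python
# Sketch: next-token prediction with a word-level bigram model (not a real GPT).
from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):      # "pretraining": count successors
    follows[cur][nxt] += 1

random.seed(1)
token = "the"
generated = [token]
for _ in range(8):                            # generation: repeatedly predict next token
    candidates = follows[token]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    token = random.choices(words, weights=weights)[0]
    generated.append(token)

print(" ".join(generated))
```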
Current models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA. Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text.
Hardware and software
In the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing units (CPUs) as the dominant means for large-scale (commercial and academic) machine learning models' training. Specialized programming languages such as Prolog were used in early AI research, but general-purpose programming languages like Python have become predominant.
The transistor density in integrated circuits has been observed to roughly double every 18 months—a trend known as Moore's law, named after the Intel co-founder Gordon Moore, who first identified it. Improvements in GPUs have been even faster, a trend sometimes called Huang's law, named after Nvidia co-founder and CEO Jensen Huang.
Applications
AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). The deployment of AI may be overseen by a Chief automation officer (CAO).
Health and medicine
The application of AI in medicine and medical research has the potential to improve patient care and quality of life. Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.
For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication. It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research. New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria. In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.
Sexuality
Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer prediction, AI-integrated sex toys (e.g., teledildonics), AI-generated sexual education content, and AI agents that simulate sexual and romantic partners (e.g., Replika). AI is also used for the production of non-consensual deepfake pornography, raising significant ethical and legal concerns.
AI technologies have also been used to attempt to identify online gender-based violence and online sexual grooming of minors.
Games
Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world. Other programs handle imperfect-information games, such as the poker-playing program Pluribus. DeepMind developed increasingly generalistic reinforcement learning models, such as with MuZero, which could be trained to play chess, Go, or Atari games. In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map. In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning. In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.
Mathematics
In mathematics, special forms of formal step-by-step reasoning are used. In contrast, LLMs such as GPT-4 Turbo, Gemini Ultra, Claude Opus, LLaMa-2 or Mistral Large are working with probabilistic models, which can produce wrong answers in the form of hallucinations. Therefore, they need not only a large database of mathematical problems to learn from but also methods such as supervised fine-tuning or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections. A 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data.
Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as Alpha Tensor, Alpha Geometry and Alpha Proof all from Google DeepMind, Llemma from eleuther or Julius.
When natural language is used to describe mathematical problems, converters transform such prompts into a formal language such as Lean to define mathematical tasks.
Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.
Finance
Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.
World Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."
Military
Various countries are deploying AI military applications. The main applications enhance command and control, communications, sensors, integration and interoperability. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams.
AI has been used in military operations in Iraq, Syria, Israel and Ukraine.
Generative AI
Agents
Artificial intelligence (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.
Other industry-specific tasks
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes. A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.
AI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions.
In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.
Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.
During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages.
Ethics
AI has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else". However, as the use of AI has become widespread, several unintended consequences and risks have been identified. Systems in production sometimes fail to factor ethics and bias into their AI training processes, especially when the underlying deep learning algorithms are inherently unexplainable.
Risks and harm
Privacy and copyright
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.
Sensitive user data collected may include online activity records, geolocation data, video, or audio. For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them. Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.
AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy. Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."
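One of the privacy-preserving techniques mentioned above, differential privacy, can be conveyed with a minimal sketch: calibrated noise drawn from a Laplace distribution is added to an aggregate statistic so that any single individual's contribution is masked. The query, the epsilon value and the toy data below are illustrative assumptions, not a production implementation.

```python
# Minimal sketch of the Laplace mechanism used in differential privacy.
# Adding calibrated noise to a count hides any single person's contribution.
import numpy as np

def private_count(values, epsilon=0.5):
    """Return a differentially private count of True entries.

    The sensitivity of a counting query is 1 (adding or removing one
    person changes the count by at most 1), so the noise scale is
    1 / epsilon.  Smaller epsilon means stronger privacy and more noise.
    """
    true_count = sum(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many users in a (hypothetical) dataset enabled a feature.
users_enabled = [True, False, True, True, False, True, False, True]
print("true count:   ", sum(users_enabled))
print("private count:", round(private_count(users_enabled), 2))
```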
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work". Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file. In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI. Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.
Dominance by tech giants
The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.
Power needs and environmental impacts
In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use. This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with the additional electric power usage equal to the electricity used by the whole of Japan.
Prodigious power consumption by AI is responsible for the growth of fossil fuel use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search uses about 10 times as much electrical energy as a Google search. The large firms are in haste to find power sources, from nuclear energy to geothermal to fusion. The tech firms argue that, in the long view, AI will eventually be kinder to the environment, but they need the energy now. According to technology firms, AI will make the power grid more efficient and "intelligent", assist the growth of nuclear power, and track overall carbon emissions.
A 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means. Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.
In 2024, the Wall Street Journal reported that big AI companies have begun negotiations with US nuclear power providers to supply electricity to data centers. In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for US$650 million. Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for data centers.
In September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes, including extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this would be the first ever US re-commissioning of a nuclear plant), the plant will produce over 835 megawatts of power, enough for 800,000 homes. The cost of re-opening and upgrading is estimated at US$1.6 billion and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act. The US government and the state of Michigan are investing almost US$2 billion to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon, who oversaw Exelon's spinoff of Constellation.
Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, after granting its last approval in September 2023, due to power supply shortages. Taiwan aims to phase out nuclear power by 2025. Singapore, by contrast, imposed a ban on the opening of new data centers in 2019 due to electric power constraints, but lifted the ban in 2022.
Although most nuclear plants in Japan have been shut down after the 2011 Fukushima nuclear accident, according to an October 2024 Bloomberg article in Japanese, cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new data center for generative AI. Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable power for AI.
On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center.
According to Commission Chairman Willie L. Phillips, the arrangement would place a burden on the electricity grid and raise significant concerns about shifting costs to households and other business sectors.
Misinformation
YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government. The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem.
In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda. AI pioneer Geoffrey Hinton expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.
Algorithmic bias and fairness
Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed. If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination. The field of fairness studies how to prevent harms from algorithmic biases.
On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called "sample size disparity". Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was equal, at exactly 61%, the errors were distributed differently for each race: the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.
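The kind of disparity ProPublica measured can be reproduced on toy data: false positive and false negative rates are computed per group, which makes it clear that they are distinct quantities that cannot, in general, all be equalized at once. The labels, predictions and group assignments below are fabricated purely for illustration.

```python
# Sketch: comparing error rates across groups, as in the COMPAS analysis.
# All data here is fabricated to illustrate the metrics, not real outcomes.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Return false positive and false negative rates for each group."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        fp = np.sum((yp == 1) & (yt == 0))
        fn = np.sum((yp == 0) & (yt == 1))
        negatives = np.sum(yt == 0)
        positives = np.sum(yt == 1)
        rates[g] = {
            "false_positive_rate": fp / max(negatives, 1),
            "false_negative_rate": fn / max(positives, 1),
        }
    return rates

# Hypothetical outcomes (1 = re-offended) and risk predictions (1 = high risk).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

for g, r in group_rates(y_true, y_pred, group).items():
    print(g, r)
```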
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender". Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."
Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive.
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.
There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that until AI and robotics systems can be demonstrated to be free of bias errors, they should be considered unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.
Lack of transparency
Many AI systems are so complex that their designers cannot explain how they reach their decisions, particularly deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs. Some popular explainability techniques nevertheless exist.
It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale. Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.
People who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.
DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.
Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output. LIME can locally approximate a model's outputs with a simpler, interpretable model. Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned. Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning. For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.
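The intuition behind attribution methods such as SHAP and LIME can be conveyed with a much cruder stand-in: occluding one input feature at a time and measuring how the model's output changes. The sketch below is not SHAP or LIME themselves, only an illustration of per-feature contribution; the "model" is an arbitrary linear scorer invented for the example.

```python
# Crude feature-attribution sketch: perturb one feature at a time and see
# how much the model's prediction moves.  This illustrates the idea behind
# explainability tools such as SHAP and LIME, but is not either method.
import numpy as np

def occlusion_attribution(predict, x, baseline=0.0):
    """Score each feature by the change in output when it is replaced."""
    base_output = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline      # "remove" feature i
        scores.append(base_output - predict(perturbed))
    return np.array(scores)

# A hypothetical "model": a fixed linear scorer standing in for a black box.
weights = np.array([2.0, -1.0, 0.5])
predict = lambda x: float(weights @ x)

x = np.array([1.0, 3.0, 2.0])
print(occlusion_attribution(predict, x))   # larger magnitude = more influence
```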
Bad actors and weaponized AI
Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states.
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially kill an innocent person. In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed. By 2015, over fifty countries were reported to be researching battlefield robots.
AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.
There are many other ways in which AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.
Technological unemployment
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk". The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.
From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.
Existential risk
It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, "spell the end of the human race". This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character. These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead." In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".
Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.
In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google." He notably mentioned risks of an AI takeover, and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.
In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".
Some other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier." While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors." Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests." Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction." In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine. However, after 2016, the study of current and future risks and possible solutions became a serious area of research.
Ethical machines and alignment
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.
The field of machine ethics is also called computational morality, and was founded at an AAAI symposium in 2005.
Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines.
Open source
Active organizations in the AI open-source community include Hugging Face, Google, EleutherAI and Meta. Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight, meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case. Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.
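Fine-tuning an open-weight model typically looks like the following sketch using the Hugging Face transformers and datasets libraries; the model name (distilgpt2), the tiny in-memory dataset and the hyperparameters are placeholders chosen for illustration, and a real run would need appropriate hardware and a licence check for the chosen weights.

```python
# Sketch of fine-tuning an open-weight language model with Hugging Face
# transformers.  Model name, data and hyperparameters are illustrative.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

model_name = "distilgpt2"  # a small open-weight model used as a stand-in
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny in-memory dataset standing in for domain-specific text.
texts = ["Example sentence from the target domain.",
         "Another snippet the organisation wants the model to learn from."]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    out["labels"] = out["input_ids"].copy()   # causal LM: predict the input
    return out

dataset = Dataset.from_dict({"text": texts}).map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-model",
                         num_train_epochs=1,
                         per_device_train_batch_size=2,
                         logging_steps=1)
Trainer(model=model, args=args, train_dataset=dataset).train()
```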
Frameworks
Artificial Intelligence projects can have their ethical permissibility tested while designing, developing, and implementing an AI system. An AI framework such as the Care and Act Framework, containing the SUM values and developed by the Alan Turing Institute, tests projects in four main areas:
Respect the dignity of individual people
Connect with other people sincerely, openly, and inclusively
Care for the wellbeing of everyone
Protect social values, justice, and the public interest
Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others; however, these principles are not without criticism, especially with regard to the people chosen to contribute to these frameworks.
Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.
In 2024, the UK AI Safety Institute released 'Inspect', a testing toolset for AI safety evaluations, freely available on GitHub under an MIT open-source licence and extensible with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.
Regulation
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. According to Stanford's AI Index, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics. In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".
In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence. In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.
History
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning. This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain". They developed several areas of research that would become part of AI, such as McCulloch and Pitts's design for "artificial neurons" in 1943, and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible.
The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees became the leaders of AI research in the 1960s. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.
Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field. In 1965 Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". In 1967 Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They had, however, underestimated the difficulty of the problem. In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill and ongoing pressure from the U.S. Congress to fund more productive projects. Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. The "AI winter", a period when obtaining funding for AI projects was difficult, followed.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.
Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition, and began to look into "sub-symbolic" approaches. Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive. Judea Pearl, Lotfi Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic. But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others. In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics). By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as the AI effect).
However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.
For many specific tasks, other methods were abandoned.
Deep learning's success was based on both hardware improvements (faster computers, graphics processing units, cloud computing) and access to large amounts of data (including curated datasets, such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI. The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.
In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat the world champion Go player Lee Sedol. The program was taught only the game's rules and developed a strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months. It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness. These programs, and others, inspired an aggressive AI boom, where large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in "AI". About 800,000 "AI"-related U.S. job openings existed in 2022. According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.
Philosophy
Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. Another major focus has been whether machines can be conscious, and the associated ethical implications. Many other topics in philosophy are relevant to AI, such as epistemology and free will. Rapid advancements have intensified public discussions on the philosophy and ethics of AI.
Defining artificial intelligence
Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?" He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we can not determine these things about other people but "it is usual to have a polite convention that everyone thinks."
Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure. However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'" AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world". Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems". The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals. These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine—and no other philosophical discussion is required, or may not even be possible.
Another definition has been adopted by Google, a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.
Some authors have suggested that in practice the definition of AI is vague and difficult to pin down, with contention as to whether classical algorithms should be categorised as AI, and with many companies during the early 2020s AI boom using the term as a marketing buzzword even if they did "not actually use AI in a material way".
Evaluating approaches to AI
No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.
Symbolic AI and its limits
Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult. Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.
Neat vs. scruffy
"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, but eventually was seen as irrelevant. Modern AI has elements of both.
Soft vs. hard computing
Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.
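Of the soft computing techniques listed, a genetic algorithm is perhaps the easiest to sketch: candidate solutions are mutated, recombined and selected over generations, accepting approximate answers rather than solving the problem exactly. The bit-string target and all parameters below are arbitrary illustrative choices.

```python
# Minimal genetic algorithm sketch: evolve a bit string toward a target.
# Soft computing accepts "good enough" answers found by approximate search.
import random

TARGET = [1] * 20                      # the (arbitrary) ideal solution

def fitness(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)   # selection pressure
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                    # keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print(generation, fitness(population[0]), population[0])
```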
Narrow vs. general AI
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals. General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.
Machine consciousness, sentience, and mind
It is an open question in the philosophy of mind whether a machine can have a mind, consciousness and mental states in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
Consciousness
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.
Computationalism and functionalism
Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle challenges this claim with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.
AI welfare and rights
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society.
In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.
Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.
Future
Superintelligence and the singularity
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".
However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.
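The S-shaped growth pattern referred to here is commonly modelled by a logistic function, given below in LaTeX notation as an illustrative formula rather than a claim about any particular technology:

$$ f(t) = \frac{L}{1 + e^{-k(t - t_0)}} $$

where L is the ceiling imposed by physical limits, k is the growth rate, and t_0 is the inflection point after which growth begins to slow.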
Transhumanism
Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger.
Edward Fredkin argues that "artificial intelligence is the next step in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence.
Decomputing
Arguments for decomputing have been raised by Dan McQuillan (Resisting AI: An Anti-fascist Approach to Artificial Intelligence, 2022), meaning an opposition to the sweeping application and expansion of artificial intelligence. Similar to degrowth, the approach criticizes AI as an outgrowth of the systemic issues and capitalist world we live in, arguing that a different future is possible, one in which distance between people is reduced rather than increased by AI intermediaries.
In fiction
Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction.
A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.
Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.
See also
Organoid intelligence – Use of brain cells and brain organoids for intelligent computing
Further reading
Autor, David H., "Why Are There Still So Many Jobs? The History and Future of Workplace Automation" (2015) 29(3) Journal of Economic Perspectives 3.
Boyle, James, The Line: AI and the Future of Personhood, MIT Press, 2024.
Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–198. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
Gertner, Jon. (2023) "Wikipedia's Moment of Truth: Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?" New York Times Magazine (July 18, 2023) online
Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
Halpern, Sue, "The Coming Tech Autocracy" (review of Verity Harding, AI Needs You: How We Can Change AI's Future and Save Our Own, Princeton University Press, 274 pp.; Gary Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us, MIT Press, 235 pp.; Daniela Rus and Gregory Mone, The Mind's Mirror: Risk and Reward in the Age of AI, Norton, 280 pp.; Madhumita Murgia, Code Dependent: Living in the Shadow of AI, Henry Holt, 311 pp.), The New York Review of Books, vol. LXXI, no. 17 (7 November 2024), pp. 44–46. "'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on governments driven by campaign finance contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data [are] used to train LLMs (large language model)s and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalties, and passing stricter product liability laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight.... [T]he Fordham law professor Chinmayi Sharma... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to do no harm?'" (p. 46.)
Hughes-Castleberry, Kenna, "A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.)
Immerwahr, Daniel, "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.)
Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press.
Leffer, Lauren, "The Risks of Trusting AI: We must avoid humanizing machine-learning models used in scientific research", Scientific American, vol. 330, no. 6 (June 2024), pp. 80–81.
Lepore, Jill, "The Chit-Chatbot: Is talking with a machine a conversation?", The New Yorker, 7 October 2024, pp. 12–16.
Marcus, Gary, "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems", Scientific American, vol. 327, no. 4 (October 2022), pp. 42–45.
Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, et al., "Human-level control through deep reinforcement learning", Nature, vol. 518 (2015), pp. 529–533. Introduced DQN, which produced human-level performance on some Atari games.
Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.
Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–144. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
Vaswani, Ashish, Noam Shazeer, Niki Parmar et al. "Attention is all you need." Advances in neural information processing systems 30 (2017). Seminal paper on transformers.
Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", London Review of Books, vol. 46, no. 19 (10 October 2024), pp. 29–32. "[AI chatbot] programs are made possible by new technologies but rely on the timeless human tendency to anthropomorphise." (p. 29.)
External links
Computational fields of study
Computational neuroscience
Cybernetics
Data science
Formal sciences
Intelligence by type | Artificial intelligence | [
"Technology"
] | 18,062 | [
"Computational fields of study",
"Computing and society"
] |
1,170 | https://en.wikipedia.org/wiki/Architect | An architect is a person who plans, designs, and oversees the construction of buildings. To practice architecture means to provide services in connection with the design of buildings and the space within the site surrounding the buildings that have human occupancy or use as their principal purpose. Etymologically, the term architect derives from the Latin architectus, which derives from the Greek arkhitéktōn (arkhi-, chief + téktōn, builder), i.e., chief builder.
The professional requirements for architects vary from location to location. An architect's decisions affect public safety, and thus the architect must undergo specialised training consisting of advanced education and a practicum (or internship) for practical experience to earn a license to practice architecture. Practical, technical, and academic requirements for becoming an architect vary by jurisdiction though the formal study of architecture in academic institutions has played a pivotal role in the development of the profession.
Origins
Throughout ancient and medieval history, most architectural design and construction was carried out by artisans—such as stone masons and carpenters—who rose to the role of master builders. Until modern times, there was no clear distinction between architect and engineer. In Europe, the titles architect and engineer were primarily geographical variations that referred to the same person, often used interchangeably.
"Architect" derives from Greek (, "master builder," "chief ).
It is suggested that various developments in technology and mathematics allowed the development of the professional 'gentleman' architect, separate from the hands-on craftsman. Paper was not used in Europe for drawing until the 15th century but became increasingly available after 1500. Pencils were used for drawing by 1600. The availability of both paper and pencils allowed pre-construction drawings to be made by professionals. Concurrently, the introduction of linear perspective and innovations such as the use of different projections to describe a three-dimensional building in two dimensions, together with an increased understanding of dimensional accuracy, helped building designers communicate their ideas. However, development was gradual and slow-going. Until the 18th century, buildings continued to be designed and set out by craftsmen, with the exception of high-status projects.
Architecture
In most developed countries only those qualified with an appropriate license, certification, or registration with a relevant body (often a government) may legally practice architecture. Such licensure usually requires a university degree, successful completion of exams, and a training period. Representation of oneself as an architect through the use of terms and titles is restricted to licensed individuals by law, although in general, derivatives such as architectural designer are not legally protected.
To practice architecture implies the ability to practice independently of supervision. The term building design professional (or design professional), by contrast, is a much broader term that includes professionals who practice independently under an alternate profession, such as engineering professionals, or those who assist in the practice of architecture under the supervision of a licensed architect, such as intern architects. In many places, independent, non-licensed individuals may perform design services outside of professional restrictions, such as the design of houses or other smaller structures.
Practice
In the architectural profession, technical and environmental knowledge, design, and construction management require an understanding of business as well as design. However, design is the driving force throughout the project and beyond. An architect accepts a commission from a client. The commission might involve preparing feasibility reports, building audits, and designing a building or several buildings, structures, and the spaces among them. The architect participates in developing the requirements the client wants in the building. Throughout the project (planning to occupancy), the architect coordinates a design team. Structural, mechanical, and electrical engineers are hired by the client or architect, who must ensure that the work is coordinated to construct the design.
Design role
The architect, once hired by a client, is responsible for creating a design concept that meets the requirements of that client and provides a facility suitable to the required use. The architect must meet with and ask questions to the client, to ascertain all the requirements (and nuances) of the planned project.
Often, the full brief is not clear in the beginning. It involves a degree of risk in the design undertaking. The architect may make early proposals to the client which may rework the terms of the brief. The "program" (or brief) is essential to producing a project that meets all the needs of the owner. This becomes a guide for the architect in creating the design concept.
Design proposal(s) are generally expected to be both imaginative and pragmatic. Much depends upon the time, place, finance, culture, and available crafts and technology in which the design takes place. The extent and nature of these expectations will vary. Foresight is a prerequisite when designing buildings as it is a very complex and demanding undertaking.
Any design concept during the early stage of its generation must take into account a great number of issues and variables, including the qualities of the space(s), the end-use and life-cycle of these proposed spaces, connections, relations, and aspects between spaces, including how they are put together, and the impact of proposals on the immediate and wider locality. The selection of appropriate materials and technology must be considered, tested, and reviewed at an early stage in the design to ensure there are no setbacks (such as higher-than-expected costs) which could occur later in the project.
The site and its surrounding environment, as well as the culture and history of the place, will also influence the design. The design must also balance increasing concerns with environmental sustainability. The architect may introduce (intentionally or not), aspects of mathematics and architecture, new or current architectural theory, or references to architectural history.
A key part of the design is that the architect often must consult with engineers, surveyors, and other specialists throughout the design, ensuring that aspects such as structural supports and air conditioning elements are coordinated. The control and planning of construction costs are also part of these consultations. Coordination of the different aspects involves a high degree of specialized communication, including advanced computer technology such as building information modeling (BIM), computer-aided design (CAD), and cloud-based technologies. Finally, at all times, the architect must report back to the client, who may have reservations or recommendations which might introduce further variables into the design.
Architects also deal with local and federal jurisdictions regarding regulations and building codes. The architect might need to comply with local planning and zoning laws such as required setbacks, height limitations, parking requirements, transparency requirements (windows), and land use. Some jurisdictions require adherence to design and historic preservation guidelines. Health and safety risks form a vital part of the current design, and in some jurisdictions, design reports and records are required to include ongoing considerations of materials and contaminants, waste management and recycling, traffic control, and fire safety.
Means of design
Previously, architects employed drawings to illustrate and generate design proposals. While conceptual sketches are still widely used by architects, computer technology has now become the industry standard. Furthermore, design may include the use of photos, collages, prints, linocuts, 3D scanning technology, and other media in design production.
Increasingly, computer software is shaping how architects work. BIM technology allows for the creation of a virtual building that serves as an information database for the sharing of design and building information throughout the life-cycle of the building's design, construction, and maintenance. Virtual reality (VR) presentations are becoming more common for visualizing structural designs and interior spaces from the point-of-view perspective.
Environmental role
Since modern buildings are known to release carbon into the atmosphere, increasing controls are being placed on buildings and associated technology to reduce emissions, increase energy efficiency, and make use of renewable energy sources. Renewable energy sources may be designed into the proposed building by local or national renewable energy providers. As a result, the architect is required to remain abreast of current regulations that are continually being updated. Some new developments exhibit extremely low energy use or passive solar building design.
However, the architect is also increasingly being required to provide initiatives in a wider environmental sense. Examples of this include making provisions for low-energy transport, natural daylighting instead of artificial lighting, natural ventilation instead of air conditioning, pollution, and waste management, use of recycled materials, and employment of materials which can be easily recycled.
Construction role
As the design becomes more advanced and detailed, specifications and detail designs are made of all the elements and components of the building. Techniques in the production of a building are continually advancing which places a demand on the architect to ensure that he or she remains up to date with these advances.
Depending on the client's needs and the jurisdiction's requirements, the spectrum of the architect's services during each construction stage may be extensive (detailed document preparation and construction review) or less involved (such as allowing a contractor to exercise considerable design-build functions).
Architects typically put projects to tender on behalf of their clients, advise them on the award of the project to a general contractor, facilitate and administer a contract of agreement, which is often between the client and the contractor. This contract is legally binding and covers a wide range of aspects, including the insurance and commitments of all stakeholders, the status of the design documents, provisions for the architect's access, and procedures for the control of the works as they proceed. Depending on the type of contract used, provisions for further sub-contract tenders may be required. The architect may require that some elements be covered by a warranty which specifies the expected life and other aspects of the material, product, or work.
In most jurisdictions prior notification to the relevant authority must be given before commencement of the project, giving the local authority notice to carry out independent inspections. The architect will then review and inspect the progress of the work in coordination with the local authority.
The architect will typically review contractor shop drawings and other submittals, prepare and issue site instructions, and provide Certificates for Payment to the contractor (see also Design-bid-build) which is based on the work done as well as any materials and other goods purchased or hired in the future. In the United Kingdom and other countries, a quantity surveyor is often part of the team to provide cost consulting. With large, complex projects, an independent construction manager is sometimes hired to assist in the design and management of the construction.
In many jurisdictions mandatory certification or assurance of the completed work or part of the work is required. This demand for certification entails a high degree of risk; therefore, regular inspections of the work as it progresses on site is required to ensure that the design is in compliance itself as well as following all relevant statutes and permissions.
Alternate practice and specialisations
Recent decades have seen the rise of specialisations within the profession. Many architects and architectural firms focus on certain project types (e.g. healthcare, retail, public housing, and event management), technological expertise, or project delivery methods. Some architects specialise in building code, building envelope, sustainable design, technical writing, historic preservation (US) or conservation (UK), and accessibility.
Many architects elect to move into real-estate (property) development, corporate facilities planning, project management, construction management, chief sustainability officer roles, interior design, city planning, user experience design, and design research.
Professional requirements
Although there are variations in each location, most of the world's architects are required to register with the appropriate jurisdiction. Architects are typically required to meet three common requirements: education, experience, and examination.
Basic educational requirements generally consist of a university degree in architecture. The experience requirement for degree candidates is usually satisfied by a practicum or internship (usually two to three years). Finally, a Registration Examination or a series of exams is required prior to licensure.
Professionals who engaged in the design and supervision of construction projects before the late 19th century were not necessarily trained in a separate architecture program in an academic setting. Instead, they often trained under established architects. Prior to modern times, there was no distinction between architects and engineers and the title used varied depending on geographical location. They often carried the title of master builder or surveyor after serving a number of years as an apprentice (such as Sir Christopher Wren). The formal study of architecture in academic institutions played a pivotal role in the development of the profession as a whole, serving as a focal point for advances in architectural technology and theory. The use of "Architect" or abbreviations such as "Ar." as a title attached to a person's name was regulated by law in some countries.
Fees
Architects' fee structures were typically based on a percentage of construction value, a rate per unit area of the proposed construction, hourly rates, or a fixed lump sum fee. Combinations of these structures were also common. Fixed fees were usually based on a project's allocated construction cost and could range between 4 and 12% of new construction cost for commercial and institutional projects, depending on the project's size and complexity. Residential projects ranged from 12 to 20%. Renovation projects typically commanded higher percentages, such as 15–20%.
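As a back-of-the-envelope illustration of the percentage-based structure described above, the short Python sketch below computes a fee band from a construction cost; the $2,000,000 figure and the function name are illustrative assumptions, not drawn from the article.

```python
def fee_range(construction_cost, low_pct, high_pct):
    """Fee band for a percentage-of-construction-cost structure."""
    return construction_cost * low_pct / 100.0, construction_cost * high_pct / 100.0

# A hypothetical $2,000,000 commercial project at the 4-12% band quoted above.
low, high = fee_range(2_000_000, 4, 12)
print(f"${low:,.0f} to ${high:,.0f}")  # $80,000 to $240,000
```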
Overall billings for architectural firms range widely, depending on their location and economic climate. Billings have traditionally been dependent on local economic conditions, but with rapid globalization, this is becoming less of a factor for large international firms. Salaries could also vary depending on experience, position within the firm (i.e. staff architect, partner, or shareholder, etc.), and the size and location of the firm.
Professional organizations
A number of national professional organizations exist to promote career and business development in architecture.
The International Union of Architects (UIA)
The American Institute of Architects (AIA) US
Royal Institute of British Architects (RIBA) UK
Architects Registration Board (ARB) UK
The Australian Institute of Architects (AIA) Australia
The South African Institute of Architects (SAIA) South Africa
Association of Consultant Architects (ACA) UK
Association of Licensed Architects (ALA) US
The Consejo Profesional de Arquitectura y Urbanismo (CPAU) Argentina
Indian Institute of Architects (IIA) & Council of Architecture (COA) India
The National Organization of Minority Architects (NOMA) US
Prizes and awards
A wide variety of prizes is awarded by national professional associations and other bodies, recognizing accomplished architects, their buildings, structures, and professional careers.
The most lucrative award an architect can receive is the Pritzker Prize, sometimes termed the "Nobel Prize for architecture". The inaugural Pritzker Prize winner was Philip Johnson, who was cited as having "50 years of imagination and vitality embodied in a myriad of museums, theatres, libraries, houses, gardens and corporate structures". The Pritzker Prize has been awarded for forty-two straight editions without interruption, and there are now 22 countries with at least one winning architect. Other prestigious architectural awards are the Royal Gold Medal, the AIA Gold Medal (US), AIA Gold Medal (Australia), and the Praemium Imperiale.
Architects in the UK who have made contributions to the profession through design excellence or architectural education or have in some other way advanced the profession might, until 1971, be elected Fellows of the Royal Institute of British Architects and can write FRIBA after their name if they feel so inclined. Those elected to chartered membership of the RIBA after 1971 may use the initials RIBA but cannot use the old ARIBA and FRIBA. An honorary fellow may use the initials Hon. FRIBA, and an international fellow may use the initials Int. FRIBA. Architects in the US who have made contributions to the profession through design excellence or architectural education or have in some other way advanced the profession are elected Fellows of the American Institute of Architects and can write FAIA after their name. Architects in Canada who have made outstanding contributions to the profession through contributions to research, scholarship, public service, or professional standing to the good of architecture in Canada or elsewhere may be recognized as Fellows of the Royal Architectural Institute of Canada and can write FRAIC after their name. In Hong Kong, those elected to chartered membership may use the initial HKIA, and those who have made a special contribution after nomination and election by the Hong Kong Institute of Architects (HKIA), may be elected as fellow members of HKIA and may use FHKIA after their name.
See also
References
Architecture occupations
Professional certification in architecture | Architect | [
"Engineering"
] | 3,297 | [
"Architects",
"Architecture occupations",
"Architecture"
] |
1,176 | https://en.wikipedia.org/wiki/Antisymmetric%20relation | In mathematics, a binary relation R on a set X is antisymmetric if there is no pair of distinct elements of X each of which is related by R to the other. More formally, R is antisymmetric precisely if for all a, b in X: if a R b and b R a, then a = b,
or equivalently, if a R b and a ≠ b, then it is not the case that b R a.
The definition of antisymmetry says nothing about whether a R a actually holds or not for any a. An antisymmetric relation R on a set X may be reflexive (that is, a R a for all a in X), irreflexive (that is, a R a for no a in X), or neither reflexive nor irreflexive. A relation is asymmetric if and only if it is both antisymmetric and irreflexive.
Examples
The divisibility relation on the natural numbers is an important example of an antisymmetric relation. In this context, antisymmetry means that the only way each of two numbers can be divisible by the other is if the two are, in fact, the same number; equivalently, if m and n are distinct and m is a factor of n, then n cannot be a factor of m. For example, 12 is divisible by 4, but 4 is not divisible by 12.
The usual order relation ≤ on the real numbers is antisymmetric: if for two real numbers x and y both inequalities x ≤ y and y ≤ x hold, then x and y must be equal. Similarly, the subset order ⊆ on the subsets of any given set is antisymmetric: given two sets A and B, if every element in A also is in B and every element in B is also in A, then A and B must contain all the same elements and therefore be equal: A ⊆ B and B ⊆ A implies A = B.
A real-life example of a relation that is typically antisymmetric is "paid the restaurant bill of" (understood as restricted to a given occasion). Typically, some people pay their own bills, while others pay for their spouses or friends. As long as no two people pay each other's bills, the relation is antisymmetric.
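To make the definition concrete, here is a minimal Python sketch that checks antisymmetry of a finite relation given as a set of ordered pairs; the function name, the divisibility example, and the "within distance 1" counterexample are illustrative choices, not part of the article.

```python
def is_antisymmetric(relation):
    """Check antisymmetry of a finite binary relation given as (a, b) pairs."""
    pairs = set(relation)
    # Antisymmetric: whenever both (a, b) and (b, a) are present, a must equal b.
    return all(a == b or (b, a) not in pairs for (a, b) in pairs)


# Divisibility on {1, ..., 12} is antisymmetric: m | n and n | m force m == n.
divides = {(m, n) for m in range(1, 13) for n in range(1, 13) if n % m == 0}
print(is_antisymmetric(divides))  # True

# "Within distance 1" on {1, 2, 3} is not antisymmetric: 1 R 2 and 2 R 1 but 1 != 2.
near = {(a, b) for a in range(1, 4) for b in range(1, 4) if abs(a - b) <= 1}
print(is_antisymmetric(near))  # False
```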
Properties
Partial and total orders are antisymmetric by definition. A relation can be both symmetric and antisymmetric (in this case, it must be coreflexive), and there are relations which are neither symmetric nor antisymmetric (for example, the "preys on" relation on biological species).
Antisymmetry is different from asymmetry: a relation is asymmetric if and only if it is antisymmetric and irreflexive.
See also
Symmetry in mathematics
References
nLab antisymmetric relation
Properties of binary relations | Antisymmetric relation | [
"Mathematics"
] | 512 | [
"Properties of binary relations",
"Mathematical relations",
"Binary relations"
] |
1,181 | https://en.wikipedia.org/wiki/Astrometry | Astrometry is a branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. It provides the kinematics and physical origin of the Solar System and this galaxy, the Milky Way.
History
The history of astrometry is linked to the history of star catalogues, which gave astronomers reference points for objects in the sky so they could track their movements. This can be dated back to the ancient Greek astronomer Hipparchus, who around 190 BC used the catalogue of his predecessors Timocharis and Aristillus to discover Earth's precession. In doing so, he also developed the brightness scale still in use today. Hipparchus compiled a catalogue with at least 850 stars and their positions. Hipparchus's successor, Ptolemy, included a catalogue of 1,022 stars in his work the Almagest, giving their location, coordinates, and brightness.
In the 10th century, the Iranian astronomer Abd al-Rahman al-Sufi carried out observations on the stars and described their positions, magnitudes and star color; furthermore, he provided drawings for each constellation, which are depicted in his Book of Fixed Stars. Egyptian mathematician Ibn Yunus observed more than 10,000 entries for the Sun's position for many years using a large astrolabe with a diameter of nearly 1.4 metres. His observations on eclipses were still used centuries later in Canadian–American astronomer Simon Newcomb's investigations on the motion of the Moon, while his other observations of the motions of the planets Jupiter and Saturn inspired French scholar Laplace's Obliquity of the Ecliptic and Inequalities of Jupiter and Saturn. In the 15th century, the Timurid astronomer Ulugh Beg compiled the Zij-i-Sultani, in which he catalogued 1,019 stars. Like the earlier catalogs of Hipparchus and Ptolemy, Ulugh Beg's catalogue is estimated to have been precise to within approximately 20 minutes of arc.
In the 16th century, Danish astronomer Tycho Brahe used improved instruments, including large mural instruments, to measure star positions more accurately than previously, with a precision of 15–35 arcsec. Ottoman scholar Taqi al-Din measured the right ascension of the stars at the Constantinople Observatory of Taqi ad-Din using the "observational clock" he invented. When telescopes became commonplace, setting circles sped up measurements.
English astronomer James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of the Earth's axis. His cataloguing of 3222 stars was refined in 1807 by German astronomer Friedrich Bessel, the father of modern astrometry. He made the first measurement of stellar parallax: 0.3 arcsec for the binary star 61 Cygni. In 1872, British astronomer William Huggins used spectroscopy to measure the radial velocity of several prominent stars, including Sirius.
Being very difficult to measure, only about 60 stellar parallaxes had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. Started in the late 19th century, the project Carte du Ciel to improve star mapping could not be finished but made photography a common technique for astrometry. In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond. This technology made astrometry less expensive, opening the field to an amateur audience.
In 1989, the European Space Agency's Hipparcos satellite took astrometry into orbit, where it could be less affected by mechanical forces of the Earth and optical distortions from its atmosphere. Operated from 1989 to 1993, Hipparcos measured large and small angles on the sky with much greater precision than any previous optical telescopes. During its 4-year run, the positions, parallaxes, and proper motions of 118,218 stars were determined with an unprecedented degree of accuracy. A new "Tycho catalog" drew together a database of 1,058,332 stars to within 20-30 mas (milliarcseconds). Additional catalogues were compiled for the 23,882 double and multiple stars and 11,597 variable stars also analyzed during the Hipparcos mission.
In 2013, the Gaia satellite was launched and improved the accuracy of Hipparcos.
The precision was improved by a factor of 100 and enabled the mapping of a billion stars.
Today, the catalogue most often used is USNO-B1.0, an all-sky catalogue that tracks proper motions, positions, magnitudes and other characteristics for over one billion stellar objects. During the past 50 years, 7,435 Schmidt camera plates were used to complete several sky surveys that make the data in USNO-B1.0 accurate to within 0.2 arcsec.
Applications
Apart from the fundamental function of providing astronomers with a reference frame to report their observations in, astrometry is also fundamental for fields like celestial mechanics, stellar dynamics and galactic astronomy. In observational astronomy, astrometric techniques help identify stellar objects by their unique motions. It is instrumental for keeping time, in that UTC is essentially the atomic time synchronized to Earth's rotation by means of exact astronomical observations. Astrometry is an important step in the cosmic distance ladder because it establishes parallax distance estimates for stars in the Milky Way.
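As a rough illustration of how a parallax angle becomes a distance estimate on that ladder, the sketch below applies the small-angle relation d in parsecs ≈ 1 / p in arcseconds; the helper names are assumptions of ours, and the 0.3 arcsec figure is Bessel's value for 61 Cygni quoted earlier.

```python
def parallax_to_parsecs(parallax_arcsec):
    """Small-angle parallax distance: d [parsec] = 1 / p [arcsec]."""
    return 1.0 / parallax_arcsec

def parsecs_to_light_years(d_pc):
    # One parsec is roughly 3.26 light-years.
    return d_pc * 3.2616

# Bessel's 0.3 arcsec for 61 Cygni gives about 3.3 pc, i.e. roughly 11 light-years.
d = parallax_to_parsecs(0.3)
print(round(d, 2), "pc,", round(parsecs_to_light_years(d), 1), "light-years")
```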
Astrometry has also been used to support claims of extrasolar planet detection by measuring the displacement the proposed planets cause in their parent star's apparent position on the sky, due to their mutual orbit around the center of mass of the system. Astrometry is more accurate in space missions that are not affected by the distorting effects of the Earth's atmosphere. NASA's planned Space Interferometry Mission (SIM PlanetQuest) (now cancelled) was to utilize astrometric techniques to detect terrestrial planets orbiting 200 or so of the nearest solar-type stars. The European Space Agency's Gaia Mission, launched in 2013, applies astrometric techniques in its stellar census. In addition to the detection of exoplanets, it can also be used to determine their mass.
Astrometric measurements are used by astrophysicists to constrain certain models in celestial mechanics. By measuring the velocities of pulsars, it is possible to put a limit on the asymmetry of supernova explosions. Also, astrometric results are used to determine the distribution of dark matter in the galaxy.
Astronomers use astrometric techniques for the tracking of near-Earth objects. Astrometry is responsible for the detection of many record-breaking Solar System objects. To find such objects astrometrically, astronomers use telescopes to survey the sky and large-area cameras to take pictures at various determined intervals. By studying these images, they can detect Solar System objects by their movements relative to the background stars, which remain fixed. Once a movement per unit time is observed, astronomers compensate for the parallax caused by Earth's motion during this time and the heliocentric distance to this object is calculated. Using this distance and other photographs, more information about the object, including its orbital elements, can be obtained. Asteroid impact avoidance is among the purposes.
Quaoar and Sedna are two trans-Neptunian dwarf planets discovered in this way by Michael E. Brown and others at Caltech using the Palomar Observatory's Samuel Oschin telescope and the Palomar-Quest large-area CCD camera. The ability of astronomers to track the positions and movements of such celestial bodies is crucial to the understanding of the Solar System and its interrelated past, present, and future with others in the Universe.
Statistics
A fundamental aspect of astrometry is error correction. Various factors introduce errors into the measurement of stellar positions, including atmospheric conditions, imperfections in the instruments and errors by the observer or the measuring instruments. Many of these errors can be reduced by various techniques, such as through instrument improvements and compensations to the data. The results are then analyzed using statistical methods to compute data estimates and error ranges.
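As one hedged illustration of the kind of statistical combination this involves, the sketch below computes a simple inverse-variance weighted mean of repeated position measurements; the function and the sample numbers are illustrative assumptions, not taken from any particular survey pipeline.

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of repeated measurements and its standard error."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    sigma_mean = (1.0 / total) ** 0.5
    return mean, sigma_mean

# Three measurements of the same positional offset, in milliarcseconds,
# each with its own 1-sigma uncertainty.
mean, err = weighted_mean([10.2, 9.8, 10.5], [0.3, 0.5, 0.4])
print(f"{mean:.2f} +/- {err:.2f} mas")
```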
Computer programs
XParallax viu (Free application for Windows)
Astrometrica (Application for Windows)
Astrometry.net (Online blind astrometry)
See also
References
Further reading
External links
MPC Guide to Minor Body Astrometry
Astrometry Department of the U.S. Naval Observatory
USNO Astrometric Catalog and related Products
SuperNOVAS high-precision astrometry library for C/C++.
Planet-Like Body Discovered at Fringes of Our Solar System (2004-03-15)
Mike Brown's Caltech Home Page
Scientific Paper describing Sedna's discovery
The Hipparcos Space Astrometry Mission — on ESA
Astronomical sub-disciplines
Astrological aspects
Measurement | Astrometry | [
"Astronomy"
] | 1,821 | [
"Astrometry",
"Astronomical sub-disciplines"
] |
1,187 | https://en.wikipedia.org/wiki/Alloy | An alloy is a mixture of chemical elements of which in most cases at least one is a metallic element, although it is also sometimes used for mixtures of elements; herein only metallic alloys are described. Most alloys are metallic and show good electrical conductivity, ductility, opacity, and luster, and may have properties that differ from those of the pure elements such as increased strength or hardness. In some cases, an alloy may reduce the overall cost of the material while preserving important properties. In other cases, the mixture imparts synergistic properties such as corrosion resistance or mechanical strength.
In an alloy, the atoms are joined by metallic bonding rather than by covalent bonds typically found in chemical compounds. The alloy constituents are usually measured by mass percentage for practical applications, and in atomic fraction for basic science studies. Alloys are usually classified as substitutional or interstitial alloys, depending on the atomic arrangement that forms the alloy. They can be further classified as homogeneous (consisting of a single phase), or heterogeneous (consisting of two or more phases) or intermetallic. An alloy may be a solid solution of metal elements (a single phase, where all metallic grains (crystals) are of the same composition) or a mixture of metallic phases (two or more solutions, forming a microstructure of different crystals within the metal).
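The two conventions give different numbers for the same alloy; the Python sketch below shows the standard conversion from weight percent to atomic percent. The nominal 60/40 tin-lead solder composition is only an illustrative choice, not drawn from the article.

```python
def weight_to_atomic_percent(weight_percents, molar_masses):
    """Convert a composition in weight percent to atomic percent.

    Both arguments are dicts keyed by element symbol; molar masses are in g/mol.
    """
    moles = {el: wt / molar_masses[el] for el, wt in weight_percents.items()}
    total = sum(moles.values())
    return {el: 100.0 * n / total for el, n in moles.items()}

# A nominal 60/40 tin-lead solder by weight (Sn ~118.7 g/mol, Pb ~207.2 g/mol)
# works out to roughly 72 at.% Sn and 28 at.% Pb.
print(weight_to_atomic_percent({"Sn": 60.0, "Pb": 40.0}, {"Sn": 118.7, "Pb": 207.2}))
```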
Examples of alloys include red gold (gold and copper), white gold (gold and silver), sterling silver (silver and copper), steel or silicon steel (iron with non-metallic carbon or silicon respectively), solder, brass, pewter, duralumin, bronze, and amalgams.
Alloys are used in a wide variety of applications, from the steel alloys, used in everything from buildings to automobiles to surgical tools, to exotic titanium alloys used in the aerospace industry, to beryllium-copper alloys for non-sparking tools.
Characteristics
An alloy is a mixture of chemical elements, which forms an impure substance (admixture) that retains the characteristics of a metal. An alloy is distinct from an impure metal in that, with an alloy, the added elements are well controlled to produce desirable properties, while impure metals such as wrought iron are less controlled, but are often considered useful. Alloys are made by mixing two or more elements, at least one of which is a metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name of the alloy. The other constituents may or may not be metals but, when mixed with the molten base, they will be soluble and dissolve into the mixture.
The mechanical properties of alloys will often be quite different from those of its individual constituents. A metal that is normally very soft (malleable), such as aluminium, can be altered by alloying it with another soft metal, such as copper. Although both metals are very soft and ductile, the resulting aluminium alloy will have much greater strength. Adding a small amount of non-metallic carbon to iron trades its great ductility for the greater strength of an alloy called steel. Due to its very-high strength, but still substantial toughness, and its ability to be greatly altered by heat treatment, steel is one of the most useful and common alloys in modern use. By adding chromium to steel, its resistance to corrosion can be enhanced, creating stainless steel, while adding silicon will alter its electrical characteristics, producing silicon steel.
Like oil and water, a molten metal may not always mix with another element. For example, pure iron is almost completely insoluble with copper. Even when the constituents are soluble, each will usually have a saturation point, beyond which no more of the constituent can be added. Iron, for example, can hold a maximum of 6.67% carbon. Although the elements of an alloy usually must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals, called a phase. If as the mixture cools the constituents become insoluble, they may separate to form two or more different types of crystals, creating a heterogeneous microstructure of different phases, some with more of one constituent than the other. However, in other alloys, the insoluble elements may not separate until after crystallization occurs. If cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated with the secondary constituents. As time passes, the atoms of these supersaturated alloys can separate from the crystal lattice, becoming more stable, and forming a second phase that serves to reinforce the crystals internally.
Some alloys, such as electrum—an alloy of silver and gold—occur naturally. Meteorites are sometimes made of naturally occurring alloys of iron and nickel, but are not native to the Earth. One of the first alloys made by humans was bronze, which is a mixture of the metals tin and copper. Bronze was an extremely useful alloy to the ancients, because it is much stronger and harder than either of its components. Steel was another common alloy. However, in ancient times, it could only be created as an accidental byproduct from the heating of iron ore in fires (smelting) during the manufacture of iron. Other ancient alloys include pewter, brass and pig iron. In the modern age, steel can be created in many forms. Carbon steel can be made by varying only the carbon content, producing soft alloys like mild steel or hard alloys like spring steel. Alloy steels can be made by adding other elements, such as chromium, molybdenum, vanadium or nickel, resulting in alloys such as high-speed steel or tool steel. Small amounts of manganese are usually alloyed with most modern steels because of its ability to remove unwanted impurities, like phosphorus, sulfur and oxygen, which can have detrimental effects on the alloy. However, most alloys were not created until the 1900s, such as various aluminium, titanium, nickel, and magnesium alloys. Some modern superalloys, such as incoloy, inconel, and hastelloy, may consist of a multitude of different elements.
An alloy is technically an impure metal, but when referring to alloys, the term impurities usually denotes undesirable elements. Such impurities are introduced from the base metals and alloying elements, but are removed during processing. For instance, sulfur is a common impurity in steel. Sulfur combines readily with iron to form iron sulfide, which is very brittle, creating weak spots in the steel. Lithium, sodium and calcium are common impurities in aluminium alloys, which can have adverse effects on the structural integrity of castings. Conversely, otherwise pure-metals that contain unwanted impurities are often called "impure metals" and are not usually referred to as alloys. Oxygen, present in the air, readily combines with most metals to form metal oxides; especially at higher temperatures encountered during alloying. Great care is often taken during the alloying process to remove excess impurities, using fluxes, chemical additives, or other methods of extractive metallurgy.
Theory
Alloying a metal is done by combining it with one or more other elements. The most common and oldest alloying process is performed by heating the base metal beyond its melting point and then dissolving the solutes into the molten liquid, which may be possible even if the melting point of the solute is far greater than that of the base. For example, in its liquid state, titanium is a very strong solvent capable of dissolving most metals and elements. In addition, it readily absorbs gases like oxygen and burns in the presence of nitrogen. This increases the chance of contamination from any contacting surface, and so must be melted in vacuum induction-heating and special, water-cooled, copper crucibles. However, some metals and solutes, such as iron and carbon, have very high melting-points and were impossible for ancient people to melt. Thus, alloying (in particular, interstitial alloying) may also be performed with one or more constituents in a gaseous state, such as found in a blast furnace to make pig iron (liquid-gas), nitriding, carbonitriding or other forms of case hardening (solid-gas), or the cementation process used to make blister steel (solid-gas). It may also be done with one, more, or all of the constituents in the solid state, such as found in ancient methods of pattern welding (solid-solid), shear steel (solid-solid), or crucible steel production (solid-liquid), mixing the elements via solid-state diffusion.
By adding another element to a metal, differences in the size of the atoms create internal stresses in the lattice of the metallic crystals; stresses that often enhance its properties. For example, the combination of carbon with iron produces steel, which is stronger than iron, its primary element. The electrical and thermal conductivity of alloys is usually lower than that of the pure metals. The physical properties, such as density, reactivity, Young's modulus of an alloy may not differ greatly from those of its base element, but engineering properties such as tensile strength, ductility, and shear strength may be substantially different from those of the constituent materials. This is sometimes a result of the sizes of the atoms in the alloy, because larger atoms exert a compressive force on neighboring atoms, and smaller atoms exert a tensile force on their neighbors, helping the alloy resist deformation. Sometimes alloys may exhibit marked differences in behavior even when small amounts of one element are present. For example, impurities in semiconducting ferromagnetic alloys lead to different properties, as first predicted by White, Hogan, Suhl, Tian Abrie and Nakamura.
Unlike pure metals, most alloys do not have a single melting point, but a melting range during which the material is a mixture of solid and liquid phases (a slush). The temperature at which melting begins is called the solidus, and the temperature when melting is just complete is called the liquidus. For many alloys there is a particular alloy proportion (in some cases more than one), called either a eutectic mixture or a peritectic composition, which gives the alloy a unique and low melting point, and no liquid/solid slush transition.
Heat treatment
Alloying elements are added to a base metal, to induce hardness, toughness, ductility, or other desired properties. Most metals and alloys can be work hardened by creating defects in their crystal structure. These defects are created during plastic deformation by hammering, bending, extruding, et cetera, and are permanent unless the metal is recrystallized. Otherwise, some alloys can also have their properties altered by heat treatment. Nearly all metals can be softened by annealing, which recrystallizes the alloy and repairs the defects, but not as many can be hardened by controlled heating and cooling. Many alloys of aluminium, copper, magnesium, titanium, and nickel can be strengthened to some degree by some method of heat treatment, but few respond to this to the same degree as does steel.
The base metal iron of the iron-carbon alloy known as steel undergoes a change in the arrangement (allotropy) of the atoms of its crystal matrix at a certain temperature, which depends on carbon content. This allows the smaller carbon atoms to enter the interstices of the iron crystal. When this diffusion happens, the carbon atoms are said to be in solution in the iron, forming a particular single, homogeneous, crystalline phase called austenite. If the steel is cooled slowly, the carbon can diffuse out of the iron and it will gradually revert to its low temperature allotrope. During slow cooling, the carbon atoms will no longer be as soluble with the iron, and will be forced to precipitate out of solution, nucleating into a more concentrated form of iron carbide (Fe3C) in the spaces between the pure iron crystals. The steel then becomes heterogeneous, as it is formed of two phases, the iron-carbon phase called cementite (or carbide), and pure iron ferrite. Such a heat treatment produces a steel that is rather soft. If the steel is cooled quickly, however, the carbon atoms will not have time to diffuse and precipitate out as carbide, but will be trapped within the iron crystals. When rapidly cooled, a diffusionless (martensite) transformation occurs, in which the carbon atoms become trapped in solution. This causes the iron crystals to deform as the crystal structure tries to change to its low temperature state, leaving those crystals very hard but much less ductile (more brittle).
While the high strength of steel results when diffusion and precipitation is prevented (forming martensite), most heat-treatable alloys are precipitation hardening alloys, that depend on the diffusion of alloying elements to achieve their strength. When heated to form a solution and then cooled quickly, these alloys become much softer than normal, during the diffusionless transformation, but then harden as they age. The solutes in these alloys will precipitate over time, forming intermetallic phases, which are difficult to discern from the base metal. Unlike steel, in which the solid solution separates into different crystal phases (carbide and ferrite), precipitation hardening alloys form different phases within the same crystal. These intermetallic alloys appear homogeneous in crystal structure, but tend to behave heterogeneously, becoming hard and somewhat brittle.
In 1906, precipitation hardening alloys were discovered by Alfred Wilm. Precipitation hardening alloys, such as certain alloys of aluminium, titanium, and copper, are heat-treatable alloys that soften when quenched (cooled quickly), and then harden over time. Wilm had been searching for a way to harden aluminium alloys for use in machine-gun cartridge cases. Knowing that aluminium-copper alloys were heat-treatable to some degree, Wilm tried quenching a ternary alloy of aluminium, copper, and the addition of magnesium, but was initially disappointed with the results. However, when Wilm retested it the next day he discovered that the alloy increased in hardness when left to age at room temperature, and far exceeded his expectations. Although an explanation for the phenomenon was not provided until 1919, duralumin was one of the first "age hardening" alloys used, becoming the primary building material for the first Zeppelins, and was soon followed by many others. Because they often exhibit a combination of high strength and low weight, these alloys became widely used in many forms of industry, including the construction of modern aircraft.
Mechanisms
When a molten metal is mixed with another substance, there are two mechanisms that can cause an alloy to form, called atom exchange and the interstitial mechanism. The relative size of each element in the mix plays a primary role in determining which mechanism will occur. When the atoms are relatively similar in size, the atom exchange method usually happens, where some of the atoms composing the metallic crystals are substituted with atoms of the other constituent. This is called a substitutional alloy. Examples of substitutional alloys include bronze and brass, in which some of the copper atoms are substituted with either tin or zinc atoms respectively.
In the case of the interstitial mechanism, one atom is usually much smaller than the other and can not successfully substitute for the other type of atom in the crystals of the base metal. Instead, the smaller atoms become trapped in the interstitial sites between the atoms of the crystal matrix. This is referred to as an interstitial alloy. Steel is an example of an interstitial alloy, because the very small carbon atoms fit into interstices of the iron matrix.
Stainless steel is an example of a combination of interstitial and substitutional alloys, because the carbon atoms fit into the interstices, but some of the iron atoms are substituted by nickel and chromium atoms.
History and examples
Meteoric iron
The use of alloys by humans started with the use of meteoric iron, a naturally occurring alloy of nickel and iron. It is the main constituent of iron meteorites. As no metallurgic processes were used to separate iron from nickel, the alloy was used as it was. Meteoric iron could be forged from a red heat to make objects such as tools, weapons, and nails. In many cultures it was shaped by cold hammering into knives and arrowheads. They were often used as anvils. Meteoric iron was very rare and valuable, and difficult for ancient people to work.
Bronze and brass
Iron is usually found as iron ore on Earth, except for one deposit of native iron in Greenland, which was used by the Inuit. Native copper, however, was found worldwide, along with silver, gold, and platinum, which were also used to make tools, jewelry, and other objects since Neolithic times. Copper was the hardest of these metals, and the most widely distributed. It became one of the most important metals to the ancients. Around 10,000 years ago in the highlands of Anatolia (Turkey), humans learned to smelt metals such as copper and tin from ore. Around 2500 BC, people began alloying the two metals to form bronze, which was much harder than its ingredients. Tin was rare, however, being found mostly in Great Britain. In the Middle East, people began alloying copper with zinc to form brass. Ancient civilizations took into account the mixture and the various properties it produced, such as hardness, toughness and melting point, under various conditions of temperature and work hardening, developing much of the information contained in modern alloy phase diagrams. For example, arrowheads from the Chinese Qin dynasty (around 200 BC) were often constructed with a hard bronze-head, but a softer bronze-tang, combining the alloys to prevent both dulling and breaking during use.
Amalgams
Mercury has been smelted from cinnabar for thousands of years. Mercury dissolves many metals, such as gold, silver, and tin, to form amalgams (an alloy in a soft paste or liquid form at ambient temperature). Amalgams have been used since 200 BC in China for gilding objects such as armor and mirrors with precious metals. The ancient Romans often used mercury-tin amalgams for gilding their armor. The amalgam was applied as a paste and then heated until the mercury vaporized, leaving the gold, silver, or tin behind. Mercury was often used in mining, to extract precious metals like gold and silver from their ores.
Precious metals
Many ancient civilizations alloyed metals for purely aesthetic purposes. In ancient Egypt and Mycenae, gold was often alloyed with copper to produce red-gold, or iron to produce a bright burgundy-gold. Gold was often found alloyed with silver or other metals to produce various types of colored gold. These metals were also used to strengthen each other, for more practical purposes. Copper was often added to silver to make sterling silver, increasing its strength for use in dishes, silverware, and other practical items. Quite often, precious metals were alloyed with less valuable substances as a means to deceive buyers. Around 250 BC, Archimedes was commissioned by the King of Syracuse to find a way to check the purity of the gold in a crown, leading to the famous bath-house shouting of "Eureka!" upon the discovery of Archimedes' principle.
Pewter
The term pewter covers a variety of alloys consisting primarily of tin. As a pure metal, tin is much too soft to use for most practical purposes. However, during the Bronze Age, tin was a rare metal in many parts of Europe and the Mediterranean, so it was often valued higher than gold. To make jewellery, cutlery, or other objects from tin, workers usually alloyed it with other metals to increase strength and hardness. These metals were typically lead, antimony, bismuth or copper. These solutes were sometimes added individually in varying amounts, or added together, making a wide variety of objects, ranging from practical items such as dishes, surgical tools, candlesticks or funnels, to decorative items like earrings and hair clips.
The earliest examples of pewter come from ancient Egypt, around 1450 BC. The use of pewter was widespread across Europe, from France to Norway and Britain (where most of the ancient tin was mined) to the Near East. The alloy was also used in China and the Far East, arriving in Japan around 800 AD, where it was used for making objects like ceremonial vessels, tea canisters, or chalices used in shinto shrines.
Iron
The first known smelting of iron began in Anatolia, around 1800 BC. Called the bloomery process, it produced very soft but ductile wrought iron. By 800 BC, iron-making technology had spread to Europe, arriving in Japan around 700 AD. Pig iron, a very hard but brittle alloy of iron and carbon, was being produced in China as early as 1200 BC, but did not arrive in Europe until the Middle Ages. Pig iron has a lower melting point than iron, and was used for making cast-iron. However, these metals found little practical use until the introduction of crucible steel around 300 BC. These steels were of poor quality, and the introduction of pattern welding, around the 1st century AD, sought to balance the extreme properties of the alloys by laminating them, to create a tougher metal. Around 700 AD, the Japanese began folding bloomery-steel and cast-iron in alternating layers to increase the strength of their swords, using clay fluxes to remove slag and impurities. This method of Japanese swordsmithing produced one of the purest steel-alloys of the ancient world.
While the use of iron started to become more widespread around 1200 BC, mainly because of interruptions in the trade routes for tin, the metal was much softer than bronze. However, very small amounts of steel (an alloy of iron and around 1% carbon) were always a byproduct of the bloomery process. The ability to modify the hardness of steel by heat treatment had been known since 1100 BC, and the rare material was valued for the manufacture of tools and weapons. Because the ancients could not produce temperatures high enough to melt iron fully, the production of steel in decent quantities did not occur until the introduction of blister steel during the Middle Ages. This method introduced carbon by heating wrought iron in charcoal for long periods of time, but the absorption of carbon in this manner is extremely slow; thus the penetration was not very deep, so the alloy was not homogeneous. In 1740, Benjamin Huntsman began melting blister steel in a crucible to even out the carbon content, creating the first process for the mass production of tool steel. Huntsman's process was used for manufacturing tool steel until the early 1900s.
The introduction of the blast furnace to Europe in the Middle Ages meant that people could produce pig iron in much higher volumes than wrought iron. Because pig iron could be melted, people began to develop processes to reduce carbon in liquid pig iron to create steel. Puddling had been used in China since the first century, and was introduced in Europe during the 1700s, where molten pig iron was stirred while exposed to the air, to remove the carbon by oxidation. In 1858, Henry Bessemer developed a process of steel-making by blowing hot air through liquid pig iron to reduce the carbon content. The Bessemer process led to the first large scale manufacture of steel.
Steel is an alloy of iron and carbon, but the term alloy steel usually only refers to steels that contain other elements—like vanadium, molybdenum, or cobalt—in amounts sufficient to alter the properties of the base steel. Since ancient times, when steel was used primarily for tools and weapons, the methods of producing and working the metal were often closely guarded secrets. Even long after the Age of Enlightenment, the steel industry was very competitive and manufacturers went to great lengths to keep their processes confidential, resisting any attempts to scientifically analyze the material for fear it would reveal their methods. For example, the people of Sheffield, a center of steel production in England, were known to routinely bar visitors and tourists from entering town to deter industrial espionage. Thus, almost no metallurgical information existed about steel until 1860. Because of this lack of understanding, steel was not generally considered an alloy until the decades between 1930 and 1970 (primarily due to the work of scientists like William Chandler Roberts-Austen, Adolf Martens, and Edgar Bain), so "alloy steel" became the popular term for ternary and quaternary steel-alloys.
After Benjamin Huntsman developed his crucible steel in 1740, he began experimenting with the addition of elements like manganese (in the form of a high-manganese pig-iron called spiegeleisen), which helped remove impurities such as phosphorus and oxygen, a process adopted by Bessemer and still used in modern steels (albeit in concentrations low enough to still be considered carbon steel). Afterward, many people began experimenting with various alloys of steel without much success. However, in 1882, Robert Hadfield, a pioneer in steel metallurgy, took an interest and produced a steel alloy containing around 12% manganese. Called mangalloy, it exhibited extreme hardness and toughness, becoming the first commercially viable alloy steel. Afterward, he created silicon steel, launching the search for other possible alloys of steel.
Robert Forester Mushet found that by adding tungsten to steel he could produce a very hard edge that would resist losing its hardness at high temperatures. "R. Mushet's special steel" (RMS) became the first high-speed steel. Mushet's steel was quickly replaced by tungsten carbide steel, developed by Taylor and White in 1900, in which they doubled the tungsten content and added small amounts of chromium and vanadium, producing a superior steel for use in lathes and machining tools. In 1903, the Wright brothers used a chromium-nickel steel to make the crankshaft for their airplane engine, while in 1908 Henry Ford began using vanadium steels for parts like crankshafts and valves in his Model T Ford, due to their higher strength and resistance to high temperatures. In 1912, the Krupp Ironworks in Germany developed a rust-resistant steel by adding 21% chromium and 7% nickel, producing the first stainless steel.
Others
Due to their high reactivity, most metals were not discovered until the 19th century. A method for extracting aluminium from bauxite was proposed by Humphry Davy in 1807, using an electric arc. Although his attempts were unsuccessful, by 1855 the first sales of pure aluminium reached the market. However, as extractive metallurgy was still in its infancy, most aluminium extraction-processes produced unintended alloys contaminated with other elements found in the ore; the most abundant of which was copper. These aluminium-copper alloys (at the time termed "aluminum bronze") preceded pure aluminium, offering greater strength and hardness over the soft, pure metal, and to a slight degree were found to be heat treatable. However, due to their softness and limited hardenability these alloys found little practical use, and were more of a novelty, until the Wright brothers used an aluminium alloy to construct the first airplane engine in 1903. During the time between 1865 and 1910, processes for extracting many other metals were discovered, such as chromium, vanadium, tungsten, iridium, cobalt, and molybdenum, and various alloys were developed.
Prior to 1910, research mainly consisted of private individuals tinkering in their own laboratories. However, as the aircraft and automotive industries began growing, research into alloys became an industrial effort in the years following 1910: new magnesium alloys were developed for pistons and wheels in cars, pot metal was developed for levers and knobs, and aluminium alloys developed for airframes and aircraft skins were put into use. The Doehler Die Casting Co. of Toledo, Ohio, was known for the production of Brastil, a high-tensile, corrosion-resistant bronze alloy.
See also
Alloy broadening
CALPHAD
Ideal mixture
List of alloys
References
Bibliography
External links
Metallurgy
Chemistry | Alloy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 5,855 | [
"Metallurgy",
"Materials science",
"Chemical mixtures",
"Alloys",
"nan"
] |
1,196 | https://en.wikipedia.org/wiki/Angle | In Euclidean geometry, an angle or plane angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle.
Two intersecting curves may also define an angle, which is the angle of the rays lying tangent to the respective curves at their point of intersection. Angles are also formed by the intersection of two planes; these are called dihedral angles.
In any case, the resulting angle lies in a plane (spanned by the two rays or perpendicular to the line of plane-plane intersection).
The magnitude of an angle is called an angular measure or simply "angle". Two different angles may have the same measure, as in an isosceles triangle. "Angle" also denotes the angular sector, the infinite region of the plane bounded by the sides of an angle.
Angle of rotation is a measure conventionally defined as the ratio of a circular arc length to its radius, and may be a negative number. In the case of an ordinary angle, the arc is centered at the vertex and delimited by the sides. In the case of an angle of rotation, the arc is centered at the center of the rotation and delimited by any other point and its image after the rotation.
History and etymology
The word angle comes from the Latin word angulus, meaning "corner". Cognate words include the Greek ἀγκύλος (ankylos) meaning "crooked, curved" and the English word "ankle". Both are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow".
Euclid defines a plane angle as the inclination to each other, in a plane, of two lines that meet each other and do not lie straight with respect to each other. According to the Neoplatonic metaphysician Proclus, an angle must be either a quality, a quantity, or a relationship. The first concept, angle as quality, was used by Eudemus of Rhodes, who regarded an angle as a deviation from a straight line; the second, angle as quantity, by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third: angle as a relationship.
Identifying angles
In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, . . . ) as variables denoting the size of some angle (the symbol is typically not used for this purpose to avoid confusion with the constant denoted by that symbol). Lower case Roman letters (a, b, c, . . . ) are also used. In contexts where this is not confusing, an angle may be denoted by the upper case Roman letter denoting its vertex. See the figures in this article for examples.
The three defining points may also identify angles in geometric figures. For example, the angle with vertex A formed by the rays AB and AC (that is, the half-lines from point A through points B and C) is denoted ∠BAC or BÂC. Where there is no risk of confusion, the angle may sometimes be referred to by a single vertex alone (in this case, "angle A").
Potentially, an angle denoted as, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C about A, the anticlockwise angle from B to C about A, the clockwise angle from C to B about A, or the anticlockwise angle from C to B about A, where the direction in which the angle is measured determines its sign (see Signed angles below). However, in many geometrical situations, it is evident from the context that the positive angle less than or equal to 180 degrees is meant, and in these cases, no ambiguity arises. Otherwise, to avoid ambiguity, specific conventions may be adopted so that, for instance, ∠BAC always refers to the anticlockwise (positive) angle from B to C about A and ∠CAB to the anticlockwise (positive) angle from C to B about A.
Types
Individual angles
There is some common terminology for angles, whose measure is always non-negative:
An angle equal to 0° or not turned is called a zero angle.
An angle smaller than a right angle (less than 90°) is called an acute angle ("acute" meaning "sharp").
An angle equal to 1/4 turn (90° or π/2 radians) is called a right angle. Two lines that form a right angle are said to be normal, orthogonal, or perpendicular.
An angle larger than a right angle and smaller than a straight angle (between 90° and 180°) is called an obtuse angle ("obtuse" meaning "blunt").
An angle equal to 1/2 turn (180° or π radians) is called a straight angle.
An angle larger than a straight angle but less than 1 turn (between 180° and 360°) is called a reflex angle.
An angle equal to 1 turn (360° or 2π radians) is called a full angle, complete angle, round angle or perigon.
An angle that is not a multiple of a right angle is called an oblique angle.
The names, intervals, and measuring units are shown in the table below:
Vertical and adjacent angle pairs
When two straight lines intersect at a point, four angles are formed. Pairwise, these angles are named according to their location relative to each other.
A transversal is a line that intersects a pair of (often parallel) lines and is associated with exterior angles, interior angles, alternate exterior angles, alternate interior angles, corresponding angles, and consecutive interior angles.
Combining angle pairs
The angle addition postulate states that if B is in the interior of angle AOC, then m∠AOC = m∠AOB + m∠BOC.
I.e., the measure of the angle AOC is the sum of the measure of angle AOB and the measure of angle BOC.
Three special angle pairs involve the summation of angles: complementary angles (two angles whose measures sum to one right angle), supplementary angles (summing to one straight angle), and explementary or conjugate angles (summing to one complete turn).
Polygon-related angles
An angle that is part of a simple polygon is called an interior angle if it lies on the inside of that simple polygon. A simple concave polygon has at least one interior angle that is a reflex angle. In Euclidean geometry, the measures of the interior angles of a triangle add up to π radians, 180°, or 1/2 turn; the measures of the interior angles of a simple convex quadrilateral add up to 2π radians, 360°, or 1 turn. In general, the measures of the interior angles of a simple convex polygon with n sides add up to (n − 2)π radians, or (n − 2)180 degrees, (n − 2)2 right angles, or (n − 2)/2 turn.
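As a quick worked instance of the general formula (taking n = 6, a regular hexagon, purely as an illustration): the interior angles sum to (6 − 2) × 180° = 720°, so each interior angle of a regular hexagon measures 720°/6 = 120°.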
The supplement of an interior angle is called an exterior angle; that is, an interior angle and an exterior angle form a linear pair of angles. There are two exterior angles at each vertex of the polygon, each determined by extending one of the two sides of the polygon that meet at the vertex; these two angles are vertical and hence are equal. An exterior angle measures the amount of rotation one must make at a vertex to trace the polygon. If the corresponding interior angle is a reflex angle, the exterior angle should be considered negative. Even in a non-simple polygon, it may be possible to define the exterior angle. Still, one will have to pick an orientation of the plane (or surface) to decide the sign of the exterior angle measure. In Euclidean geometry, the sum of the exterior angles of a simple convex polygon, if only one of the two exterior angles is assumed at each vertex, will be one full turn (360°). The exterior angle here could be called a supplementary exterior angle. Exterior angles are commonly used in Logo Turtle programs when drawing regular polygons.
In a triangle, the bisectors of two exterior angles and the bisector of the other interior angle are concurrent (meet at a single point).
In a triangle, three intersection points, each of an external angle bisector with the opposite extended side, are collinear.
In a triangle, three intersection points, two between an interior angle bisector and the opposite side, and the third between the other exterior angle bisector and the opposite side extended are collinear.
Some authors use the name exterior angle of a simple polygon to mean the explement exterior angle (not supplement!) of the interior angle. This conflicts with the above usage.
Plane-related angles
The angle between two planes (such as two adjacent faces of a polyhedron) is called a dihedral angle. It may be defined as the acute angle between two lines normal to the planes.
The angle between a plane and an intersecting straight line is complementary to the angle between the intersecting line and the normal to the plane.
Measuring angles
The size of a geometric angle is usually characterized by the magnitude of the smallest rotation that maps one of the rays into the other. Angles of the same size are said to be equal, congruent, or equal in measure.
In some contexts, such as identifying a point on a circle or describing the orientation of an object in two dimensions relative to a reference orientation, angles that differ by an exact multiple of a full turn are effectively equivalent. In other contexts, such as identifying a point on a spiral curve or describing an object's cumulative rotation in two dimensions relative to a reference orientation, angles that differ by a non-zero multiple of a full turn are not equivalent.
To measure an angle θ, a circular arc centered at the vertex of the angle is drawn, e.g., with a pair of compasses. The ratio of the length s of the arc to the radius r of the circle is the number of radians in the angle: θ = s/r.
Conventionally, in mathematics and the SI, the radian is treated as being equal to the dimensionless unit 1, thus being normally omitted.
The angle expressed in another angular unit may then be obtained by multiplying the angle by a suitable conversion constant of the form k/2π, where k is the measure of a complete turn expressed in the chosen unit (for example, k = 360° for degrees or 400 grad for gradians): θ = (k/2π)(s/r).
The value of θ thus defined is independent of the size of the circle: if the length of the radius is changed, then the arc length changes in the same proportion, so the ratio s/r is unaltered.
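A minimal sketch of this unit conversion, assuming Python as the implementation language; the function and table names are illustrative rather than taken from any standard library:

```python
import math

# Measure of one complete turn (the constant k above) in several common units.
FULL_TURN = {"radian": 2 * math.pi, "degree": 360.0, "gradian": 400.0, "turn": 1.0}

def convert_angle(value, from_unit, to_unit):
    """Convert an angle by scaling with the ratio of the two units' full-turn measures."""
    return value * FULL_TURN[to_unit] / FULL_TURN[from_unit]

# A right angle expressed in each unit.
print(convert_angle(90, "degree", "radian"))   # ~1.5708, i.e. pi/2
print(convert_angle(90, "degree", "gradian"))  # 100.0
print(convert_angle(90, "degree", "turn"))     # 0.25
```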
Units
Throughout history, angles have been measured in various units. These are known as angular units, with the most contemporary units being the degree ( ° ), the radian (rad), and the gradian (grad), though many others have been used throughout history. Most units of angular measurement are defined such that one turn (i.e., the angle subtended by the circumference of a circle at its centre) is equal to n units, for some whole number n. Two exceptions are the radian (and its decimal submultiples) and the diameter part.
In the International System of Quantities, an angle is defined as a dimensionless quantity, and in particular, the radian unit is dimensionless. This convention impacts how angles are treated in dimensional analysis.
The following table lists some units used to represent angles.
Dimensional analysis
Signed angles
It is frequently helpful to impose a convention that allows positive and negative angular values to represent orientations and/or rotations in opposite directions or "sense" relative to some reference.
In a two-dimensional Cartesian coordinate system, an angle is typically defined by its two sides, with its vertex at the origin. The initial side is on the positive x-axis, while the other side or terminal side is defined by the measure from the initial side in radians, degrees, or turns, with positive angles representing rotations toward the positive y-axis and negative angles representing rotations toward the negative y-axis. When Cartesian coordinates are represented by standard position, defined by the x-axis rightward and the y-axis upward, positive rotations are anticlockwise, and negative rotations are clockwise.
In many contexts, an angle of −θ is effectively equivalent to an angle of "one full turn minus θ". For example, an orientation represented as −45° is effectively equal to an orientation defined as 360° − 45° or 315°. Although the final position is the same, a physical rotation (movement) of −45° is not the same as a rotation of 315° (for example, the rotation of a person holding a broom resting on a dusty floor would leave visually different traces of swept regions on the floor).
In three-dimensional geometry, "clockwise" and "anticlockwise" have no absolute meaning, so the direction of positive and negative angles must be defined in terms of an orientation, which is typically determined by a normal vector passing through the angle's vertex and perpendicular to the plane in which the rays of the angle lie.
In navigation, bearings or azimuth are measured relative to north. By convention, viewed from above, bearing angles are positive clockwise, so a bearing of 45° corresponds to a north-east orientation. Negative bearings are not used in navigation, so a north-west orientation corresponds to a bearing of 315°.
Equivalent angles
Angles that have the same measure (i.e., the same magnitude) are said to be equal or congruent. An angle is defined by its measure and is not dependent upon the lengths of the sides of the angle (e.g., all right angles are equal in measure).
Two angles that share terminal sides, but differ in size by an integer multiple of a turn, are called coterminal angles.
The reference angle (sometimes called related angle) for any angle θ in standard position is the positive acute angle between the terminal side of θ and the x-axis (positive or negative). Procedurally, the magnitude of the reference angle for a given angle may be determined by taking the angle's magnitude modulo 1/2 turn, 180°, or π radians, then stopping if the angle is acute, otherwise taking the supplementary angle, 180° minus the reduced magnitude. For example, an angle of 30 degrees is already a reference angle, and an angle of 150 degrees also has a reference angle of 30 degrees (180° − 150°). Angles of 210° and 510° correspond to a reference angle of 30 degrees as well (210° mod 180° = 30°, 510° mod 180° = 150°, whose supplementary angle is 30°).
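A short sketch of the reference-angle procedure just described, assuming angles given in degrees; the helper name is hypothetical:

```python
def reference_angle(theta_degrees):
    """Reduce an angle modulo 180 degrees, then take the supplement if the result is not acute."""
    reduced = theta_degrees % 180
    return reduced if reduced <= 90 else 180 - reduced

# The examples from the text above all reduce to a reference angle of 30 degrees.
for theta in (30, 150, 210, 510):
    print(theta, "->", reference_angle(theta))
```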
Related quantities
For an angular unit, it is definitional that the angle addition postulate holds. Some quantities related to angles where the angle addition postulate does not hold include:
The slope or gradient is equal to the tangent of the angle; a gradient is often expressed as a percentage. For very small values (less than 5%), the slope of a line is approximately the measure in radians of its angle with the horizontal direction.
The spread between two lines is defined in rational geometry as the square of the sine of the angle between the lines. As the sine of an angle and the sine of its supplementary angle are the same, any angle of rotation that maps one of the lines into the other leads to the same value for the spread between the lines.
Although done rarely, one can report the direct results of trigonometric functions, such as the sine of the angle.
Angles between curves
The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names (now rarely, if ever, used) have been given to particular cases: amphicyrtic (Gr. ἀμφί, on both sides, κυρτός, convex) or cissoidal (Gr. κισσός, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίς, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave.
Bisecting and trisecting angles
The ancient Greek mathematicians knew how to bisect an angle (divide it into two angles of equal measure) using only a compass and straightedge but could only trisect certain angles. In 1837, Pierre Wantzel showed that this construction could not be performed for most angles.
Dot product and generalisations
In Euclidean space, the angle θ between two Euclidean vectors u and v is related to their dot product and their lengths by the formula u · v = ‖u‖ ‖v‖ cos θ.
This formula supplies an easy method to find the angle between two planes (or curved surfaces) from their normal vectors and between skew lines from their vector equations.
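A minimal sketch of this computation for vectors given as plain tuples, assuming Python; the function name is illustrative:

```python
import math

def angle_between(u, v):
    """Angle in radians between two Euclidean vectors, via the dot-product formula."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    # Clamp to [-1, 1] to guard against floating-point rounding slightly outside the range.
    cos_theta = max(-1.0, min(1.0, dot / (norm_u * norm_v)))
    return math.acos(cos_theta)

print(math.degrees(angle_between((1, 0), (1, 1))))  # 45.0
```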
Inner product
To define angles in an abstract real inner product space, we replace the Euclidean dot product ( · ) by the inner product ⟨ · , · ⟩, i.e. cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖).
In a complex inner product space, the expression for the cosine above may give non-real values, so it is replaced with cos θ = Re⟨u, v⟩ / (‖u‖ ‖v‖),
or, more commonly, using the absolute value, with cos θ = |⟨u, v⟩| / (‖u‖ ‖v‖).
The latter definition ignores the direction of the vectors. It thus describes the angle between the one-dimensional subspaces spanned by the vectors u and v respectively.
Angles between subspaces
The definition of the angle between one-dimensional subspaces given above can be extended, in a Hilbert space, to subspaces of any finite dimension. Given two finite-dimensional subspaces, this leads to a definition of a family of angles called canonical or principal angles between the subspaces.
Angles in Riemannian geometry
In Riemannian geometry, the metric tensor is used to define the angle between two tangents. Where U and V are tangent vectors and g_ij are the components of the metric tensor G, cos θ = g_ij U^i V^j / √(|g_ij U^i U^j| |g_ij V^i V^j|).
Hyperbolic angle
A hyperbolic angle is an argument of a hyperbolic function just as the circular angle is the argument of a circular function. The comparison can be visualized as the size of the openings of a hyperbolic sector and a circular sector since the areas of these sectors correspond to the angle magnitudes in each case. Unlike the circular angle, the hyperbolic angle is unbounded. When the circular and hyperbolic functions are viewed as infinite series in their angle argument, the circular ones are just alternating series forms of the hyperbolic functions. This comparison of the two series corresponding to functions of angles was described by Leonhard Euler in Introduction to the Analysis of the Infinite (1748).
Angles in geography and astronomy
In geography, the location of any point on the Earth can be identified using a geographic coordinate system. This system specifies the latitude and longitude of any location in terms of angles subtended at the center of the Earth, using the equator and (usually) the Greenwich meridian as references.
In astronomy, a given point on the celestial sphere (that is, the apparent position of an astronomical object) can be identified using any of several astronomical coordinate systems, where the references vary according to the particular system. Astronomers measure the angular separation of two stars by imagining two lines through the center of the Earth, each intersecting one of the stars. The angle between those lines and the angular separation between the two stars can be measured.
In both geography and astronomy, a sighting direction can be specified in terms of a vertical angle such as altitude /elevation with respect to the horizon as well as the azimuth with respect to north.
Astronomers also measure objects' apparent size as an angular diameter. For example, the full moon has an angular diameter of approximately 0.5° when viewed from Earth. One could say, "The Moon's diameter subtends an angle of half a degree." The small-angle formula can convert such an angular measurement into a distance/size ratio.
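As a rough worked example of the small-angle formula (figures rounded for illustration): with an angular diameter θ ≈ 0.5° ≈ 0.0087 rad and an Earth–Moon distance D ≈ 384,000 km, the linear diameter is d ≈ θD ≈ 0.0087 × 384,000 km ≈ 3,300 km, close to the Moon's actual diameter of roughly 3,500 km.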
Other astronomical approximations include:
0.5° is the approximate diameter of the Sun and of the Moon as viewed from Earth.
1° is the approximate width of the little finger at arm's length.
10° is the approximate width of a closed fist at arm's length.
20° is the approximate width of a handspan at arm's length.
These measurements depend on the individual subject, and the above should be treated as rough rule of thumb approximations only.
In astronomy, right ascension and declination are usually measured in angular units, expressed in terms of time, based on a 24-hour day.
See also
Angle measuring instrument
Angles between flats
Angular statistics (mean, standard deviation)
Angle bisector
Angular acceleration
Angular diameter
Angular velocity
Argument (complex analysis)
Astrological aspect
Central angle
Clock angle problem
Decimal degrees
Dihedral angle
Exterior angle theorem
Golden angle
Great circle distance
Horn angle
Inscribed angle
Irrational angle
Phase (waves)
Protractor
Solid angle
Spherical angle
Subtended angle
Tangential angle
Transcendent angle
Trisection
Zenith angle
Notes
References
Bibliography
.
External links | Angle | [
"Physics"
] | 4,260 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Angle"
] |
1,198 | https://en.wikipedia.org/wiki/Acoustics | Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.
Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.
History
Etymology
The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω(akouo), "I hear".
The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively.
Early research in acoustics
In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious, and the smaller the integers the more harmonious the sounds. For example, a string of a certain length would sound particularly harmonious with a string of twice the length (other factors being equal). In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. In one system of musical tuning, the tones in between are then given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order.
Aristotle (384–322 BC) understood that sound consisted of compressions and rarefactions of air which "falls upon and strikes the air which is next to it...", a very good expression of the nature of wave motion. On Things Heard, generally ascribed to Strato of Lampsacus, states that the pitch is related to the frequency of vibrations of the air and to the speed of sound.
In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theaters including discussion of interference, echoes, and reverberation—the beginnings of architectural acoustics. In Book V of his (The Ten Books of Architecture) Vitruvius describes sound as a wave comparable to a water wave extended to three dimensions, which, when interrupted by obstructions, would flow back and break up following waves. He described the ascending seats in ancient theaters as designed to prevent this deterioration of sound and also recommended bronze vessels (echea) of appropriate sizes be placed in theaters to resonate with the fourth, fifth and so on, up to the double octave, in order to resonate with the more desirable, harmonious notes.
During the Islamic golden age, Abū Rayhān al-Bīrūnī (973–1048) is believed to have postulated that the speed of sound was much slower than the speed of light.
The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Mainly Galileo Galilei (1564–1642) but also Marin Mersenne (1588–1648), independently, discovered the complete laws of vibrating strings (completing what Pythagoras and Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Meanwhile, Newton (1642–1727) derived the relationship for wave velocity in solids, a cornerstone of physical acoustics (Principia, 1687).
Age of Enlightenment and onward
Substantial progress in acoustics, resting on firmer mathematical and physical concepts, was made during the eighteenth century by Euler (1707–1783), Lagrange (1736–1813), and d'Alembert (1717–1783). During this era, continuum physics, or field theory, began to receive a definite mathematical structure. The wave equation emerged in a number of contexts, including the propagation of sound in air.
In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work The Theory of Sound (1877). Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics.
The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the first World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use.
Definition
Acoustics is defined by ANSI/ASA S1.1-2013 as "(a) Science of sound, including its production, transmission, and effects, including biological and psychological effects. (b) Those qualities of a room that, together, determine its character with respect to auditory effects."
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.
The steps shown in the above diagram can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into sonic energy, producing a sound wave. There is one fundamental equation that describes sound wave propagation, the acoustic wave equation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves.
Acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment. This interaction can be described as either a diffraction, interference or a reflection or a mix of the three. If several media are present, a refraction can also occur. Transduction processes are also of special importance to acoustics.
Fundamental concepts
Wave propagation: pressure levels
In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is related to the sound pressure level (SPL) which is measured on a logarithmic scale in decibels.
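A minimal sketch of the decibel calculation, assuming the conventional reference pressure of 20 µPa in air and Python as the implementation language; the function name is illustrative:

```python
import math

P_REF = 20e-6  # conventional reference sound pressure in air, in pascals

def sound_pressure_level(p_rms):
    """Sound pressure level in decibels for an RMS sound pressure p_rms, in pascals."""
    return 20 * math.log10(p_rms / P_REF)

print(sound_pressure_level(20e-6))  # 0 dB, roughly the threshold of hearing
print(sound_pressure_level(1.0))    # ~94 dB, a very loud sound
```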
Wave propagation: frequency
Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time, and then presented in more meaningful forms such as octave bands or time frequency plots. Both of these popular methods are used to analyze sound and better understand the acoustic phenomenon.
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allow better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes.
Analytic instruments such as the spectrum analyzer facilitate visualization and measurement of acoustic signals and their properties. The spectrogram produced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character.
Transduction in acoustics
A transducer is a device for converting one form of energy into another. In an electroacoustic context, this means converting sound energy into electrical energy (or vice versa). Electroacoustic transducers include loudspeakers, microphones, particle velocity sensors, hydrophones and sonar projectors. These devices convert a sound wave to or from an electric signal. The most widely used transduction principles are electromagnetism, electrostatics and piezoelectricity.
The transducers in most common loudspeakers (e.g. woofers and tweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagnetic voice coil, sending off pressure waves. Electret microphones and condenser microphones employ electrostatics—as the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which mechanical vibrations and electrical fields are interlinked through a property of the material itself.
Acoustician
An acoustician is an expert in the science of sound.
Education
There are many types of acoustician, but they usually have a Bachelor's degree or higher qualification. Some possess a degree in acoustics, while others enter the discipline via studies in fields such as physics or engineering. Much work in acoustics requires a good grounding in mathematics and science. Many acoustic scientists work in research and development. Some conduct basic research to advance our knowledge of the perception (e.g. hearing, psychoacoustics or neurophysiology) of speech, music and noise. Other acoustic scientists advance understanding of how sound is affected as it moves through environments, e.g. underwater acoustics, architectural acoustics or structural acoustics. Other areas of work are listed under subdisciplines below. Acoustic scientists work in government, university and private industry laboratories. Many go on to work in acoustical engineering. Some positions, such as faculty (academic staff), require a Doctor of Philosophy.
Subdisciplines
Archaeoacoustics
Archaeoacoustics, also known as the archaeology of sound, is one of the few ways to experience the past with senses other than our eyes. Archaeoacoustics is studied by testing the acoustic properties of prehistoric sites, including caves. Iegor Reznikoff, a sound archaeologist, studies the acoustic properties of caves through natural sounds like humming and whistling. Archaeological theories of acoustics are focused around ritualistic purposes as well as a way of echolocation in the caves. In archaeology, acoustic sounds and rituals directly correlate as specific sounds were meant to bring ritual participants closer to a spiritual awakening. Parallels can also be drawn between cave wall paintings and the acoustic properties of the cave; they are both dynamic. Because archaeoacoustics is a fairly new archaeological subject, acoustic sound is still being tested in these prehistoric sites today.
Aeroacoustics
Aeroacoustics is the study of noise generated by air movement, for instance via turbulence, and the movement of sound through the fluid air. This knowledge was applied in the 1920s and '30s to detect aircraft before radar was invented and is applied in acoustical engineering to study how to quieten aircraft. Aeroacoustics is important for understanding how wind musical instruments work.
Acoustic signal processing
Acoustic signal processing is the electronic manipulation of acoustic signals. Applications include: active noise control; design for hearing aids or cochlear implants; echo cancellation; music information retrieval, and perceptual coding (e.g. MP3 or Opus).
Architectural acoustics
Architectural acoustics (also known as building acoustics) involves the scientific understanding of how to achieve good sound within a building. It typically involves the study of speech intelligibility, speech privacy, music quality, and vibration reduction in the built environment. Commonly studied environments are hospitals, classrooms, dwellings, performance venues, recording and broadcasting studios. Focus considerations include room acoustics, airborne and impact transmission in building structures, airborne and structure-borne noise control, noise control of building systems and electroacoustic systems.
Bioacoustics
Bioacoustics is the scientific study of the hearing and calls of animals, as well as how animals are affected by the acoustics and sounds of their habitat.
Electroacoustics
This subdiscipline is concerned with the recording, manipulation and reproduction of audio using electronics. This might include products such as mobile phones, large scale public address systems or virtual reality systems in research laboratories.
Environmental noise and soundscapes
Environmental acoustics is the study of noise and vibrations, and their impact on structures, objects, humans, and animals.
The main aim of these studies is to reduce levels of environmental noise and vibration. Typical work and research within environmental acoustics concerns the development of models used in simulations, measurement techniques, noise mitigation strategies, and the development of standards and regulations. Research work now also has a focus on the positive use of sound in urban environments: soundscapes and tranquility.
Examples of noise and vibration sources include railways, road traffic, aircraft, industrial equipment and recreational activities.
Musical acoustics
Musical acoustics is the study of the physics of acoustic instruments; the audio signal processing used in electronic music; the computer analysis of music and composition, and the perception and cognitive neuroscience of music.
Psychoacoustics
Many studies have been conducted to identify the relationship between acoustics and cognition, or more commonly known as psychoacoustics, in which what one hears is a combination of perception and biological aspects. The information intercepted by the passage of sound waves through the ear is understood and interpreted through the brain, emphasizing the connection between the mind and acoustics. Psychological changes have been seen as brain waves slow down or speed up as a result of varying auditory stimulus which can in turn affect the way one thinks, feels, or even behaves. This correlation can be viewed in normal, everyday situations in which listening to an upbeat or uptempo song can cause one's foot to start tapping or a slower song can leave one feeling calm and serene. In a deeper biological look at the phenomenon of psychoacoustics, it was discovered that the central nervous system is activated by basic acoustical characteristics of music. By observing how the central nervous system, which includes the brain and spine, is influenced by acoustics, the pathway in which acoustic affects the mind, and essentially the body, is evident.
Speech
Acousticians study the production, processing and perception of speech. Speech recognition and Speech synthesis are two important areas of speech processing using computers. The subject also overlaps with the disciplines of physics, physiology, psychology, and linguistics.
Structural Vibration and Dynamics
Structural acoustics is the study of motions and interactions of mechanical systems with their environments and the methods of their measurement, analysis, and control. There are several sub-disciplines found within this regime:
Modal Analysis
Material characterization
Structural health monitoring
Acoustic Metamaterials
Friction Acoustics
Applications might include: ground vibrations from railways; vibration isolation to reduce vibration in operating theatres; studying how vibration can damage health (vibration white finger); vibration control to protect a building from earthquakes, or measuring how structure-borne sound moves through buildings.
Ultrasonics
Ultrasonics deals with sounds at frequencies too high to be heard by humans. Specialisms include medical ultrasonics (including medical ultrasonography), sonochemistry, ultrasonic testing, material characterisation and underwater acoustics (sonar).
Underwater acoustics
Underwater acoustics is the scientific study of natural and man-made sounds underwater. Applications include sonar to locate submarines, underwater communication by whales, climate change monitoring by measuring sea temperatures acoustically, sonic weapons, and marine bioacoustics.
Research
Professional societies
The Acoustical Society of America (ASA)
Australian Acoustical Society (AAS)
The European Acoustics Association (EAA)
Institute of Electrical and Electronics Engineers (IEEE)
Institute of Acoustics (IoA UK)
The Audio Engineering Society (AES)
American Society of Mechanical Engineers, Noise Control and Acoustics Division (ASME-NCAD)
International Commission for Acoustics (ICA)
American Institute of Aeronautics and Astronautics, Aeroacoustics (AIAA)
International Computer Music Association (ICMA)
Academic journals
Acoustics | An Open Access Journal from MDPI
Acoustics Today
Acta Acustica united with Acustica
Advances in Acoustics and Vibration
Applied Acoustics
Building Acoustics
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Journal of the Acoustical Society of America (JASA)
Journal of the Acoustical Society of America, Express Letters (JASA-EL)
Journal of the Audio Engineering Society
Journal of Sound and Vibration (JSV)
Journal of Vibration and Acoustics American Society of Mechanical Engineers
MDPI Acoustics
Noise Control Engineering Journal
SAE International Journal of Vehicle Dynamics, Stability and NVH
Ultrasonics (journal)
Ultrasonics Sonochemistry
Wave Motion
Conferences
InterNoise
NoiseCon
Forum Acousticum
SAE Noise and Vibration Conference and Exhibition
See also
Outline of acoustics
Acoustic attenuation
Acoustic emission
Acoustic engineering
Acoustic impedance
Acoustic levitation
Acoustic location
Acoustic phonetics
Acoustic streaming
Acoustic tags
Acoustic thermometry
Acoustic wave
Audiology
Auditory illusion
Diffraction
Doppler effect
Fisheries acoustics
Friction acoustics
Helioseismology
Lamb wave
Linear elasticity
The Little Red Book of Acoustics (in the UK)
Longitudinal wave
Musicology
Music therapy
Noise pollution
Phonon
Picosecond ultrasonics
Rayleigh wave
Shock wave
Seismology
Sonification
Sonochemistry
Soundproofing
Soundscape
Sonic boom
Sonoluminescence
Surface acoustic wave
Thermoacoustics
Transverse wave
Wave equation
References
Further reading
External links
International Commission for Acoustics
European Acoustics Association
Acoustical Society of America
Institute of Noise Control Engineers
National Council of Acoustical Consultants
Institute of Acoustic in UK
Australian Acoustical Society (AAS)
Sound | Acoustics | [
"Physics"
] | 4,201 | [
"Classical mechanics",
"Acoustics"
] |
1,200 | https://en.wikipedia.org/wiki/Atomic%20physics | Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and
the processes by which these arrangements change. This comprises ions and neutral atoms; unless otherwise stated, the term atom is assumed to include ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms.
Electronic configuration
Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved.
If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon.
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes.
Bohr model of the atom
The Bohr model, proposed by Niels Bohr in 1913, is an early quantum theory describing the structure of the hydrogen atom. It introduced the idea of quantized orbits for electrons, combining classical and quantum physics.
Key postulates of the Bohr model:
1. Electrons move in circular orbits: electrons revolve around the nucleus in fixed, circular paths called orbits or energy levels. These orbits are stable and do not radiate energy.
2. Quantization of angular momentum: the angular momentum of an electron is quantized and given by L = m_e v r = n\hbar, n = 1, 2, 3, \dots, where m_e is the mass of the electron, v is its velocity, r is the radius of the orbit, \hbar is the reduced Planck constant (\hbar = h/2\pi), and n is the principal quantum number labelling the orbit.
3. Energy levels: each orbit has a specific energy. The total energy of an electron in the nth orbit is E_n = -\frac{13.6}{n^2} \text{ eV}, where 13.6 eV is the magnitude of the ground-state energy of the hydrogen atom.
4. Emission or absorption of energy: electrons can transition between orbits by absorbing or emitting a photon with energy equal to the difference between the energy levels, \Delta E = E_f - E_i = h\nu, where h is Planck's constant, \nu is the frequency of the emitted or absorbed radiation, and E_f and E_i are the final and initial energy levels.
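A minimal numerical sketch of the energy-level and transition postulates above, written in plain Python with a rounded value of Planck's constant; the function names are illustrative only:

```python
PLANCK_EV_S = 4.1357e-15  # Planck's constant in eV*s (rounded)

def bohr_energy(n):
    """Total energy, in eV, of the electron in the n-th Bohr orbit of hydrogen."""
    return -13.6 / n ** 2

def transition_frequency(n_initial, n_final):
    """Frequency, in Hz, of the photon emitted or absorbed in a transition between two levels."""
    delta_e = abs(bohr_energy(n_final) - bohr_energy(n_initial))
    return delta_e / PLANCK_EV_S

# The n = 3 -> n = 2 transition of hydrogen (the red H-alpha line).
print(bohr_energy(2))                      # -3.4 eV
print(transition_frequency(3, 2) / 1e12)   # ~457 THz
```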
History and developments
One of the earliest steps towards atomic physics was the recognition that matter was composed
of atoms. It forms a part of the texts written in the 6th century BC to 2nd century BC, such as those of Democritus or the Vaiśeṣika Sūtra written by Kaṇāda. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward.
The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry
(quantum chemistry) and spectroscopy.
Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.
Beyond the well-known phenomena which can be described with regular quantum mechanics, chaotic processes can occur which require different descriptions.
Significant atomic physicists
See also
Particle physics
Isomeric shift
Atomism
Ionisation
Quantum mechanics
Electron correlation
Quantum chemistry
Bound state
Bibliography
Sommerfeld, A. (1923) Atomic structure and spectral lines. (translated from German "Atombau und Spektrallinien" 1921), Dutton Publisher.
Smirnov, B.E. (2003) Physics of Atoms and Ions, Springer. ISBN 0-387-95550-X.
Szász, L. (1992) The Electronic Structure of Atoms, John Willey & Sons. ISBN 0-471-54280-6.
Bethe, H.A. & Salpeter E.E. (1957) Quantum Mechanics of One- and Two Electron Atoms. Springer.
Born, M. (1937) Atomic Physics. Blackie & Son Limited.
Cox, P.A. (1996) Introduction to Quantum Theory and Atomic Spectra. Oxford University Press. ISBN 0-19-855916
References
External links
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
Joint Quantum Institute at University of Maryland and NIST
Atomic Physics on the Internet
JILA (Atomic Physics)
ORNL Physics Division
Atomic, molecular, and optical physics | Atomic physics | [
"Physics",
"Chemistry"
] | 1,662 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
1,202 | https://en.wikipedia.org/wiki/Applet | In computing, an applet is any small application that performs one specific task that runs within the scope of a dedicated widget engine or a larger program, often as a plug-in. The term is frequently used to refer to a Java applet, a program written in the Java programming language that is designed to be placed on a web page. Applets are typical examples of transient and auxiliary applications that do not monopolize the user's attention. Applets are not full-featured application programs, and are intended to be easily accessible.
History
The word applet was first used in 1990 in PC Magazine. However, the concept of an applet, or more broadly a small interpreted program downloaded and executed by the user, dates at least to RFC 5 (1969) by Jeff Rulifson, which described the Decode-Encode Language, which was designed to allow remote use of the oN-Line System over ARPANET, by downloading small programs to enhance the interaction. This has been specifically credited as a forerunner of Java's downloadable programs in RFC 2555.
Applet as an extension of other software
In some cases, an applet does not run independently. These applets must run either in a container provided by a host program, through a plugin, or a variety of other applications including mobile devices that support the applet programming model.
Web-based applets
Applets were used to provide interactive features to web applications that historically could not be provided by HTML alone. They could capture mouse input and also had controls like buttons or check boxes. In response to the user action, an applet could change the provided graphic content. This made applets well suited for demonstration, visualization, and teaching. There were online applet collections for studying various subjects, from physics to heart physiology. Applets were also used to create online game collections that allowed players to compete against live opponents in real-time.
An applet could also be a text area only, providing, for instance, a cross-platform command-line interface to some remote system. If needed, an applet could leave the dedicated area and run as a separate window. However, applets had very little control over web page content outside the applet dedicated area, so they were less useful for improving the site appearance in general (while applets like news tickers or WYSIWYG editors are also known). Applets could also play media in formats that are not natively supported by the browser.
HTML pages could embed parameters that were passed to the applet. Hence, the same applet could appear differently depending on the parameters that were passed.
Examples of Web-based applets include:
QuickTime movies
Flash movies
Windows Media Player applets, used to display embedded video files in Internet Explorer (and other browsers that supported the plugin)
3D modeling display applets, used to rotate and zoom a model
Browser games that were applet-based, though some developed into fully functional applications that required installation.
Applet vs. subroutine
A larger application distinguishes its applets through several features:
Applets execute only on the "client" platform environment of a system, as contrasted with a servlet. As such, an applet provides functionality or performance beyond the default capabilities of its container (the browser).
The container restricts applets' capabilities.
Applets are written in a language different from the scripting or HTML language that invokes them. The applet is written in a compiled language, whereas the scripting language of the container is an interpreted language, hence the greater performance or functionality of the applet. Unlike a subroutine, a complete web component can be implemented as an applet.
Java applets
A Java applet is a Java program that is launched from HTML and run in a web browser. It takes its code from a server and runs in the browser. It can provide web applications with interactive features that cannot be provided by HTML. Since Java's bytecode is platform-independent, Java applets can be executed by browsers running under many platforms, including Windows, Unix, macOS, and Linux. When a Java technology-enabled web browser processes a page that contains an applet, the applet's code is transferred to the client's system and executed by the browser's Java virtual machine. An HTML page references an applet either via the deprecated applet tag or via its replacement, the object tag.
Security
Recent developments in the coding of applications, including for mobile and embedded systems, have led to greater awareness of the security of applets.
Open platform applets
Applets in an open platform environment should provide secure interactions between different applications. A compositional approach can be used to provide security for open platform applets. Advanced compositional verification methods have been developed for secure applet interactions.
Java applets
Java applets are subject to three different security models: unsigned Java applet security, signed Java applet security, and self-signed Java applet security.
Web-based applets
In an applet-enabled web browser, many methods can be used to guard against malicious applets. A malicious applet can harm a computer system in many ways, including denial of service, invasion of privacy, and annoyance. A typical countermeasure is to have the web browser monitor applets' activities, so that malicious applets can be stopped manually or automatically.
See also
Application posture
Bookmarklet
Java applet
Widget engine
Abstract Window Toolkit
References
External links
Technology neologisms
Component-based software engineering
Java (programming language) libraries | Applet | [
"Technology"
] | 1,147 | [
"Component-based software engineering",
"Components"
] |
1,206 | https://en.wikipedia.org/wiki/Atomic%20orbital | In quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function describes an electron's charge distribution around the atom's nucleus, and can be used to calculate the probability of finding an electron in a specific region around the nucleus.
Each orbital in an atom is characterized by a set of values of three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, its orbital angular momentum, and its orbital angular momentum projected along a chosen axis (magnetic quantum number). The orbitals with a well-defined magnetic quantum number are generally complex-valued. Real-valued orbitals can be formed as linear combinations of the +mℓ and −mℓ orbitals, and are often labeled using the associated harmonic polynomials (e.g., xy, x2−y2) which describe their angular structure.
An orbital can be occupied by a maximum of two electrons, each with its own projection of spin ms. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with their n values, are used to describe electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j".
Atomic orbitals are basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model, the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f orbitals, respectively, though for higher values of the quantum number n, particularly when the atom bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons (e.g., Cr = [Ar]4s13d5 and Cr2+ = [Ar]3d4) can be rationalized only somewhat arbitrarily.
Electron properties
With the development of quantum mechanics and experimental findings (such as the two slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave–particle duality. In this sense, electrons have the following properties:
Wave-like properties:
Electrons do not orbit a nucleus in the manner of a planet orbiting a star, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency.
The electrons are never in a single point location, though the probability of interacting with the electron at a single point can be found from the electron's wave function. The electron's charge acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function.
Particle-like properties:
The number of electrons orbiting a nucleus can be only an integer.
Electrons jump between orbitals like particles. For example, if one photon strikes the electrons, only one electron changes state as a result.
Electrons retain particle-like properties such as: each wave state has the same electric charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition.
Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when one electron is present. When more electrons are added, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection ("electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle.
One should remember that these orbital 'states', as described here, are merely eigenstates of an electron in its orbit. An actual electron exists in a superposition of states, which is like a weighted average, but with complex number weights. So, for instance, an electron could be in a pure eigenstate (2, 1, 0), or a mixed state (2, 1, 0) + (2, 1, 1), or even a mixed state of those same two eigenstates with complex-number weights. For each eigenstate, a property has an eigenvalue. So, for the three states just mentioned, the value of n is 2, and the value of ℓ is 1. For the second and third states, the value for mℓ is a superposition of 0 and 1. As a superposition of states, it is ambiguous (either exactly 0 or exactly 1), not an intermediate or average value like the fraction 1/2. A superposition of eigenstates (2, 1, 1) and (3, 2, 1) would have an ambiguous n and ℓ, but mℓ would definitely be 1. Eigenstates make it easier to deal with the math. You can choose a different basis of eigenstates by superimposing eigenstates from any other basis (see Real orbitals below).
Formal quantum mechanical definition
Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrödinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.)
In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon, term symbol 1S0).
This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about simple one-determinant wave function at all. This is the case when electron correlation is large.
Fundamentally, an atomic orbital is a one-electron wave function, even though many electrons are not in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory.
Types of orbital
Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for orbitals are usually spherical coordinates in atoms and Cartesian in polyatomic molecules. The advantage of spherical coordinates here is that an orbital wave function is a product of three factors, each dependent on a single coordinate: ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ). The angular factors of atomic orbitals generate s, p, d, etc. functions as real combinations of spherical harmonics Yℓm (where ℓ and m are quantum numbers). There are typically three mathematical forms for the radial functions which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons:
The hydrogen-like orbitals are derived from the exact solutions of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on the distance r from the nucleus has n − ℓ − 1 radial nodes and decays exponentially with r.
The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does a hydrogen-like orbital.
The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as exp(−αr2).
Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals.
History
The term orbital was introduced by Robert S. Mulliken in 1932 as short for one-electron orbital wave function. Niels Bohr explained around 1913 that electrons might revolve around a compact nucleus with definite angular momentum. Bohr's model was an improvement on the 1911 explanations of Ernest Rutherford, that of the electron moving around a nucleus. Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electron behavior as early as 1904. These theories were each built upon new observations starting with simple understanding and becoming more correct and complex. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics.
Early models
With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolve in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure.
Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries.
Bohr atom
In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were permitted to have only discrete values of angular momentum, quantized in units ħ. This constraint automatically allowed only certain electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines.
After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step toward the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms.
With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum; so a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed.
The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation for hydrogen) and remains empty.
Modern conceptions and connections to the Heisenberg uncertainty principle
Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require infinite particle momentum.
In chemistry, Erwin Schrödinger, Linus Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom.
In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom.
Orbital names
Orbital notation and subshells
Orbitals have been given names, which are usually given in the form:
X type
where X is the energy level corresponding to the principal quantum number n; type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number ℓ.
For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level (n = 1) and has an angular quantum number of ℓ = 0, denoted as s. Orbitals with ℓ = 1, 2 and 3 are denoted as p, d and f respectively.
The set of orbitals for a given n and ℓ is called a subshell, denoted
X typey.
The superscript y shows the number of electrons in the subshell. For example, the notation 2p4 indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and ℓ = 1.
X-ray notation
There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively.
Hydrogen-like orbitals
The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron (He+, Li2+, etc.) is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions (see hydrogen atom).
For atoms with two or more electrons, the governing equations can be solved only with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used.
A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: , , and . The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table.
The stationary states (quantum states) of a hydrogen-like atom are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method.
The quantum number first appeared in the Bohr model where it determines the radius of each circular electron orbit. In modern quantum mechanics however, determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of are even more closely related, and are said to comprise a "subshell".
Quantum numbers
Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed.
Complex orbitals
In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows:
The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells.
The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n0, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n0 − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell.
The magnetic quantum number, mℓ, describes the projection of the orbital angular momentum along a chosen axis. It determines the magnitude of the current circulating around that axis and the orbital contribution to the magnetic moment of an electron via the Ampèrian loop model. Within a subshell with a given ℓ, mℓ obtains the integer values in the range −ℓ ≤ mℓ ≤ ℓ.
The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of available in that subshell. Empty cells represent subshells that do not exist.
Subshells are usually identified by their n- and ℓ-values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'.
Each electron also has angular momentum in the form of quantum mechanical spin given by spin s = 1/2. Its projection along a specified axis is given by the spin magnetic quantum number, ms, which can be +1/2 or −1/2. These values are also called "spin up" or "spin down" respectively.
The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. If there are two electrons in an orbital with given values for the three quantum numbers (n, ℓ, mℓ), these two electrons must differ in their spin projection ms.
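These counting rules are easy to check mechanically. The following is a minimal sketch (illustrative code, not part of the original article) that enumerates the allowed (ℓ, mℓ) combinations for a shell and counts its electron capacity under the constraints just described:

```python
# Minimal sketch (illustrative, not from the article): enumerate the orbitals
# allowed in shell n using the rules above: l runs over 0 .. n-1 and
# m_l runs over -l .. +l; each orbital then holds at most two electrons
# (m_s = +1/2 or -1/2, by the Pauli exclusion principle).
def shell_orbitals(n):
    """Return the allowed (l, m_l) pairs for principal quantum number n."""
    return [(l, m_l) for l in range(n) for m_l in range(-l, l + 1)]

def shell_capacity(n):
    """Maximum number of electrons in shell n: two spin states per orbital."""
    return 2 * len(shell_orbitals(n))

for n in range(1, 5):
    print(n, shell_capacity(n))   # prints 2, 8, 18, 32 -- i.e. 2*n**2
```

Running it reproduces the familiar shell capacities 2, 8, 18, 32, that is, 2n2 electrons per shell.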
The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing mℓ = +1 from mℓ = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment, in which an atom is exposed to a magnetic field, provides one such example.
Real orbitals
Instead of the complex orbitals described above, it is common, especially in the chemistry literature, to use real atomic orbitals. These real orbitals arise from simple linear combinations of complex orbitals. Using the Condon–Shortley phase convention, real orbitals are related to complex orbitals in the same way that the real spherical harmonics are related to complex spherical harmonics. Letting ψn,ℓ,m denote a complex orbital with quantum numbers n, ℓ, and m, the real orbitals may be defined by suitably normalized sums and differences of ψn,ℓ,m and ψn,ℓ,−m.
If ψn,ℓ,m is written as a radial part times a complex spherical harmonic, this definition is equivalent to multiplying the same radial part by the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic.
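As a sketch of what such a definition typically looks like (these are the standard real-spherical-harmonic combinations under the Condon–Shortley convention; the normalization shown here is an assumption, not necessarily the exact form used in the original article):

$$
\psi_{n\ell m}^{\mathrm{real}} =
\begin{cases}
\psi_{n\ell 0} & m = 0,\\[2pt]
\tfrac{1}{\sqrt{2}}\left(\psi_{n\ell,-m} + (-1)^{m}\,\psi_{n\ell m}\right) & m > 0,\\[2pt]
\tfrac{i}{\sqrt{2}}\left(\psi_{n\ell m} - (-1)^{m}\,\psi_{n\ell,-m}\right) & m < 0.
\end{cases}
$$

For example, pz is ψ210 itself, while px and py arise from the m = +1 and m = −1 combinations respectively.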
Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In real hydrogen-like orbitals, quantum numbers and have the same interpretation and significance as their complex counterparts, but is no longer a good quantum number (but its absolute value is).
Some real orbitals are given specific names beyond the simple designation. Orbitals with quantum number ℓ = 0, 1, 2, 3, ... are called s, p, d, f, ... orbitals. With this one can already assign names to complex orbitals such as 2p1: the first symbol is the n quantum number, the second character is the letter for that particular ℓ quantum number, and the subscript is the mℓ quantum number.
As an example of how the full orbital names are generated for real orbitals, one may work through the relevant entries in the table of spherical harmonics, first for a simple case and then for a more complicated one.
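As an illustrative sketch of this procedure (using standard textbook spherical harmonics; the specific cases worked in the original example may differ), combining the ℓ = 1, m = ±1 harmonics gives

$$
Y_1^{\pm 1} = \mp\sqrt{\tfrac{3}{8\pi}}\,\frac{x \pm iy}{r},
\qquad
\tfrac{1}{\sqrt{2}}\left(Y_1^{-1} - Y_1^{1}\right) = \sqrt{\tfrac{3}{4\pi}}\,\frac{x}{r},
$$

so the polynomial in the numerator is x and the orbital built from this combination is labelled px. The analogous ℓ = 2, m = ±2 combination is proportional to xy/r2 and is labelled dxy.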
In all these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in x, y, and z appearing in the numerator. We ignore any terms in the polynomial except for the term with the highest exponent.
We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the and quantum numbers.
The expressions above all use the Condon–Shortley phase convention, which is favored by quantum physicists. Other conventions exist for the phase of the spherical harmonics. Under these different conventions the px and py orbitals may appear, for example, as the sum and difference of p+1 and p−1, contrary to what is shown above.
Below is a list of these Cartesian polynomial names for the atomic orbitals. There does not seem to be a reference in the literature as to how to abbreviate the long Cartesian spherical harmonic polynomials for higher ℓ, so there does not seem to be consensus on the naming of g orbitals or higher according to this nomenclature.
Shapes of orbitals
Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although the probability density, as the square of an absolute value, is everywhere non-negative, the sign of the wave function is often indicated in each subregion of the orbital picture.
Sometimes the ψ function is graphed to show its phases, rather than the probability density |ψ|2, which has no phase (the phase is lost when taking the absolute value, since ψ is a complex number). ψ orbital graphs tend to have less spherical, thinner lobes than |ψ|2 graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, to show wave function phase, shows mostly ψ graphs.
The lobes can be seen as standing wave interference patterns between the two counter-rotating, ring-resonant traveling wave m and −m modes; the projection of the orbital onto the xy plane has a resonant wavelength around the circumference. Although rarely shown, the traveling wave solutions can be seen as rotating banded tori; the bands represent phase information. For each m there are two standing wave solutions, obtained by combining the m and −m modes. If m = 0, the orbital is vertical, counter rotating information is unknown, and the orbital is z-axis symmetric. If ℓ = 0 there are no counter rotating modes. There are only radial modes and the shape is spherically symmetric.
Nodal planes and nodal spheres are surfaces on which the probability density vanishes. The number of nodal surfaces is controlled by the quantum numbers n and ℓ. An orbital with azimuthal quantum number ℓ has ℓ nodal planes passing through the origin. For example, the s orbitals (ℓ = 0) are spherically symmetric and have no nodal planes, whereas the p orbitals (ℓ = 1) have a single nodal plane between the lobes. The number of nodal spheres equals n − ℓ − 1, consistent with the restriction ℓ ≤ n − 1 on the quantum numbers. The principal quantum number controls the total number of nodal surfaces, which is n − 1. Loosely speaking, n is energy, ℓ is analogous to eccentricity, and mℓ is orientation.
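A quick worked check of these counting rules (an illustrative example, not from the original text): for a 3p orbital, n = 3 and ℓ = 1, so

$$
\underbrace{\ell = 1}_{\text{nodal planes}},\qquad
\underbrace{n-\ell-1 = 1}_{\text{nodal spheres}},\qquad
\underbrace{n-1 = 2}_{\text{total nodal surfaces}}.
$$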
In general, n determines the size and energy of the orbital for a given nucleus; as n increases, the size of the orbital increases. The higher nuclear charge of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the size of the atom remains very roughly constant, even as the number of electrons increases.
Also in general terms, ℓ determines an orbital's shape, and mℓ its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on mℓ also. Together, the whole set of orbitals for a given ℓ and n fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes.
The single s orbitals (ℓ = 0) are shaped like spheres. For n = 1 it is roughly a solid ball (densest at the center and fading outward exponentially), but for n = 2 or higher, each single s orbital is made of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). The s orbitals for all n values are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction, often termed the impact parameter effect, is included in the outcome.
The shapes of p, d and f orbitals are described verbally here and shown graphically in the Orbitals table below. The three p orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"; there are two lobes pointing in opposite directions from each other). The three p orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of mℓ. The overall result is a lobe pointing along each direction of the primary axes.
Four of the five d orbitals for n = 3 look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes (the lobes are between the pairs of primary axes), and the fourth has its center along the x and y axes themselves. The fifth and final d orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair.
There are seven f orbitals, each with shapes more complex than those of the d orbitals.
Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p) an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital.
The shapes of atomic orbitals in a one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics; in fact it is possible to generate sets where all the d's are the same shape, just as the px and py are the same shape.
Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number of the same shell (e.g., all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ) is spherical. This is known as Unsöld's theorem.
Orbitals table
This table shows the real hydrogen-like wave functions for all atomic orbitals up to 7s, and therefore covers the occupied orbitals in the ground state of all elements in the periodic table up to radium and some beyond. "ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The pz orbital is the same as the p0 orbital, but the px and py are formed by taking linear combinations of the p+1 and p−1 orbitals (which is why they are listed under the m = ±1 label). Also, the p+1 and p−1 are not the same shape as the p0, since they are pure spherical harmonics.
* No elements with 6f, 7d or 7f electrons have been discovered yet.
† Elements with 7p electrons have been discovered, but their electronic configurations are only predicted – save the exceptional Lr, which fills 7p1 instead of 6d1.
‡ For the elements whose highest occupied orbital is a 6d orbital, only some electronic configurations have been confirmed. (Mt, Ds, Rg and Cn are still missing).
These are the real-valued orbitals commonly used in chemistry. Only the orbitals where mℓ = 0 are eigenstates of the orbital angular momentum operator, Lz. The columns with mℓ = ±1, ±2, ... are combinations of two eigenstates.
Qualitative understanding of shapes
The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism).
This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum.
A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with the orbital eccentricity of 1 but a finite major axis, not physically possible (because particles were to collide), but can be imagined as a limit of orbits with equal major axes but increasing eccentricity.
Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. A correspondence can be considered where the wave functions of a vibrating drum head are functions of two coordinates and the wave functions for a vibrating sphere are functions of three coordinates.
None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it.
In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons.
Orbital energy
In atoms with one electron (hydrogen-like atoms), the energy of an orbital (and, consequently, of any electron in the orbital) is determined mainly by n. The n = 1 orbital has the lowest possible energy in the atom. Each successively higher value of n has a higher energy, but the difference decreases as n increases. For high n, the energy becomes so high that the electron can easily escape the atom. In single electron atoms, all levels with different ℓ within a given n are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken slightly in the solution to the Dirac equation (where the energy depends on n and another quantum number j), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift.
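For reference, the standard Schrödinger result for a hydrogen-like atom (a textbook formula, not taken from this article) makes the n-dependence explicit:

$$
E_n = -\frac{Z^2}{n^2}\,\frac{m_e e^4}{8\,\varepsilon_0^2 h^2} \approx -13.6\ \text{eV}\,\frac{Z^2}{n^2},
$$

so the levels crowd together as n grows and approach the ionization limit E = 0 from below.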
In atoms with multiple electrons, the energy of an electron depends not only on its orbital, but also on its interactions with other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n but also on ℓ. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s orbital in the next higher shell; when ℓ = 3 the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled.
The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms with higher atomic number, the ℓ of the electrons becomes more and more of a determining factor in their energy, and the principal quantum numbers n of the electrons become less and less important in their energy placement.
The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with n and ℓ given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below.
Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known.
Electron placement and the periodic table
Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as the spin magnetic quantum number ms. Thus, two electrons may occupy a single orbital, so long as they have different values of ms. Because ms takes one of only two values (+1/2 or −1/2), at most two electrons can occupy each orbital.
Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above.
This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom.
The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell.
The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements:
Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms.
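The Madelung ("n + ℓ, then n") rule behind this filling order is simple enough to generate mechanically. The following is a minimal sketch (illustrative code, not part of the article); the subshell labels and the cutoff at 7p are choices made here to match the list above:

```python
# Minimal sketch: reproduce the Madelung filling order described above by
# sorting subshells first on n + l, then on n.
L_LETTERS = "spdfghik"  # letter codes for l = 0, 1, 2, ... (j is skipped)

def madelung_order(max_n):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # the Madelung rule
    return [f"{n}{L_LETTERS[l]}" for n, l in subshells]

order = madelung_order(max_n=7)
print(", ".join(order[:order.index("7p") + 1]))
# 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
```

The printed sequence matches the subshell order listed above, with the exceptions to the rule (such as Cr and Cu) arising from the additional atomic details just mentioned.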
The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties.
Relativistic effects
For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms. This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy.
Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium.
In the Bohr model, an electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed.
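Plugging in numbers (a worked example, not from the original text): with α ≈ 1/137,

$$
v = Z\alpha c \;\Rightarrow\;
v(Z=1) \approx \frac{c}{137} \approx 2.2\times 10^{6}\ \text{m/s},
\qquad
v(Z=137) \approx c ,
$$

which is why Z ≈ 137 marks the breakdown of the non-relativistic picture.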
There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes.
pp hybridization (conjectured)
In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon.
Transitions between orbitals
Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding with the exact energy difference between said states.
Consider two states of the hydrogen atom:
State 1: n = 1, ℓ = 0, and mℓ = 0
State 2: n = 2, ℓ = 0, and mℓ = 0
By quantum theory, state 1 has a fixed energy E1, and state 2 has a fixed energy E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom with an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are greater or lower in energy cannot be absorbed by the electron, because the electron can jump only to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2.
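As a worked example (standard hydrogen values, not quoted from the text above): for the hydrogen n = 1 → n = 2 transition,

$$
\Delta E = E_2 - E_1 = -13.6\ \text{eV}\left(\frac{1}{2^2}-\frac{1}{1^2}\right) = 10.2\ \text{eV},
\qquad
\lambda = \frac{hc}{\Delta E} \approx 122\ \text{nm},
$$

which corresponds to the Lyman-alpha line observed in hydrogen spectra.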
The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model.
The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron.
See also
Atomic electron configuration table
Condensed matter physics
Electron configuration
Energy level
Hund's rules
Molecular orbital
Orbital overlap
Quantum chemistry
Quantum chemistry computer programs
Solid-state physics
Wave function collapse
Wiswesser's rule
References
External links
3D representation of hydrogenic orbitals
The Orbitron, a visualization of all common and uncommon atomic orbitals, from 1s to 7g
Grand table Still images of many orbitals
Atomic physics
Chemical bonding
Electron states
Quantum chemistry
Articles containing video clips | Atomic orbital | [
"Physics",
"Chemistry",
"Materials_science"
] | 10,497 | [
"Electron",
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
"Atomic physics",
" molecular",
"Atomic",
"nan",
"Chemical bonding",
"Electron states",
" and optical physics"
] |
1,207 | https://en.wikipedia.org/wiki/Amino%20acid | Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of life.
Amino acids can be classified according to the locations of the core structural functional groups (alpha- (α-), beta- (β-), gamma- (γ-) amino acids, etc.); other categories relate to polarity, ionization, and side-chain group type (aliphatic, acyclic, aromatic, polar, etc.). In the form of proteins, amino-acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in the emergence of life on Earth.
Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of the fictitious "neutral" structure shown in the illustration. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula CH3−CH(NH2)−COOH. The Commission justified this approach as follows:
The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules.
History
The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovered in 1820. The last of the 20 common amino acids to be discovered was threonine in 1935 by William Cumming Rose, who also determined the essential amino acids and established the minimum daily requirements of all amino acids for optimal growth.
The unity of the chemical category was recognized by Wurtz in 1865, but he gave no particular name to it. The first use of the term "amino acid" in the English language dates from 1898, while the German term, Aminosäure, was used earlier. Proteins were found to yield amino acids after enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister independently proposed that proteins are formed from many amino acids, whereby bonds are formed between the amino group of one amino acid and the carboxyl group of another, resulting in a linear structure that Fischer termed "peptide".
General structure
2-, alpha-, or α-amino acids have the generic formula H2NCHRCOOH in most cases, where R is an organic substituent known as a "side chain".
Of the many hundreds of described amino acids, 22 are proteinogenic ("protein-building"). It is these 22 compounds that combine to give a vast array of peptides and proteins assembled by ribosomes. Non-proteinogenic or modified amino acids may arise from post-translational modification or during nonribosomal peptide synthesis.
Chirality
The carbon atom next to the carboxyl group is called the α–carbon. In proteinogenic amino acids, it bears the amine and the R group or side chain specific to each amino acid, as well as a hydrogen atom. With the exception of glycine, for which the side chain is also a hydrogen atom, the α–carbon is stereogenic. All chiral proteinogenic amino acids have the L configuration. They are "left-handed" enantiomers, which refers to the stereoisomers of the alpha carbon.
A few D-amino acids ("right-handed") have been found in nature, e.g., in bacterial envelopes, as a neuromodulator (D-serine), and in some antibiotics. Rarely, D-amino acid residues are found in proteins, and are converted from the L-amino acid as a post-translational modification.
Side chains
Polar charged side chains
Five amino acids possess a charge at neutral pH. Often these side chains appear on the surfaces of proteins to enable their solubility in water, and side chains with opposite charges form important electrostatic contacts called salt bridges that maintain structures within a single protein or between interfacing proteins. Many proteins specifically bind metal ions into their structures, and these interactions are commonly mediated by charged side chains such as aspartate, glutamate and histidine. Under certain conditions, each ion-forming group can be charged, forming double salts.
The two negatively charged amino acids at neutral pH are aspartate (Asp, D) and glutamate (Glu, E). The anionic carboxylate groups behave as Brønsted bases in most circumstances. Enzymes in very low pH environments, like the aspartic protease pepsin in mammalian stomachs, may have catalytic aspartate or glutamate residues that act as Brønsted acids.
There are three amino acids with side chains that are cations at neutral pH: arginine (Arg, R), lysine (Lys, K) and histidine (His, H). Arginine has a charged guanidino group and lysine a charged alkyl amino group; both are fully protonated at pH 7. Histidine's imidazole group has a pKa of 6.0, and is only around 10% protonated at neutral pH. Because histidine is easily found in both its basic and conjugate acid forms, it often participates in catalytic proton transfers in enzyme reactions.
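The "around 10% protonated" figure follows from the Henderson–Hasselbalch equation. A minimal sketch; the lysine pKa of about 10.5 used for comparison is a typical textbook value and not taken from this article:

```python
def fraction_protonated(pka: float, ph: float) -> float:
    """Fraction of a basic group in its protonated (conjugate-acid) form,
    from the Henderson-Hasselbalch relation: pH = pKa + log10([base]/[acid])."""
    ratio_base_to_acid = 10 ** (ph - pka)
    return 1 / (1 + ratio_base_to_acid)

# Histidine's imidazole (pKa ~ 6.0) at physiological pH:
print(f"{fraction_protonated(6.0, 7.0):.0%}")   # ~9%, i.e. "around 10% protonated"
# Lysine's side-chain amino group (assumed pKa ~ 10.5) is essentially fully protonated at pH 7:
print(f"{fraction_protonated(10.5, 7.0):.2%}")  # ~99.97%
```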
Polar uncharged side chains
The polar, uncharged amino acids serine (Ser, S), threonine (Thr, T), asparagine (Asn, N) and glutamine (Gln, Q) readily form hydrogen bonds with water and other amino acids. They do not ionize in normal conditions, a prominent exception being the catalytic serine in serine proteases. This is an example of severe perturbation, and is not characteristic of serine residues in general. Threonine has two chiral centers, not only the L (2S) chiral center at the α-carbon shared by all amino acids apart from achiral glycine, but also (3R) at the β-carbon. The full stereochemical specification is (2S,3R)-L-threonine.
Hydrophobic side chains
Nonpolar amino acid interactions are the primary driving force behind the processes that fold proteins into their functional three-dimensional structures. With the exception of tyrosine (Tyr, Y), none of these amino acids' side chains ionizes easily, so they have no relevant pKa values. The hydroxyl of tyrosine can deprotonate at high pH, forming the negatively charged phenolate. Because of this, one could place tyrosine into the polar, uncharged amino acid category, but its very low solubility in water matches the characteristics of hydrophobic amino acids well.
Special case side chains
Several side chains are not described well by the charged, polar and hydrophobic categories. Glycine (Gly, G) could be considered a polar amino acid since its small size means that its solubility is largely determined by the amino and carboxylate groups. However, the lack of any side chain provides glycine with a unique flexibility among amino acids with large ramifications to protein folding. Cysteine (Cys, C) can also form hydrogen bonds readily, which would place it in the polar amino acid category, though it can often be found in protein structures forming covalent bonds, called disulphide bonds, with other cysteines. These bonds influence the folding and stability of proteins, and are essential in the formation of antibodies. Proline (Pro, P) has an alkyl side chain and could be considered hydrophobic, but because the side chain joins back onto the alpha amino group it becomes particularly inflexible when incorporated into proteins. Similar to glycine this influences protein structure in a way unique among amino acids. Selenocysteine (Sec, U) is a rare amino acid not directly encoded by DNA, but is incorporated into proteins via the ribosome. Selenocysteine has a lower redox potential compared to the similar cysteine, and participates in several unique enzymatic reactions. Pyrrolysine (Pyl, O) is another amino acid not encoded in DNA, but synthesized into protein by ribosomes. It is found in archaeal species where it participates in the catalytic activity of several methyltransferases.
β- and γ-amino acids
Amino acids with the structure H3N+−CXY−CXY−CO2−, such as β-alanine, a component of carnosine and a few other peptides, are β-amino acids. Ones with the structure H3N+−CXY−CXY−CXY−CO2− are γ-amino acids, and so on, where X and Y are two substituents (one of which is normally H).
Zwitterions
The common natural forms of amino acids have a zwitterionic structure, with −NH3+ (−NH2+− in the case of proline) and −CO2− functional groups attached to the same C atom, and are thus α-amino acids. These are the only ones found in proteins during translation in the ribosome.
In aqueous solution at pH close to neutrality, amino acids exist as zwitterions, i.e. as dipolar ions with both −NH3+ and −CO2− in charged states, so the overall structure is H3N+−CHR−CO2−. At physiological pH the so-called "neutral forms" are not present to any measurable degree. Although the two charges in the zwitterion structure add up to zero, it is misleading to call a species with a net charge of zero "uncharged".
In strongly acidic conditions (pH below 3), the carboxylate group becomes protonated and the structure becomes an ammonio carboxylic acid, H3N+−CHR−CO2H. This is relevant for enzymes like pepsin that are active in acidic environments such as the mammalian stomach and lysosomes, but does not significantly apply to intracellular enzymes. In highly basic conditions (pH greater than 10, not normally seen in physiological conditions), the ammonio group is deprotonated to give H2N−CHR−CO2−.
Although various definitions of acids and bases are used in chemistry, the only one that is useful for chemistry in aqueous solution is that of Brønsted: an acid is a species that can donate a proton to another species, and a base is one that can accept a proton. This criterion is used to label the groups in the above illustration. The carboxylate side chains of aspartate and glutamate residues are the principal Brønsted bases in proteins. Likewise, lysine, tyrosine and cysteine will typically act as Brønsted acids. Histidine under these conditions can act both as a Brønsted acid and a base.
Isoelectric point
For amino acids with uncharged side-chains, the zwitterion predominates at pH values between the two pKa values, but coexists in equilibrium with small amounts of net negative and net positive ions. At the midpoint between the two pKa values, the trace amount of net negative and trace of net positive ions balance, so that the average net charge of all forms present is zero. This pH is known as the isoelectric point pI, so pI = ½(pKa1 + pKa2).
For amino acids with charged side chains, the pKa of the side chain is involved. Thus for aspartate or glutamate with negative side chains, the terminal amino group is essentially entirely in the charged form −NH3+, but this positive charge needs to be balanced by the state in which just one C-terminal carboxylate group is negatively charged. This occurs halfway between the two carboxylate pKa values: pI = ½(pKa1 + pKa(R)), where pKa(R) is the side chain pKa.
Similar considerations apply to other amino acids with ionizable side-chains, including not only glutamate (similar to aspartate), but also cysteine and tyrosine (whose side chains ionize to anions) and histidine, lysine and arginine (with positive side chains).
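Both averages are straightforward to compute. A minimal sketch, using approximate textbook pKa values for glycine and aspartate; the numbers are illustrative defaults rather than values quoted in this article:

```python
def isoelectric_point_glycine(pka_carboxyl=2.34, pka_amino=9.60):
    """pI of an amino acid with a non-ionizable side chain:
    the average of the two pKa values that bracket the zwitterion."""
    return (pka_carboxyl + pka_amino) / 2

def isoelectric_point_aspartate(pka_carboxyl=1.88, pka_side_chain=3.65):
    """pI of aspartate: halfway between the two carboxylate pKa values,
    since the net charge passes through zero between them."""
    return (pka_carboxyl + pka_side_chain) / 2

print(isoelectric_point_glycine())    # ~5.97
print(isoelectric_point_aspartate())  # ~2.77
```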
Amino acids have zero mobility in electrophoresis at their isoelectric point, although this behaviour is more usually exploited for peptides and proteins than single amino acids. Zwitterions have minimum solubility at their isoelectric point, and some amino acids (in particular, with nonpolar side chains) can be isolated by precipitation from water by adjusting the pH to the required isoelectric point.
Physicochemical properties
The 20 canonical amino acids can be classified according to their properties. Important factors are charge, hydrophilicity or hydrophobicity, size, and functional groups. These properties influence protein structure and protein–protein interactions. The water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) buried in the middle of the protein, whereas hydrophilic side chains are exposed to the aqueous solvent. (In biochemistry, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid.) The integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them in the lipid bilayer. Some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that sticks to the membrane. In a similar fashion, proteins that have to bind to positively charged molecules have surfaces rich in negatively charged amino acids such as glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich in positively charged amino acids like lysine and arginine. For example, lysine and arginine are present in large amounts in the low-complexity regions of nucleic-acid binding proteins. There are various hydrophobicity scales of amino acid residues.
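One common use of such scales is a sliding-window average over a sequence to pick out hydrophobic stretches (for example, candidate membrane-spanning or buried segments). The sketch below is illustrative only; the dictionary holds a small subset of commonly cited Kyte–Doolittle values, and a complete published scale should be substituted in practice:

```python
# Subset of the Kyte-Doolittle hydropathy scale (positive = hydrophobic).
HYDROPATHY = {
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "A": 1.8,
    "G": -0.4, "S": -0.8, "K": -3.9, "R": -4.5, "E": -3.5,
}

def windowed_hydropathy(sequence: str, window: int = 5):
    """Mean hydropathy over a sliding window; high values suggest
    hydrophobic (possibly membrane-embedded or buried) stretches."""
    scores = [HYDROPATHY[res] for res in sequence]
    return [
        sum(scores[i:i + window]) / window
        for i in range(len(scores) - window + 1)
    ]

print(windowed_hydropathy("KRAGEVLILFVA"))  # rises toward the hydrophobic C-terminal stretch
```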
Some amino acids have special properties. Cysteine can form covalent disulfide bonds to other cysteine residues. Proline forms a cycle to the polypeptide backbone, and glycine is more flexible than other amino acids.
Glycine and proline are strongly present within low complexity regions of both eukaryotic and prokaryotic proteins, whereas the opposite is the case with cysteine, phenylalanine, tryptophan, methionine, valine, leucine and isoleucine, which are highly reactive, or complex, or hydrophobic.
Many proteins undergo a range of posttranslational modifications, whereby additional chemical groups are attached to the amino acid residue side chains sometimes producing lipoproteins (that are hydrophobic), or glycoproteins (that are hydrophilic) allowing the protein to attach temporarily to a membrane. For example, a signaling protein can attach and then detach from a cell membrane, because it contains cysteine residues that can have the fatty acid palmitic acid added to them and subsequently removed.
Table of standard amino acid abbreviations and properties
Although one-letter symbols are included in the table, IUPAC–IUBMB recommend that "Use of the one-letter symbols should be restricted to the comparison of long sequences".
The one-letter notation was chosen by IUPAC-IUB based on the following rules:
Initial letters are used where there is no ambiguity: C cysteine, H histidine, I isoleucine, M methionine, S serine, V valine,
Where arbitrary assignment is needed, the structurally simpler amino acids are given precedence: A alanine, G glycine, L leucine, P proline, T threonine,
F PHenylalanine and R aRginine are assigned by being phonetically suggestive,
W tryptophan is assigned based on the double ring being visually suggestive to the bulky letter W,
K lysine and Y tyrosine are assigned as alphabetically nearest to their initials L and T (note that U was avoided for its similarity with V, while X was reserved for undetermined or atypical amino acids); for tyrosine the mnemonic tYrosine was also proposed,
D aspartate was assigned arbitrarily, with the proposed mnemonic asparDic acid; E glutamate was assigned in alphabetical sequence being larger by merely one methylene –CH2– group,
N asparagine was assigned arbitrarily, with the proposed mnemonic asparagiNe; Q glutamine was assigned in alphabetical sequence of those still available (note again that O was avoided due to similarity with D), with the proposed mnemonic Qlutamine.
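The resulting assignments amount to a simple lookup table; a minimal sketch for the 20 standard amino acids:

```python
# Standard three-letter and one-letter codes for the 20 canonical amino acids.
ONE_LETTER = {
    "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
    "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
    "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
    "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
}

def to_one_letter(three_letter_sequence):
    """Convert a list of three-letter codes into a one-letter string."""
    return "".join(ONE_LETTER[res] for res in three_letter_sequence)

print(to_one_letter(["Met", "Lys", "Trp", "Gln"]))  # "MKWQ"
```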
Two additional amino acids are in some species coded for by codons that are usually interpreted as stop codons: selenocysteine (encoded by a UGA codon in the presence of a SECIS element) and pyrrolysine (encoded by UAG).
In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue. They are also used to summarize conserved protein sequence motifs. The use of single letters to indicate sets of similar residues is similar to the use of abbreviation codes for degenerate bases.
Unk is sometimes used instead of Xaa, but is less standard.
Ter or * (from termination) is used in notation for mutations in proteins when a stop codon occurs. It corresponds to no amino acid at all.
In addition, many nonstandard amino acids have a specific code. For example, several peptide drugs, such as Bortezomib and MG132, are artificially synthesized and retain their protecting groups, which have specific codes. Bortezomib is Pyz–Phe–boroLeu, and MG132 is Z–Leu–Leu–Leu–al. To aid in the analysis of protein structure, photo-reactive amino acid analogs are available. These include photoleucine (pLeu) and photomethionine (pMet).
Occurrence and functions in biochemistry
Proteinogenic amino acids
Amino acids are the precursors to proteins. They join by condensation reactions to form short polymer chains called peptides or longer chains called either polypeptides or proteins. These chains are linear and unbranched, with each amino acid residue within the chain attached to two neighboring amino acids. In nature, the process of making proteins encoded by RNA genetic material is called translation and involves the step-by-step addition of amino acids to a growing protein chain by a ribozyme that is called a ribosome. The order in which the amino acids are added is read through the genetic code from an mRNA template, which is an RNA derived from one of the organism's genes.
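The codon-by-codon reading of an mRNA template can be illustrated with a toy translation function. The dictionary below is only a small excerpt of the 64-entry standard genetic code, included for illustration:

```python
# Excerpt of the standard genetic code ("*" marks stop codons).
CODON_TABLE = {
    "AUG": "M", "UUU": "F", "UUC": "F", "GGC": "G", "GCU": "A",
    "AAA": "K", "GAA": "E", "UGG": "W", "UAU": "Y",
    "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna: str) -> str:
    """Step through an mRNA three bases at a time, appending one residue
    per codon, and stop at the first stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "*":
            break
        protein.append(residue)
    return "".join(protein)

print(translate("AUGUUUGGCAAAUGA"))  # "MFGK"
```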
Twenty-two amino acids are naturally incorporated into polypeptides and are called proteinogenic or natural amino acids. Of these, 20 are encoded by the universal genetic code. The remaining 2, selenocysteine and pyrrolysine, are incorporated into proteins by unique synthetic mechanisms. Selenocysteine is incorporated when the mRNA being translated includes a SECIS element, which causes the UGA codon to encode selenocysteine instead of a stop codon. Pyrrolysine is used by some methanogenic archaea in enzymes that they use to produce methane. It is coded for with the codon UAG, which is normally a stop codon in other organisms.
Several independent evolutionary studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of amino acids that constituted the early genetic code, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of amino acids that constituted later additions to the genetic code.
Standard vs nonstandard amino acids
The 20 amino acids that are encoded directly by the codons of the universal genetic code are called standard or canonical amino acids. A modified form of methionine (N-formylmethionine) is often incorporated in place of methionine as the initial amino acid of proteins in bacteria, mitochondria and plastids (including chloroplasts). Other amino acids are called nonstandard or non-canonical. Most of the nonstandard amino acids are also non-proteinogenic (i.e. they cannot be incorporated into proteins during translation), but two of them are proteinogenic, as they can be incorporated translationally into proteins by exploiting information not encoded in the universal genetic code.
The two nonstandard proteinogenic amino acids are selenocysteine (present in many non-eukaryotes as well as most eukaryotes, but not coded directly by DNA) and pyrrolysine (found only in some archaea and at least one bacterium). The incorporation of these nonstandard amino acids is rare. For example, 25 human proteins include selenocysteine in their primary structure, and the structurally characterized enzymes (selenoenzymes) employ selenocysteine as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons. For example, selenocysteine is encoded by a stop codon together with a SECIS element.
N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts) is generally considered as a form of methionine rather than as a separate proteinogenic amino acid. Codon–tRNA combinations not found in nature can also be used to "expand" the genetic code and form novel proteins known as alloproteins incorporating non-proteinogenic amino acids.
Non-proteinogenic amino acids
Aside from the 22 proteinogenic amino acids, many non-proteinogenic amino acids are known. Those either are not found in proteins (for example carnitine, GABA, levothyroxine) or are not produced directly and in isolation by standard cellular machinery. For example, hydroxyproline is synthesised from proline; another example is selenomethionine.
Non-proteinogenic amino acids that are found in proteins are formed by post-translational modification. Such modifications can also determine the localization of the protein, e.g., the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. Examples:
The carboxylation of glutamate allows for better binding of calcium cations.
Hydroxyproline, generated by hydroxylation of proline, is a major component of the connective tissue collagen.
Hypusine, in the translation initiation factor EIF5A, is formed by modification of a lysine residue.
Some non-proteinogenic amino acids are not found in proteins. Examples include 2-aminoisobutyric acid and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids often occur as intermediates in the metabolic pathways for standard amino acids – for example, ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). A rare exception to the dominance of α-amino acids in biology is the β-amino acid beta alanine (3-aminopropanoic acid), which is used in plants and microorganisms in the synthesis of pantothenic acid (vitamin B5), a component of coenzyme A.
In mammalian nutrition
Amino acids are not a typical component of food: animals eat proteins. The protein is broken down into amino acids in the process of digestion. They are then used to synthesize new proteins, other biomolecules, or are oxidized to urea and carbon dioxide as a source of energy. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food.
Semi-essential and conditionally essential amino acids, and juvenile requirements
In addition, cysteine, tyrosine, and arginine are considered semiessential amino acids, and taurine a semi-essential aminosulfonic acid in children. Some amino acids are conditionally essential for certain ages or medical conditions. Essential amino acids may also vary from species to species. In children, the metabolic pathways that synthesize these monomers are not yet fully developed.
Non-protein functions
Many proteinogenic and non-proteinogenic amino acids have biological functions beyond being precursors to proteins and peptides. In humans, amino acids also have important roles in diverse biosynthetic pathways. Defenses against herbivores in plants sometimes employ amino acids. Examples:
Standard amino acids
Tryptophan is a precursor of the neurotransmitter serotonin.
Tyrosine (and its precursor phenylalanine) are precursors of the catecholamine neurotransmitters dopamine, epinephrine and norepinephrine and various trace amines.
Phenylalanine is a precursor of phenethylamine and tyrosine in humans. In plants, it is a precursor of various phenylpropanoids, which are important in plant metabolism.
Glycine is a precursor of porphyrins such as heme.
Arginine is a precursor of nitric oxide.
Ornithine and S-adenosylmethionine are precursors of polyamines.
Aspartate, glycine, and glutamine are precursors of nucleotides.
Roles for nonstandard amino acids
Carnitine is used in lipid transport.
Gamma-aminobutyric acid is a neurotransmitter.
5-HTP (5-hydroxytryptophan) is used for experimental treatment of depression.
L-DOPA (L-dihydroxyphenylalanine) is used in the treatment of Parkinson's disease.
Eflornithine inhibits ornithine decarboxylase and is used in the treatment of sleeping sickness.
Canavanine, an analogue of arginine found in many legumes, is an antifeedant, protecting the plant from predators.
Mimosine, found in some legumes, is another possible antifeedant. This compound is an analogue of tyrosine and can poison animals that graze on these plants.
However, not all of the functions of other abundant nonstandard amino acids are known.
Uses in industry
Animal feed
Amino acids are sometimes added to animal feed because some of the components of these feeds, such as soybeans, have low levels of some of the essential amino acids, especially of lysine, methionine, threonine, and tryptophan. Likewise amino acids are used to chelate metal cations in order to improve the absorption of minerals from feed supplements.
Food
The food industry is a major consumer of amino acids, especially glutamic acid, which is used as a flavor enhancer, and aspartame (aspartylphenylalanine 1-methyl ester), which is used as an artificial sweetener. Amino acids are sometimes added to food by manufacturers to alleviate symptoms of mineral deficiencies, such as anemia, by improving mineral absorption and reducing negative side effects from inorganic mineral supplementation.
Chemical building blocks
Amino acids are low-cost feedstocks used in chiral pool synthesis as enantiomerically pure building blocks.
Amino acids are used in the synthesis of some cosmetics.
Aspirational uses
Fertilizer
The chelating ability of amino acids is sometimes used in fertilizers to facilitate the delivery of minerals to plants in order to correct mineral deficiencies, such as iron chlorosis. These fertilizers are also used to prevent deficiencies from occurring and to improve the overall health of the plants.
Biodegradable plastics
Amino acids have been considered as components of biodegradable polymers, which have applications as environmentally friendly packaging and in medicine in drug delivery and the construction of prosthetic implants. An interesting example of such materials is polyaspartate, a water-soluble biodegradable polymer that may have applications in disposable diapers and agriculture. Due to its solubility and ability to chelate metal ions, polyaspartate is also being used as a biodegradable antiscaling agent and a corrosion inhibitor.
Synthesis
Chemical synthesis
The commercial production of amino acids usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Some amino acids are produced by enzymatic conversions of synthetic intermediates. For example, 2-aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
Biosynthesis
In plants, nitrogen is first assimilated into organic compounds in the form of glutamate, formed from alpha-ketoglutarate and ammonia in the mitochondrion. For other amino acids, plants use transaminases to move the amino group from glutamate to another alpha-keto acid. For example, aspartate aminotransferase converts glutamate and oxaloacetate to alpha-ketoglutarate and aspartate. Other organisms use transaminases for amino acid synthesis, too.
Nonstandard amino acids are usually formed through modifications to standard amino acids. For example, homocysteine is formed through the transsulfuration pathway or by the demethylation of methionine via the intermediate metabolite S-adenosylmethionine, while hydroxyproline is made by a post translational modification of proline.
Microorganisms and plants synthesize many uncommon amino acids. For example, some microbes make 2-aminoisobutyric acid and lanthionine, which is a sulfide-bridged derivative of alanine. Both of these amino acids are found in peptide antibiotics such as alamethicin. In plants, 1-aminocyclopropane-1-carboxylic acid is a small disubstituted cyclic amino acid that is an intermediate in the production of the plant hormone ethylene.
Primordial synthesis
The formation of amino acids and peptides is assumed to have preceded and perhaps induced the emergence of life on earth. Amino acids can form from simple precursors under various conditions. Surface-based chemical metabolism of amino acids and very small compounds may have led to the build-up of amino acids, coenzymes and phosphate-based small carbon molecules. Amino acids and similar building blocks could have been elaborated into proto-peptides, with peptides being considered key players in the origin of life.
In the famous Miller–Urey experiment, the passage of an electric arc through a mixture of methane, hydrogen, and ammonia produced a large number of amino acids. Since then, scientists have discovered a range of ways and components by which the potentially prebiotic formation and chemical evolution of peptides may have occurred, such as condensing agents, the design of self-replicating peptides and a number of non-enzymatic mechanisms by which amino acids could have emerged and elaborated into peptides. Several hypotheses invoke the Strecker synthesis, whereby hydrogen cyanide, simple aldehydes, ammonia, and water produce amino acids.
According to a review, amino acids, and even peptides, "turn up fairly regularly in the various experimental broths that have been allowed to be cooked from simple chemicals. This is because nucleotides are far more difficult to synthesize chemically than amino acids." Regarding chronological order, it suggests that there must have been a 'protein world' or at least a 'polypeptide world', possibly later followed by the 'RNA world' and the 'DNA world'. Codon–amino acids mappings may be the biological information system at the primordial origin of life on Earth. While amino acids and consequently simple peptides must have formed under different experimentally probed geochemical scenarios, the transition from an abiotic world to the first life forms is to a large extent still unresolved.
Reactions
Amino acids undergo the reactions expected of the constituent functional groups.
Peptide bond formation
As both the amine and carboxylic acid groups of amino acids can react to form amide bonds, one amino acid molecule can react with another and become joined through an amide linkage. This polymerization of amino acids is what creates proteins. This condensation reaction yields the newly formed peptide bond and a molecule of water. In cells, this reaction does not occur directly; instead, the amino acid is first activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which catalyzes the attack of the amino group of the elongating protein chain on the ester bond. As a result of this mechanism, all proteins made by ribosomes are synthesized starting at their N-terminus and moving toward their C-terminus.
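The bookkeeping of the condensation is simple: each peptide bond formed releases one molecule of water. A minimal sketch, combining the molecular formulas of glycine and alanine as element counts:

```python
from collections import Counter

# Molecular formulas written as element counts.
GLYCINE = Counter({"C": 2, "H": 5, "N": 1, "O": 2})   # C2H5NO2
ALANINE = Counter({"C": 3, "H": 7, "N": 1, "O": 2})   # C3H7NO2
WATER   = Counter({"H": 2, "O": 1})

def condense(*amino_acids):
    """Formula of a linear peptide: sum the amino acid formulas and remove
    one water molecule per peptide bond formed."""
    total = Counter()
    for formula in amino_acids:
        total += formula
    bonds = len(amino_acids) - 1
    for element, count in WATER.items():
        total[element] -= count * bonds
    return dict(total)

print(condense(GLYCINE, ALANINE))  # {'C': 5, 'H': 10, 'N': 2, 'O': 3}, i.e. glycylalanine
```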
However, not all peptide bonds are formed in this way. In a few cases, peptides are synthesized by specific enzymes. For example, the tripeptide glutathione is an essential part of the defenses of cells against oxidative stress. This peptide is synthesized in two steps from free amino acids. In the first step, gamma-glutamylcysteine synthetase condenses cysteine and glutamate through a peptide bond formed between the side chain carboxyl of the glutamate (the gamma carbon of this side chain) and the amino group of the cysteine. This dipeptide is then condensed with glycine by glutathione synthetase to form glutathione.
In chemistry, peptides are synthesized by a variety of reactions. One of the most-used in solid-phase peptide synthesis uses the aromatic oxime derivatives of amino acids as activated units. These are added in sequence onto the growing peptide chain, which is attached to a solid resin support. Libraries of peptides are used in drug discovery through high-throughput screening.
The combination of functional groups allows amino acids to be effective polydentate ligands for metal–amino acid chelates.
The multiple side chains of amino acids can also undergo chemical reactions.
Catabolism
Degradation of an amino acid often involves deamination by moving its amino group to α-ketoglutarate, forming glutamate. This process involves transaminases, often the same as those used in amination during synthesis. In many vertebrates, the amino group is then removed through the urea cycle and is excreted in the form of urea. However, amino acid degradation can produce uric acid or ammonia instead. For example, serine dehydratase converts serine to pyruvate and ammonia. After removal of one or more amino groups, the remainder of the molecule can sometimes be used to synthesize new amino acids, or it can be used for energy by entering glycolysis or the citric acid cycle.
Complexation
Amino acids are bidentate ligands, forming transition metal amino acid complexes.
Chemical analysis
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
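Kjeldahl results are commonly converted to "crude protein" by multiplying the measured nitrogen by a conversion factor, conventionally 6.25 on the assumption that protein is roughly 16% nitrogen by mass; food-specific factors differ. A minimal sketch of that arithmetic:

```python
def crude_protein_percent(nitrogen_percent: float, factor: float = 6.25) -> float:
    """Convert Kjeldahl nitrogen (% of sample mass) to crude protein,
    assuming protein is ~16% nitrogen by mass (1 / 0.16 = 6.25)."""
    return nitrogen_percent * factor

print(crude_protein_percent(2.0))        # 12.5 % crude protein with the default factor
print(crude_protein_percent(2.0, 5.7))   # with a lower factor often applied to cereal proteins
```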
See also
Amino acid dating
Beta-peptide
Degron
Erepsin
Homochirality
Hyperaminoacidemia
Leucines
Miller–Urey experiment
Nucleic acid sequence
RNA codon table
Notes
References
Further reading
External links
Nitrogen cycle
Zwitterions | Amino acid | [
"Physics",
"Chemistry"
] | 7,760 | [
"Biomolecules by chemical classification",
"Matter",
"Amino acids",
"Nitrogen cycle",
"Zwitterions",
"Metabolism",
"Ions"
] |
1,208 | https://en.wikipedia.org/wiki/Alan%20Turing | Alan Mathison Turing (; 23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist. He was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science.
Born in London, Turing was raised in southern England. He graduated from King's College, Cambridge, and in 1938, earned a doctorate degree from Princeton University. During World War II, Turing worked for the Government Code and Cypher School at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. He led Hut 8, the section responsible for German naval cryptanalysis. Turing devised techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine. He played a crucial role in cracking intercepted messages that enabled the Allies to defeat the Axis powers in many engagements, including the Battle of the Atlantic.
After the war, Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine, one of the first designs for a stored-program computer. In 1948, Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. Turing wrote on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s. Despite these accomplishments, he was never fully recognised during his lifetime because much of his work was covered by the Official Secrets Act.
In 1952, Turing was prosecuted for homosexual acts. He accepted hormone treatment, a procedure commonly referred to as chemical castration, as an alternative to prison. Turing died on 7 June 1954, aged 41, from cyanide poisoning. An inquest determined his death as suicide, but the evidence is also consistent with accidental poisoning.
Following a campaign in 2009, British prime minister Gordon Brown made an official public apology for "the appalling way [Turing] was treated". Queen Elizabeth II granted a pardon in 2013. The term "Alan Turing law" is used informally to refer to a 2017 law in the UK that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts.
Turing left an extensive legacy in mathematics and computing which has become widely recognised with statues and many things named after him, including an annual award for computing innovation. His portrait appears on the Bank of England £50 note, first released on 23 June 2021 to coincide with his birthday. The audience vote in a 2019 BBC series named Turing the greatest person of the 20th century.
Early life and education
Family
Turing was born in Maida Vale, London, while his father, Julius Mathison Turing, was on leave from his position with the Indian Civil Service (ICS) of the British Raj government at Chatrapur, then in the Madras Presidency and presently in Odisha state, in India. Turing's father was the son of a clergyman, the Rev. John Robert Turing, from a Scottish family of merchants that had been based in the Netherlands and included a baronet. Turing's mother, Julius's wife, was Ethel Sara Turing (née Stoney), daughter of Edward Waller Stoney, chief engineer of the Madras Railways. The Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius and Ethel married on 1 October 1907 at the Church of Ireland St. Bartholomew's Church on Clyde Road in Ballsbridge, Dublin.
Julius's work with the ICS brought the family to British India, where his grandfather had been a general in the Bengal Army. However, both Julius and Ethel wanted their children to be brought up in Britain, so they moved to Maida Vale, London, where Alan Turing was born on 23 June 1912, as recorded by a blue plaque on the outside of the house of his birth, later the Colonnade Hotel. Turing had an elder brother, John Ferrier Turing, father of Sir John Dermot Turing, 12th Baronet of the Turing baronets.
Turing's father's civil service commission was still active during Turing's childhood years, and his parents travelled between Hastings in the United Kingdom and India, leaving their two sons to stay with a retired Army couple. At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea, now marked with a blue plaque. The plaque was unveiled on 23 June 2012, the centenary of Turing's birth.
Turing's parents purchased a house in Guildford in 1927, and Turing lived there during school holidays. The location is also marked with a blue plaque.
School
Turing's parents enrolled him at St Michael's, a primary school at 20 Charles Road, St Leonards-on-Sea, from the age of six to nine. The headmistress recognised his talent, noting that she "...had clever boys and hardworking boys, but Alan is a genius".
Between January 1922 and 1926, Turing was educated at Hazelhurst Preparatory School, an independent school in the village of Frant in Sussex (now East Sussex). In 1926, at the age of 13, he went on to Sherborne School, an independent boarding school in the market town of Sherborne in Dorset, where he boarded at Westcott House. The first day of term coincided with the 1926 General Strike in Britain, but Turing was so determined to attend that he rode his bicycle unaccompanied from Southampton to Sherborne, stopping overnight at an inn.
Turing's natural inclination towards mathematics and science did not earn him respect from some of the teachers at Sherborne, whose definition of education placed more emphasis on the classics. His headmaster wrote to his parents: "I hope he will not fall between two stools. If he is to stay at public school, he must aim at becoming educated. If he is to be solely a Scientific Specialist, he is wasting his time at a public school". Despite this, Turing continued to show remarkable ability in the studies he loved, solving advanced problems in 1927 without having studied even elementary calculus. In 1928, aged 16, Turing encountered Albert Einstein's work; not only did he grasp it, but it is possible that he managed to deduce Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit.
Christopher Morcom
At Sherborne, Turing formed a significant friendship with fellow pupil Christopher Collan Morcom (13 July 1911 – 13 February 1930), who has been described as Turing's first love. Their relationship provided inspiration in Turing's future endeavours, but it was cut short by Morcom's death, in February 1930, from complications of bovine tuberculosis, contracted after drinking infected cow's milk some years previously.
The event caused Turing great sorrow. He coped with his grief by working that much harder on the topics of science and mathematics that he had shared with Morcom. In a letter to Morcom's mother, Frances Isobel Morcom (née Swan), Turing wrote:
Turing's relationship with Morcom's mother continued long after Morcom's death, with her sending gifts to Turing, and him sending letters, typically on Morcom's birthday. A day before the third anniversary of Morcom's death (13 February 1933), he wrote to Mrs. Morcom:
Some have speculated that Morcom's death was the cause of Turing's atheism and materialism. Apparently, at this point in his life he still believed in such concepts as a spirit, independent of the body and surviving death. In a later letter, also written to Morcom's mother, Turing wrote:
University and work on computability
After graduating from Sherborne, Turing applied for several Cambridge college scholarships, including Trinity and King's, eventually earning an £80 per annum scholarship (equivalent to about £4,300 as of 2023) to study at the latter. There, Turing studied the undergraduate course in Schedule B (that is, a three-year Parts I and II of the Mathematical Tripos, with extra courses at the end of the third year, as Part III only emerged as a separate degree in 1934) from February 1931 to November 1934 at King's College, Cambridge, where he was awarded first-class honours in mathematics. His dissertation, On the Gaussian error function, written during his senior year and delivered in November 1934 (with a deadline date of 6 December), proved a version of the central limit theorem. It was finally accepted on 16 March 1935. By spring of that same year, Turing started his master's course (Part III)—which he completed in 1937—and, at the same time, he published his first paper, a one-page article called Equivalence of left and right almost periodicity (sent on 23 April), featured in the tenth volume of the Journal of the London Mathematical Society. Later that year, Turing was elected a Fellow of King's College on the strength of his dissertation, and he served there as a lecturer. However, unknown to Turing, the version of the theorem he proved in his paper had already been proven, in 1922, by Jarl Waldemar Lindeberg. Despite this, the committee found Turing's methods original and so regarded the work worthy of consideration for the fellowship. Abram Besicovitch's report for the committee went so far as to say that if Turing's work had been published before Lindeberg's, it would have been "an important event in the mathematical literature of that year".
Between the springs of 1935 and 1936, at the same time as Alonzo Church, Turing worked on the decidability of problems, starting from Gödel's incompleteness theorems. In mid-April 1936, Turing sent Max Newman the first draft typescript of his investigations. That same month, Church published his An Unsolvable Problem of Elementary Number Theory, with similar conclusions to Turing's then-yet unpublished work. Finally, on 28 May of that year, he finished and delivered his 36-page paper for publication called "On Computable Numbers, with an Application to the Entscheidungsproblem". It was published in the Proceedings of the London Mathematical Society journal in two parts, the first on 30 November and the second on 23 December. In this paper, Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. The Entscheidungsproblem (decision problem) was originally posed by German mathematician David Hilbert in 1928. Turing proved that his "universal computing machine" would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the decision problem by first showing that the halting problem for Turing machines is undecidable: it is not possible to decide algorithmically whether a Turing machine will ever halt. This paper has been called "easily the most influential math paper in history".
Although Turing's proof was published shortly after Church's equivalent proof using his lambda calculus, Turing's approach is considerably more accessible and intuitive than Church's. It also included a notion of a 'Universal Machine' (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other computation machine (as indeed could Church's lambda calculus). According to the Church–Turing thesis, Turing machines and the lambda calculus are capable of computing anything that is computable. John von Neumann acknowledged that the central concept of the modern computer was due to Turing's paper. To this day, Turing machines are a central object of study in theory of computation.
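The machine model itself is small enough to simulate directly. A minimal sketch of a single-tape Turing machine interpreter with a toy transition table; the state names, blank symbol and halting convention are illustrative choices, not Turing's original notation:

```python
def run_turing_machine(transitions, tape, state="start", head=0, max_steps=1000):
    """Run a single-tape Turing machine.
    transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is +1 (right) or -1 (left). Returns the tape contents when the
    machine reaches the 'halt' state, or None if it runs out of steps
    (illustrating that halting cannot, in general, be decided in advance)."""
    tape = dict(enumerate(tape))  # sparse tape; missing cells read as the blank "_"
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape))
        symbol = tape.get(head, "_")
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return None

# Toy machine: overwrite three blanks with "1", "0", "1", then halt.
toy = {
    ("start", "_"): ("wrote1", "1", +1),
    ("wrote1", "_"): ("wrote2", "0", +1),
    ("wrote2", "_"): ("halt", "1", +1),
}
print(run_turing_machine(toy, "___"))  # "101"
```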
From September 1936 to July 1938, Turing spent most of his time studying under Church at Princeton University, in the second year as a Jane Eliza Procter Visiting Fellow. In addition to his purely mathematical work, he studied cryptology and also built three of four stages of an electro-mechanical binary multiplier. In June 1938, he obtained his PhD from the Department of Mathematics at Princeton; his dissertation, Systems of Logic Based on Ordinals, introduced the concept of ordinal logic and the notion of relative computing, in which Turing machines are augmented with so-called oracles, allowing the study of problems that cannot be solved by Turing machines. John von Neumann wanted to hire him as his postdoctoral assistant, but he went back to the United Kingdom.
Career and research
When Turing returned to Cambridge, he attended lectures given in 1939 by Ludwig Wittgenstein about the foundations of mathematics. The lectures have been reconstructed verbatim, including interjections from Turing and other students, from students' notes. Turing and Wittgenstein argued and disagreed, with Turing defending formalism and Wittgenstein propounding his view that mathematics does not discover any absolute truths, but rather invents them.
Cryptanalysis
During the Second World War, Turing was a leading participant in the breaking of German ciphers at Bletchley Park. The historian and wartime codebreaker Asa Briggs has said, "You needed exceptional talent, you needed genius at Bletchley and Turing's was that genius."
From September 1938, Turing worked part-time with the Government Code and Cypher School (GC&CS), the British codebreaking organisation. He concentrated on cryptanalysis of the Enigma cipher machine used by Nazi Germany, together with Dilly Knox, a senior GC&CS codebreaker. Soon after the July 1939 meeting near Warsaw at which the Polish Cipher Bureau gave the British and French details of the wiring of Enigma machine's rotors and their method of decrypting Enigma machine's messages, Turing and Knox developed a broader solution. The Polish method relied on an insecure indicator procedure that the Germans were likely to change, which they in fact did in May 1940. Turing's approach was more general, using crib-based decryption for which he produced the functional specification of the bombe (an improvement on the Polish Bomba).
On 4 September 1939, the day after the UK declared war on Germany, Turing reported to Bletchley Park, the wartime station of GC&CS. Like all others who came to Bletchley, he was required to sign the Official Secrets Act, in which he agreed not to disclose anything about his work at Bletchley, with severe legal penalties for violating the Act.
Specifying the bombe was the first of five major cryptanalytical advances that Turing made during the war. The others were: deducing the indicator procedure used by the German navy; developing a statistical procedure dubbed Banburismus for making much more efficient use of the bombes; developing a procedure dubbed Turingery for working out the cam settings of the wheels of the Lorenz SZ 40/42 (Tunny) cipher machine and, towards the end of the war, the development of a portable secure voice scrambler at Hanslope Park that was codenamed Delilah.
By using statistical techniques to optimise the trial of different possibilities in the code breaking process, Turing made an innovative contribution to the subject. He wrote two papers discussing mathematical approaches, titled The Applications of Probability to Cryptography and Paper on Statistics of Repetitions, which were of such value to GC&CS and its successor GCHQ that they were not released to the UK National Archives until April 2012, shortly before the centenary of his birth. A GCHQ mathematician, "who identified himself only as Richard," said at the time that the fact that the contents had been restricted under the Official Secrets Act for some 70 years demonstrated their importance, and their relevance to post-war cryptanalysis:
Turing had a reputation for eccentricity at Bletchley Park. He was known to his colleagues as "Prof" and his treatise on Enigma was known as the "Prof's Book". According to historian Ronald Lewin, Jack Good, a cryptanalyst who worked with Turing, said of his colleague:
Peter Hilton recounted his experience working with Turing in Hut 8 in his "Reminiscences of Bletchley Park" from A Century of Mathematics in America:
Hilton echoed similar thoughts in the Nova PBS documentary Decoding Nazi Secrets.
While working at Bletchley, Turing, who was a talented long-distance runner, occasionally ran to London when he was needed for meetings, and he was capable of world-class marathon standards. Turing tried out for the 1948 British Olympic team, but he was hampered by an injury. His tryout time for the marathon was only 11 minutes slower than British silver medallist Thomas Richards' Olympic race time of 2 hours 35 minutes. He was Walton Athletic Club's best runner, a fact discovered when he passed the group while running alone. When asked why he ran so hard in training he replied:
Due to the problems of counterfactual history, it is hard to estimate the precise effect Ultra intelligence had on the war. However, official war historian Harry Hinsley estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives.
At the end of the war, a memo was sent to all those who had worked at Bletchley Park, reminding them that the code of silence dictated by the Official Secrets Act did not end with the war but would continue indefinitely. Thus, even though Turing was appointed an Officer of the Order of the British Empire (OBE) in 1946 by King George VI for his wartime services, his work remained secret for many years.
Bombe
Within weeks of arriving at Bletchley Park, Turing had specified an electromechanical machine called the bombe, which could break Enigma more effectively than the Polish bomba kryptologiczna, from which its name was derived. The bombe, with an enhancement suggested by mathematician Gordon Welchman, became one of the primary tools, and the major automated one, used to attack Enigma-enciphered messages.
The bombe searched for possible correct settings used for an Enigma message (i.e., rotor order, rotor settings and plugboard settings) using a suitable crib: a fragment of probable plaintext. For each possible setting of the rotors (which had on the order of 10^19 states, or 10^22 states for the four-rotor U-boat variant), the bombe performed a chain of logical deductions based on the crib, implemented electromechanically.
The bombe detected when a contradiction had occurred and ruled out that setting, moving on to the next. Most of the possible settings would cause contradictions and be discarded, leaving only a few to be investigated in detail. A contradiction would occur when an enciphered letter would be turned back into the same plaintext letter, which was impossible with the Enigma. The first bombe was installed on 18 March 1940.
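The elimination-by-contradiction idea can be illustrated with a far simpler cipher than Enigma. The toy below tests every key of a Caesar shift against a crib and keeps only the keys that produce no contradiction; it is only an analogy for the bombe's search strategy, not a model of Enigma or of the bombe's electromechanical logic:

```python
import string

ALPHABET = string.ascii_uppercase

def caesar_encrypt(plaintext, shift):
    """Shift each letter of an uppercase plaintext by a fixed amount."""
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in plaintext)

def surviving_keys(ciphertext, crib, crib_position):
    """Keep only the shifts under which the crib could sit at the given
    position; every other 'setting' is ruled out by contradiction."""
    keys = []
    for shift in range(26):
        segment = ciphertext[crib_position:crib_position + len(crib)]
        if caesar_encrypt(crib, shift) == segment:
            keys.append(shift)
    return keys

ciphertext = caesar_encrypt("WEATHERREPORTFOLLOWS", 7)
print(surviving_keys(ciphertext, "WEATHER", 0))  # [7] - only one setting survives
```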
Action This Day
By late 1941, Turing and his fellow cryptanalysts Gordon Welchman, Hugh Alexander and Stuart Milner-Barry were frustrated. Building on the work of the Poles, they had set up a good working system for decrypting Enigma signals, but their limited staff and bombes meant they could not translate all the signals. In the summer, they had considerable success, and shipping losses had fallen to under 100,000 tons a month; however, they badly needed more resources to keep abreast of German adjustments. They had tried to get more people and fund more bombes through the proper channels, but had failed.
On 28 October they wrote directly to Winston Churchill explaining their difficulties, with Turing as the first named. They emphasised how small their need was compared with the vast expenditure of men and money by the forces and compared with the level of assistance they could offer to the forces. As Andrew Hodges, biographer of Turing, later wrote, "This letter had an electric effect." Churchill wrote a memo to General Ismay, which read: "ACTION THIS DAY. Make sure they have all they want on extreme priority and report to me that this has been done." On 18 November, the chief of the secret service reported that every possible measure was being taken. The cryptographers at Bletchley Park did not know of the Prime Minister's response, but as Milner-Barry recalled, "All that we did notice was that almost from that day the rough ways began miraculously to be made smooth." More than two hundred bombes were in operation by the end of the war.
Hut 8 and the naval Enigma
Turing decided to tackle the particularly difficult problem of cracking the German naval use of Enigma "because no one else was doing anything about it and I could have it to myself". In December 1939, Turing solved the essential part of the naval indicator system, which was more complex than the indicator systems used by the other services.
That same night, he also conceived of the idea of Banburismus, a sequential statistical technique (what Abraham Wald later called sequential analysis) to assist in breaking the naval Enigma, "though I was not sure that it would work in practice, and was not, in fact, sure until some days had actually broken". For this, he invented a measure of weight of evidence that he called the ban. Banburismus could rule out certain sequences of the Enigma rotors, substantially reducing the time needed to test settings on the bombes. Later this sequential process of accumulating sufficient weight of evidence using decibans (one tenth of a ban) was used in cryptanalysis of the Lorenz cipher.
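A ban is a base-10 logarithmic unit for a likelihood ratio (and a deciban one tenth of that), so weights of evidence from independent observations simply add. A minimal sketch, with probabilities invented purely for illustration:

```python
import math

def decibans(p_given_hypothesis: float, p_given_alternative: float) -> float:
    """Weight of evidence in decibans: 10 * log10 of the likelihood ratio."""
    return 10 * math.log10(p_given_hypothesis / p_given_alternative)

# Three independent observations, each slightly favouring the hypothesis
# (e.g. that two messages share a rotor setting); their decibans just add.
observations = [(0.08, 0.05), (0.07, 0.05), (0.10, 0.05)]
total = sum(decibans(p_h, p_a) for p_h, p_a in observations)
print(f"{total:.1f} decibans in favour of the hypothesis")  # ~6.5
```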
Turing travelled to the United States in November 1942 and worked with US Navy cryptanalysts on the naval Enigma and bombe construction in Washington. He also visited their Computing Machine Laboratory in Dayton, Ohio.
Turing's reaction to the American bombe design was far from enthusiastic:
During this trip, he also assisted at Bell Labs with the development of secure speech devices. He returned to Bletchley Park in March 1943. During his absence, Hugh Alexander had officially assumed the position of head of Hut 8, although Alexander had been de facto head for some time (Turing having little interest in the day-to-day running of the section). Turing became a general consultant for cryptanalysis at Bletchley Park.
Alexander wrote of Turing's contribution:
Turingery
In July 1942, Turing devised a technique termed Turingery (or jokingly Turingismus) for use against the Lorenz cipher messages produced by the Germans' new Geheimschreiber (secret writer) machine. This was a teleprinter rotor cipher attachment codenamed Tunny at Bletchley Park. Turingery was a method of wheel-breaking, i.e., a procedure for working out the cam settings of Tunny's wheels. He also introduced the Tunny team to Tommy Flowers who, under the guidance of Max Newman, went on to build the Colossus computer, the world's first programmable digital electronic computer, which replaced a simpler prior machine (the Heath Robinson), and whose superior speed allowed the statistical decryption techniques to be applied usefully to the messages. Some have mistakenly said that Turing was a key figure in the design of the Colossus computer. Turingery and the statistical approach of Banburismus undoubtedly fed into the thinking about cryptanalysis of the Lorenz cipher, but he was not directly involved in the Colossus development.
Delilah
Following his work at Bell Labs in the US, Turing pursued the idea of electronic enciphering of speech in the telephone system. In the latter part of the war, he moved to work for the Secret Service's Radio Security Service (later HMGCC) at Hanslope Park. At the park, he further developed his knowledge of electronics with the assistance of REME officer Donald Bayley. Together they undertook the design and construction of a portable secure voice communications machine codenamed Delilah. The machine was intended for different applications, but it lacked the capability for use with long-distance radio transmissions. In any case, Delilah was completed too late to be used during the war. Though the system worked fully, with Turing demonstrating it to officials by encrypting and decrypting a recording of a Winston Churchill speech, Delilah was not adopted for use. Turing also consulted with Bell Labs on the development of SIGSALY, a secure voice system that was used in the later years of the war.
Early computers and the Turing test
Between 1945 and 1947, Turing lived in Hampton, London, while he worked on the design of the ACE (Automatic Computing Engine) at the National Physical Laboratory (NPL). He presented a paper on 19 February 1946, which was the first detailed design of a stored-program computer. Von Neumann's incomplete First Draft of a Report on the EDVAC had predated Turing's paper, but it was much less detailed and, according to John R. Womersley, Superintendent of the NPL Mathematics Division, it "contains a number of ideas which are Dr. Turing's own".
Although ACE was a feasible design, the effect of the Official Secrets Act surrounding the wartime work at Bletchley Park made it impossible for Turing to explain the basis of his analysis of how a computer installation involving human operators would work. This led to delays in starting the project and he became disillusioned. In late 1947 he returned to Cambridge for a sabbatical year during which he produced a seminal work on Intelligent Machinery that was not published in his lifetime. While he was at Cambridge, the Pilot ACE was being built in his absence. It executed its first program on 10 May 1950, and a number of later computers around the world owe much to it, including the English Electric DEUCE and the American Bendix G-15. The full version of Turing's ACE was not built until after his death.
According to the memoirs of the German computer pioneer Heinz Billing from the Max Planck Institute for Physics, published by Genscher, Düsseldorf, there was a meeting between Turing and Konrad Zuse. It took place in Göttingen in 1947. The interrogation had the form of a colloquium. Participants were Womersley, Turing, Porter from England and a few German researchers like Zuse, Walther, and Billing (for more details see Herbert Bruderer, Konrad Zuse und die Schweiz).
In 1948, Turing was appointed reader in the Mathematics Department at the Victoria University of Manchester. He lived at "Copper Folly", 43 Adlington Road, in Wilmslow. A year later, he became deputy director of the Computing Machine Laboratory, where he worked on software for one of the earliest stored-program computers—the Manchester Mark 1. Turing wrote the first version of the Programmer's Manual for this machine, and was recruited by Ferranti as a consultant in the development of their commercialised machine, the Ferranti Mark 1. He continued to be paid consultancy fees by Ferranti until his death. During this time, he continued to do more abstract work in mathematics, and in "Computing Machinery and Intelligence" (Mind, October 1950), Turing addressed the problem of artificial intelligence, and proposed an experiment that became known as the Turing test, an attempt to define a standard for a machine to be called "intelligent". The idea was that a computer could be said to "think" if a human interrogator could not tell it apart, through conversation, from a human being. In the paper, Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one to simulate a child's mind and then to subject it to a course of education. A reversed form of the Turing test is widely used on the Internet; the CAPTCHA test is intended to determine whether the user is a human or a computer.
In 1948, Turing, working with his former undergraduate colleague, D. G. Champernowne, began writing a chess program for a computer that did not yet exist. By 1950, the program was completed and dubbed the Turochamp. In 1952, he tried to implement it on a Ferranti Mark 1, but the computer lacked enough power to execute the program. Instead, Turing "ran" the program by flipping through the pages of the algorithm and carrying out its instructions on a chessboard, taking about half an hour per move. The game was recorded. According to Garry Kasparov, Turing's program "played a recognizable game of chess". The program lost to Turing's colleague Alick Glennie, although it is said that it won a game against Champernowne's wife, Isabel.
His Turing test was a significant, characteristically provocative, and lasting contribution to the debate regarding artificial intelligence, which continues after more than half a century.
Pattern formation and mathematical biology
When Turing was 39 years old in 1951, he turned to mathematical biology, finally publishing his masterpiece "The Chemical Basis of Morphogenesis" in January 1952. He was interested in morphogenesis, the development of patterns and shapes in biological organisms. He suggested that a system of chemicals reacting with each other and diffusing across space, termed a reaction–diffusion system, could account for "the main phenomena of morphogenesis". He used systems of partial differential equations to model catalytic chemical reactions. For example, if a catalyst A is required for a certain chemical reaction to take place, and if the reaction produced more of the catalyst A, then we say that the reaction is autocatalytic, and there is positive feedback that can be modelled by nonlinear differential equations. Turing discovered that patterns could be created if the chemical reaction not only produced catalyst A, but also produced an inhibitor B that slowed down the production of A. If A and B then diffused through the container at different rates, there could be some regions where A dominated and some where B did. To calculate the extent of this, Turing would have needed a powerful computer, but these were not so freely available in 1951, so he had to use linear approximations to solve the equations by hand. These calculations gave the right qualitative results, and produced, for example, a uniform mixture that oddly enough had regularly spaced fixed red spots. The Russian biochemist Boris Belousov had performed experiments with similar results, but could not get his papers published because of the contemporary prejudice that any such thing violated the second law of thermodynamics. Belousov was not aware of Turing's paper in the Philosophical Transactions of the Royal Society.
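The activator–inhibitor mechanism described above can be illustrated with a short numerical sketch. The code below is not Turing's own model or calculation: it integrates the Gray–Scott reaction–diffusion system, a modern stand-in chosen only for illustration, on a one-dimensional grid, and the parameter values are assumptions picked to produce a pattern. It shows how two chemicals that react and diffuse at different rates can turn a nearly uniform mixture into regularly spaced structure.

```python
import numpy as np

# Minimal 1-D reaction-diffusion sketch (Gray-Scott model used as a stand-in
# for an activator-inhibitor system; parameters are illustrative assumptions,
# not taken from Turing's 1952 paper).
n, steps, dt = 200, 20000, 1.0
Da, Db = 0.16, 0.08          # unequal diffusion rates are essential for patterning
feed, kill = 0.035, 0.060    # assumed feed and removal rates

a = np.ones(n)
b = np.zeros(n)
b[n // 2 - 5 : n // 2 + 5] = 0.5          # small perturbation of a uniform state
rng = np.random.default_rng(0)
b += 0.01 * rng.random(n)

def laplacian(u):
    # Periodic 1-D Laplacian via nearest neighbours.
    return np.roll(u, 1) - 2 * u + np.roll(u, -1)

for _ in range(steps):
    reaction = a * b * b
    a += dt * (Da * laplacian(a) - reaction + feed * (1 - a))
    b += dt * (Db * laplacian(b) + reaction - (kill + feed) * b)

# Regions where b is concentrated alternate with regions where it is absent,
# i.e. a regularly spaced pattern has formed from near-uniform conditions.
print(np.round(b[::10], 2))
```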
Although published before the structure and role of DNA was understood, Turing's work on morphogenesis remains relevant today and is considered a seminal piece of work in mathematical biology. One of the early applications of Turing's paper was the work by James Murray explaining spots and stripes on the fur of cats, large and small. Further research in the area suggests that Turing's work can partially explain the growth of "feathers, hair follicles, the branching pattern of lungs, and even the left-right asymmetry that puts the heart on the left side of the chest". In 2012, Sheth, et al. found that in mice, removal of Hox genes causes an increase in the number of digits without an increase in the overall size of the limb, suggesting that Hox genes control digit formation by tuning the wavelength of a Turing-type mechanism. Later papers were not available until Collected Works of A. M. Turing was published in 1992.
A study conducted in 2023 confirmed Turing's mathematical model hypothesis. Presented by the American Physical Society, the experiment involved growing chia seeds in even layers within trays, later adjusting the available moisture. Researchers experimentally tweaked the factors which appear in the Turing equations, and, as a result, patterns resembling those seen in natural environments emerged. This is believed to be the first time that experiments with living vegetation have verified Turing's mathematical insight.
Personal life
Treasure
In the 1940s, Turing became worried about losing his savings in the event of a German invasion. To protect them, he bought two silver bars worth £250 (in 2022, £8,000 adjusted for inflation, or £48,000 at spot price) and buried them in a wood near Bletchley Park. Upon returning to dig them up, Turing found that he was unable to break his own code describing where exactly he had hidden them. This, along with the fact that the area had been renovated, meant that he never recovered the silver.
Engagement
In 1941, Turing proposed marriage to Hut 8 colleague Joan Clarke, a fellow mathematician and cryptanalyst, but their engagement was short-lived. After admitting his homosexuality to his fiancée, who was reportedly "unfazed" by the revelation, Turing decided that he could not go through with the marriage.
Homosexuality and indecency conviction
In December 1951, Turing met Arnold Murray, a 19-year-old unemployed man. Turing was walking along Manchester's Oxford Road when he met Murray just outside the Regal Cinema and invited him to lunch. The two agreed to meet again and in January 1952 began an intimate relationship. On 23 January, Turing's house in Wilmslow was burgled. Murray told Turing that he and the burglar were acquainted, and Turing reported the crime to the police. During the investigation, he acknowledged a sexual relationship with Murray. Homosexual acts were criminal offences in the United Kingdom at that time, and both men were charged with "gross indecency" under Section 11 of the Criminal Law Amendment Act 1885. Initial committal proceedings for the trial were held on 27 February during which Turing's solicitor "reserved his defence", i.e., did not argue or provide evidence against the allegations. The proceedings were held at the Sessions House in Knutsford.
On the advice of his brother and his own solicitor, Turing entered a plea of guilty. The case, Regina v. Turing and Murray, was brought to trial on 31 March 1952. Turing was convicted and given a choice between imprisonment and probation. His probation would be conditional on his agreement to undergo hormonal physical changes designed to reduce libido, known as "chemical castration". He accepted the option of injections of what was then called stilboestrol (now known as diethylstilbestrol or DES), a synthetic oestrogen; this feminization of his body continued over the course of one year. The treatment rendered Turing impotent and caused breast tissue to form. In a letter, Turing wrote that "no doubt I shall emerge from it all a different man, but quite who I've not found out". Murray was given a conditional discharge.
Turing's conviction led to the removal of his security clearance and barred him from continuing with his cryptographic consultancy for the Government Communications Headquarters (GCHQ), the British signals intelligence agency that had evolved from GC&CS in 1946, though he kept his academic job. His trial took place only months after the defection to the Soviet Union of Guy Burgess and Donald Maclean in summer 1951 after which the Foreign Office started to consider anyone known to be homosexual as a potential security risk.
Turing was denied entry into the United States after his conviction in 1952, but was free to visit other European countries. In the summer of 1952 he visited Norway, which was more tolerant of homosexuals. Among the various men he met there was one named Kjell Carlson. Kjell intended to visit Turing in the UK, but the authorities intercepted his postcard detailing the travel arrangements and deported him before the two could meet. It was also during this time that Turing started consulting a psychiatrist, Dr Franz Greenbaum, with whom he got on well and who subsequently became a family friend.
Death
On 8 June 1954, at his house at 43 Adlington Road, Wilmslow, Turing's housekeeper found him dead. A post mortem held that evening determined that he had died the previous day, at age 41, with cyanide poisoning cited as the cause of death. When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide, it was speculated that this was the means by which Turing had consumed a fatal dose.
Turing's brother, John, identified the body the following day and took the advice given by Dr. Greenbaum to accept the verdict of the inquest, as there was little prospect of establishing that the death was accidental. The inquest, held the following day, determined the cause of death to be suicide. Turing's remains were cremated at Woking Crematorium two days later, on 12 June 1954, with only his mother, brother, and Lyn Newman attending, and his ashes were scattered in the gardens of the crematorium, just as his father's had been. Turing's mother was on holiday in Italy at the time of his death and returned home after the inquest. She never accepted the verdict of suicide.
Philosopher Jack Copeland has questioned various aspects of the coroner's historical verdict. He suggested an alternative explanation for the cause of Turing's death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten. Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) "with good humour" and had shown no sign of despondency before his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend. Turing's mother believed that the ingestion was accidental, resulting from her son's careless storage of laboratory chemicals.
Turing's biographer Andrew Hodges theorised that Turing deliberately left the nature of his death ambiguous in order to shield his mother from the knowledge that he had killed himself. Doubts on the suicide thesis have also been cast by John W. Dawson Jr. who, in his review of Hodges' book, recalls "Turing's vulnerable position in the Cold War political climate" and points out that "Turing was found dead by a maid, who discovered him 'lying neatly in his bed'—hardly what one would expect of a man fighting for life against the suffocation induced by cyanide poisoning". Turing had given no hint of suicidal inclinations to his friends and had made no effort to put his affairs in order.
Hodges and a later biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt's words) he took "an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew".
It has also been suggested that Turing's belief in fortune-telling may have caused his depressed mood. As a youth, Turing had been told by a fortune-teller that he would be a genius. In mid-May 1954, shortly before his death, Turing again decided to consult a fortune-teller during a day-trip to St Annes-on-Sea with the Greenbaum family. According to the Greenbaums' daughter, Barbara:
Government apology and pardon
In August 2009, British programmer John Graham-Cumming started a petition urging the British government to apologise for Turing's prosecution as a homosexual. The petition received more than 30,000 signatures. The prime minister, Gordon Brown, acknowledged the petition, releasing a statement on 10 September 2009 apologising and describing the treatment of Turing as "appalling":
In December 2011, William Jones and his member of Parliament, John Leech, created an e-petition requesting that the British government pardon Turing for his conviction of "gross indecency":
The petition gathered over 37,000 signatures, and was submitted to Parliament by the Manchester MP John Leech but the request was discouraged by Justice Minister Lord McNally, who said:
John Leech, the MP for Manchester Withington (2005–15), submitted several bills to Parliament and led a high-profile campaign to secure the pardon. Leech made the case in the House of Commons that Turing's contribution to the war made him a national hero and that it was "ultimately just embarrassing" that the conviction still stood. Leech continued to take the bill through Parliament and campaigned for several years, gaining the public support of numerous leading scientists, including Stephen Hawking. At the British premiere of a film based on Turing's life, The Imitation Game, the producers thanked Leech for bringing the topic to public attention and securing Turing's pardon. Leech is now regularly described as the "architect" of Turing's pardon and, subsequently, of the Alan Turing Law, which went on to secure pardons for 75,000 other men and women convicted of similar crimes.
On 26 July 2012, a bill was introduced in the House of Lords to grant a statutory pardon to Turing for offences under section 11 of the Criminal Law Amendment Act 1885, of which he was convicted on 31 March 1952. Later that year, in a letter to The Daily Telegraph, the physicist Stephen Hawking and 10 other signatories, including the Astronomer Royal Lord Rees, President of the Royal Society Sir Paul Nurse, Lady Trumpington (who worked for Turing during the war) and Lord Sharkey (the bill's sponsor), called on Prime Minister David Cameron to act on the pardon request. The government indicated it would support the bill, and it passed its third reading in the House of Lords in October.
At the bill's second reading in the House of Commons on 29 November 2013, Conservative MP Christopher Chope objected to the bill, delaying its passage. The bill was due to return to the House of Commons on 28 February 2014, but before the bill could be debated in the House of Commons, the government elected to proceed under the royal prerogative of mercy. On 24 December 2013, Queen Elizabeth II signed a pardon for Turing's conviction for "gross indecency", with immediate effect. Announcing the pardon, Lord Chancellor Chris Grayling said Turing deserved to be "remembered and recognised for his fantastic contribution to the war effort" and not for his later criminal conviction. The Queen pronounced Turing pardoned in August 2014. It was only the fourth royal pardon granted since the conclusion of the Second World War. Pardons are normally granted only when the person is technically innocent, and a request has been made by the family or other interested party; neither condition was met in regard to Turing's conviction.
In September 2016, the government announced its intention to expand this retroactive exoneration to other men convicted of similar historical indecency offences, in what was described as an "Alan Turing law". The Alan Turing law is now an informal term for the law in the United Kingdom, contained in the Policing and Crime Act 2017, which serves as an amnesty law to retroactively pardon men who were cautioned or convicted under historical legislation that outlawed homosexual acts. The law applies in England and Wales.
On 19 July 2023, following an apology to LGBT veterans from the UK Government, Defence Secretary Ben Wallace suggested Turing should be honoured with a permanent statue on the fourth plinth of Trafalgar Square, describing Turing as "probably the greatest war hero, in my book, of the Second World War, [whose] achievements shortened the war, saved thousands of lives, helped defeat the Nazis. And his story is a sad story of a society and how it treated him."
Publications
See also
Legacy of Alan Turing
List of things named after Alan Turing
References
Notes
Citations
Works cited
Further reading
Articles
Books
(originally published in 1983); basis of the film The Imitation Game
Turing's mother, who survived him by many years, wrote this 157-page biography of her son, glorifying his life. It was published in 1959, and so could not cover his war work. Scarcely 300 copies were sold (Sara Turing to Lyn Newman, 1967, Library of St John's College, Cambridge). The six-page foreword by Lyn Irvine includes reminiscences and is more frequently quoted. It was re-published by Cambridge University Press in 2012, to honour the centenary of his birth, and included a new foreword by Martin Davis, as well as a never-before-published memoir by Turing's older brother John F. Turing.
(originally published in 1959 by W. Heffer & Sons, Ltd)
This 1986 Hugh Whitemore play tells the story of Turing's life and death. In the original West End and Broadway runs, Derek Jacobi played Turing and he recreated the role in a 1997 television film based on the play made jointly by the BBC and WGBH, Boston. The play is published by Amber Lane Press, Oxford, ASIN: B000B7TM0Q
External links
Oral history interview with Nicholas C. Metropolis, Charles Babbage Institute, University of Minnesota. Metropolis was the first director of computing services at Los Alamos National Laboratory; topics include the relationship between Turing and John von Neumann
How Alan Turing Cracked The Enigma Code Imperial War Museums
Alan Turing Year
CiE 2012: Turing Centenary Conference
Science in the Making Alan Turing's papers in the Royal Society's archives
Alan Turing site maintained by Andrew Hodges including a short biography
AlanTuring.net – Turing Archive for the History of Computing by Jack Copeland
The Turing Digital Archive – contains scans of some unpublished documents and material from the King's College, Cambridge archive
Alan Turing Papers – University of Manchester Library, Manchester
Sherborne School Archives – holds papers relating to Turing's time at Sherborne School
Alan Turing plaques recorded on openplaques.org
Alan Turing archive on New Scientist
1912 births
1954 deaths
1954 suicides
20th-century atheists
20th-century English LGBTQ people
20th-century English mathematicians
20th-century English philosophers
Academics of the University of Manchester Institute of Science and Technology
Academics of the University of Manchester
Alumni of King's College, Cambridge
Bayesian statisticians
Bletchley Park people
British anti-fascists
British artificial intelligence researchers
British cryptographers
British people of World War II
Castrated people
Computability theorists
Computer chess people
Computer designers
English atheists
English computer scientists
English gay sportsmen
English inventors
English LGBTQ scientists
English logicians
English male long-distance runners
British male long-distance runners
English people of Irish descent
English people of Scottish descent
Enigma machine
Fellows of King's College, Cambridge
Fellows of the Royal Society
Foreign Office personnel of World War II
Former Protestants
Gay academics
Gay scientists
GCHQ people
History of computing in the United Kingdom
LGBTQ mathematicians
LGBTQ philosophers
LGBTQ track and field athletes
LGBTQ people who died by suicide
Officers of the Order of the British Empire
People convicted for homosexuality in the United Kingdom
People educated at Sherborne School
People from Maida Vale
People from Wilmslow
People who have received posthumous pardons
Princeton University alumni
Recipients of British royal pardons
Scientists of the National Physical Laboratory (United Kingdom)
Suicides by cyanide poisoning
Suicides in England
Theoretical biologists
Theoretical computer scientists | Alan Turing | [
"Technology",
"Biology"
] | 9,841 | [
"Bioinformatics",
"Theoretical biologists",
"History of computing",
"History of computing in the United Kingdom"
] |
1,209 | https://en.wikipedia.org/wiki/Area | Area is the measure of a region's size on a surface. The area of a plane region or plane area refers to the area of a shape or planar lamina, while surface area refers to the area of an open surface or the boundary of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analogue of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
Two different regions may have the same area (as in squaring the circle); by synecdoche, "area" sometimes is used to refer to the region, as in a "polygonal area".
The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m2), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number.
There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus.
For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable if one supposes the axiom of choice. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions.
Area can be defined through the use of axioms, defining it as a function from a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists.
Formal definition
An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function a from a collection M of special kinds of plane figures (termed measurable sets) to the set of real numbers, which satisfies the following properties:
For all S in M, a(S) ≥ 0.
If S and T are in M then so are S ∪ T and S ∩ T, and also a(S ∪ T) = a(S) + a(T) − a(S ∩ T).
If S and T are in M with S ⊆ T then T − S is in M and a(T − S) = a(T) − a(S).
If a set S is in M and S is congruent to T then T is also in M and a(T) = a(S).
Every rectangle R is in M. If the rectangle has length h and breadth k then a(R) = hk.
Let Q be a set enclosed between two step regions S and T. A step region is formed from a finite union of adjacent rectangles resting on a common base, i.e. S ⊆ Q ⊆ T. If there is a unique number c such that a(S) ≤ c ≤ a(T) for all such step regions S and T, then a(Q) = c.
It can be proved that such an area function actually exists.
Units
Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m2), square centimetres (cm2), square millimetres (mm2), square kilometres (km2), square feet (ft2), square yards (yd2), square miles (mi2), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units.
The SI unit of area is the square metre, which is considered an SI derived unit.
Conversions
Calculation of the area of a square whose length and width are 1 metre would be:
1 metre × 1 metre = 1 m2
and so, a rectangle with different sides (say length of 3 metres and width of 2 metres) would have an area in square units that can be calculated as:
3 metres × 2 metres = 6 m2. This is equivalent to 6 million square millimetres. Other useful conversions are:
1 square kilometre = 1,000,000 square metres
1 square metre = 10,000 square centimetres = 1,000,000 square millimetres
1 square centimetre = 100 square millimetres.
Non-metric units
In non-metric units, the conversion between two square units is the square of the conversion between the corresponding length units.
For example, since 1 foot = 12 inches,
the relationship between square feet and square inches is
1 square foot = 144 square inches,
where 144 = 122 = 12 × 12. Similarly:
1 square yard = 9 square feet
1 square mile = 3,097,600 square yards = 27,878,400 square feet
In addition, conversion factors include:
1 square inch = 6.4516 square centimetres
1 square foot = 0.09290304 square metres
1 square yard = 0.83612736 square metres
1 square mile = 2.589988110336 square kilometres
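Because every area factor is just the square of the corresponding length factor, these conversions are easy to derive programmatically. The following is a minimal sketch in plain Python; the exact metric definitions of the foot, yard and mile are the only inputs.

```python
# Area conversion factors as squares of exact length conversion factors.
FOOT_M = 0.3048          # 1 foot in metres (exact by definition)
YARD_M = 0.9144          # 1 yard in metres (exact)
MILE_KM = 1.609344       # 1 mile in kilometres (exact)

print("1 square foot =", FOOT_M ** 2, "square metres")       # 0.09290304
print("1 square yard =", YARD_M ** 2, "square metres")       # 0.83612736
print("1 square mile =", MILE_KM ** 2, "square kilometres")  # 2.589988110336
print("1 square foot =", 12 ** 2, "square inches")           # 144
print("1 square mile =", 1760 ** 2, "square yards")          # 3,097,600
```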
Other units including historical
There are several other common units for area. The are was the original unit of area in the metric system, with:
1 are = 100 square metres
Though the are has fallen out of use, the hectare is still commonly used to measure land:
1 hectare = 100 ares = 10,000 square metres = 0.01 square kilometres
Other uncommon metric units of area include the tetrad, the hectad, and the myriad.
The acre is also commonly used to measure land areas, where
1 acre = 4,840 square yards = 43,560 square feet.
An acre is approximately 40% of a hectare.
On the atomic scale, area is measured in units of barns, such that:
1 barn = 10−28 square meters.
The barn is commonly used in describing the cross-sectional area of interaction in nuclear physics.
In South Asia (mainly India), although the countries officially use SI units, many traditional units remain in common use. Each administrative division has its own area units, some of which share names but have different values. There is no official consensus on the values of the traditional units, so conversions between SI and traditional units may give different results depending on which reference is used.
Some traditional South Asian units that have fixed value:
1 Killa = 1 acre
1 Ghumaon = 1 acre
1 Kanal = 0.125 acre (1 acre = 8 kanal)
1 Decimal = 48.4 square yards
1 Chatak = 180 square feet
History
Circle area
In the 5th century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Eudoxus of Cnidus, also in the 5th century BCE, also found that the area of a disk is proportional to its radius squared.
Subsequently, Book I of Euclid's Elements dealt with equality of areas between two-dimensional figures. The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book Measurement of a Circle. (The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr2 for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons).
Triangle area
Quadrilateral area
In the 7th century CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. In 1842, the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral.
General polygon area
The development of Cartesian coordinates by René Descartes in the 17th century allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 19th century.
Areas determined using calculus
The development of integral calculus in the late 17th century provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects.
Area formulas
Polygon formulas
For a non-self-intersecting (simple) polygon whose n vertices have known Cartesian coordinates (xi, yi) (i = 0, 1, ..., n−1), the area is given by the surveyor's formula:
A = (1/2) |Σ (xi yi+1 − xi+1 yi)|, with the sum taken over i = 0, ..., n−1,
where, when i = n−1, the index i+1 is taken modulo n and so refers to 0.
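The surveyor's (shoelace) formula translates directly into code. The sketch below is a straightforward Python rendering of the sum above; the square used as a test case is an assumed example.

```python
def polygon_area(vertices):
    """Area of a simple (non-self-intersecting) polygon via the shoelace formula.

    vertices: list of (x, y) pairs in order around the boundary.
    """
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_next, y_next = vertices[(i + 1) % n]   # index i+1 taken modulo n
        total += x_i * y_next - x_next * y_i
    return abs(total) / 2.0

# Example: a 2 x 2 square has area 4.
print(polygon_area([(0, 0), (2, 0), (2, 2), (0, 2)]))  # 4.0
```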
Rectangles
The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length l and width w, the formula for the area is:
A = lw (rectangle).
That is, the area of the rectangle is the length multiplied by the width. As a special case, as in the case of a square, the area of a square with side length s is given by the formula:
A = s2 (square).
The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers.
Dissection, parallelograms, and triangles
Most other simple formulas for area follow from the method of dissection.
This involves cutting a shape into pieces, whose areas must sum to the area of the original shape.
For an example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in the figure to the left. If the triangle is moved to the other side of the trapezoid, then the resulting figure is a rectangle. It follows that the area of the parallelogram is the same as the area of the rectangle:
A = bh (parallelogram), where b is the base length and h is the height.
However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram:
A = bh/2 (triangle).
Similar arguments can be used to find area formulas for the trapezoid as well as more complicated polygons.
Area of curved shapes
Circles
The formula for the area of a circle (more properly called the area enclosed by a circle or the area of a disk) is based on a similar method. Given a circle of radius r, it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is r, and the width is half the circumference of the circle, or πr. Thus, the total area of the circle is πr2:
A = πr2 (circle).
Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly πr2, which is the area of the circle.
This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral: A = 2 ∫ √(r2 − x2) dx, taken from −r to r, which equals πr2.
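The definite integral above can be checked numerically. The sketch below approximates the integral of √(r2 − x2) with a simple midpoint rule and compares the result with πr2; the radius and number of subintervals are arbitrary choices.

```python
import math

def circle_area_by_integration(r, n=100_000):
    # Midpoint-rule approximation of  2 * integral_{-r}^{r} sqrt(r^2 - x^2) dx.
    dx = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx
        total += math.sqrt(r * r - x * x) * dx
    return 2 * total

r = 3.0
print(circle_area_by_integration(r))  # approximately 28.2743...
print(math.pi * r ** 2)               # 28.27433388...
```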
Ellipses
The formula for the area enclosed by an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes a and b the formula is: A = πab.
Non-planar surface area
Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out (see: developable surfaces). For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed.
The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work On the Sphere and Cylinder. The formula is:
A = 4πr2 (sphere),
where r is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus.
General formulas
Areas of 2-dimensional figures
A triangle: Bh/2 (where B is any side, and h is the distance from the line on which B lies to the other vertex of the triangle). This formula can be used if the height h is known. If the lengths of the three sides are known then Heron's formula can be used: A = √(s(s − a)(s − b)(s − c)), where a, b, c are the sides of the triangle, and s = (a + b + c)/2 is half of its perimeter. If an angle and its two included sides are given, the area is (1/2)ab sin(γ), where γ is the given angle and a and b are its included sides. If the triangle is graphed on a coordinate plane, a matrix can be used and the formula simplifies to the absolute value of (1/2)(x1y2 − x2y1 + x2y3 − x3y2 + x3y1 − x1y3). This formula is also known as the shoelace formula and is an easy way to solve for the area of a coordinate triangle by substituting the 3 points (x1,y1), (x2,y2), and (x3,y3). The shoelace formula can also be used to find the areas of other polygons when their vertices are known. Another approach for a coordinate triangle is to use calculus to find the area. (A numerical check of these formulas appears after this list.)
A simple polygon constructed on a grid of equal-distanced points (i.e., points with integer coordinates) such that all the polygon's vertices are grid points: A = i + b/2 − 1, where i is the number of grid points inside the polygon and b is the number of boundary points. This result is known as Pick's theorem.
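The triangle formulas and Pick's theorem above can both be verified with a few lines of code. This is a minimal sketch: it checks Heron's formula against the base-times-height formula for a 3-4-5 right triangle, and Pick's theorem against the coordinate (shoelace) form for a small lattice triangle; the test figures and hand-counted lattice points are assumed examples.

```python
import math

def heron(a, b, c):
    # Heron's formula: area of a triangle from its three side lengths.
    s = (a + b + c) / 2                      # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def shoelace_triangle(p1, p2, p3):
    # Coordinate (shoelace) form for a triangle with vertices p1, p2, p3.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * y2 - x2 * y1 + x2 * y3 - x3 * y2 + x3 * y1 - x1 * y3) / 2

print(heron(3, 4, 5))                              # 6.0, matches (1/2) * base * height
print(shoelace_triangle((0, 0), (4, 0), (0, 3)))   # 6.0 for the same 3-4-5 triangle

# Pick's theorem for the lattice triangle (0,0), (4,0), (0,3):
# 3 interior points and 8 boundary points, counted by hand for this example.
i, b = 3, 8
print(i + b / 2 - 1)                               # 6.0, agreeing with the areas above
```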
Area in calculus
The area between a positive-valued curve and the horizontal axis, measured between two values a and b (b is defined as the larger of the two values) on the horizontal axis, is given by the integral from a to b of the function that represents the curve:
A = ∫ f(x) dx, taken from a to b.
The area between the graphs of two functions is equal to the integral of one function, f(x), minus the integral of the other function, g(x):
A = ∫ (f(x) − g(x)) dx, taken from a to b,
where f(x) is the curve with the greater y-value.
An area bounded by a function r = r(θ) expressed in polar coordinates is:
A = (1/2) ∫ r2 dθ.
The area enclosed by a parametric curve u(t) = (x(t), y(t)) with coinciding endpoints u(t0) = u(t1) is given by the line integrals:
∮ x dy = −∮ y dx = (1/2) ∮ (x dy − y dx),
or the z-component of
(1/2) ∮ u × du.
(For details, see Green's theorem.) This is the principle of the planimeter mechanical device.
Bounded area between two quadratic functions
To find the bounded area between two quadratic functions, we first subtract one from the other, writing the difference as
f(x) − g(x) = ax2 + bx + c = a(x − α)(x − β),
where f(x) is the quadratic upper bound, g(x) is the quadratic lower bound, and α and β are the roots of the difference.
By the area integral formulas above and Vieta's formula, we can obtain that
A = |a|(β − α)3/6 = (b2 − 4ac)√(b2 − 4ac)/(6a2).
The above remains valid if one of the bounding functions is linear instead of quadratic.
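As a concrete check of this closed form, the sketch below compares it with direct numerical integration for two assumed parabolas; the specific functions and grid size are arbitrary choices.

```python
import math

# Assumed example: f(x) = -x**2 + 4 (upper) and g(x) = x**2 - 2*x (lower).
# Their difference is f - g = -2x^2 + 2x + 4, so a = -2, b = 2, c = 4.
a, b, c = -2.0, 2.0, 4.0

disc = b * b - 4 * a * c                              # 36; positive, so two intersections
closed_form = disc * math.sqrt(disc) / (6 * a * a)    # (b^2 - 4ac)^(3/2) / (6 a^2) = 9.0

# Numerical check: midpoint-rule integral of (f - g) between the roots x = -1 and x = 2.
n, lo, hi = 200_000, -1.0, 2.0
dx = (hi - lo) / n
numeric = sum((a * x * x + b * x + c) * dx
              for x in (lo + (k + 0.5) * dx for k in range(n)))

print(closed_form)   # 9.0
print(numeric)       # approximately 9.0
```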
Surface area of 3-dimensional figures
Cone: πr(r + √(r2 + h2)), where r is the radius of the circular base, and h is the height. That can also be rewritten as πr2 + πrl or πr(r + l), where r is the radius and l is the slant height of the cone. πr2 is the base area while πrl is the lateral surface area of the cone.
Cube: 6s2, where s is the length of an edge.
Cylinder: 2πr(r + h), where r is the radius of a base and h is the height. The 2πr(r + h) can also be rewritten as πd(r + h), where d is the diameter.
Prism: 2B + Ph, where B is the area of a base, P is the perimeter of a base, and h is the height of the prism.
Pyramid: B + PL/2, where B is the area of the base, P is the perimeter of the base, and L is the length of the slant.
Rectangular prism: 2(lw + lh + wh), where l is the length, w is the width, and h is the height.
General formula for surface area
The general formula for the surface area of the graph of a continuously differentiable function z = f(x, y), where (x, y) lies in a region D in the xy-plane with smooth boundary, is:
A = ∫∫ over D of √((∂f/∂x)2 + (∂f/∂y)2 + 1) dx dy.
An even more general formula for the area of the graph of a parametric surface in the vector form r = r(u, v), where r is a continuously differentiable vector function of (u, v) in a region D, is:
A = ∫∫ over D of |∂r/∂u × ∂r/∂v| du dv.
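As an illustration of the parametric formula, the sketch below approximates the surface area of a sphere by summing |∂r/∂u × ∂r/∂v| over a parameter grid and compares it with 4πr2; the grid resolution and radius are arbitrary choices, and the angular parametrisation is the usual one assumed here.

```python
import math

def sphere_surface_numeric(r, n_theta=1000):
    # Parametrise the sphere as r(theta, phi), theta in [0, pi], phi in [0, 2*pi].
    # Then |dr/dtheta x dr/dphi| = r^2 * sin(theta); the phi integral contributes
    # a factor of 2*pi because the integrand does not depend on phi.
    d_theta = math.pi / n_theta
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        total += r * r * math.sin(theta) * d_theta
    return 2 * math.pi * total

r = 2.0
print(sphere_surface_numeric(r))   # approximately 50.2654...
print(4 * math.pi * r ** 2)        # 50.26548245743669
```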
List of formulas
The above calculations show how to find the areas of many common shapes.
The areas of irregular (and thus arbitrary) polygons can be calculated using the "Surveyor's formula" (shoelace formula).
Relation of area to perimeter
The isoperimetric inequality states that, for a closed curve of length L (so the region it encloses has perimeter L) and for area A of the region that it encloses, 4πA ≤ L2,
and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter.
At the other extreme, a figure with given perimeter L could have an arbitrarily small area, as illustrated by a rhombus that is "tipped over" arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°.
For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius r. This can be seen from the area formula πr2 and the circumference formula 2πr.
The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side).
Fractals
Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal.
Area bisectors
There are an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle.
Any line through the midpoint of a parallelogram bisects the area.
All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle.
Optimization
Given a wire contour, the surface of least area spanning ("filling") it is a minimal surface. Familiar examples include soap bubbles.
The question of the filling area of the Riemannian circle remains open.
The circle has the largest area of any two-dimensional object having the same perimeter.
A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths.
A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral.
The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral.
The ratio of the area of the incircle to the area of an equilateral triangle, π/(3√3), is larger than that of any non-equilateral triangle.
The ratio of the area to the square of the perimeter of an equilateral triangle, 1/(12√3), is larger than that for any other triangle.
See also
Brahmagupta quadrilateral, a cyclic quadrilateral with integer sides, integer diagonals, and integer area.
Equiareal map
Heronian triangle, a triangle with integer sides and integer area.
List of triangle inequalities
One-seventh area triangle, an inner triangle with one-seventh the area of the reference triangle.
Routh's theorem, a generalization of the one-seventh area triangle.
Orders of magnitude—A list of areas by size.
Derivation of the formula of a pentagon
Planimeter, an instrument for measuring small areas, e.g. on maps.
Area of a convex quadrilateral
Robbins pentagon, a cyclic pentagon whose side lengths and area are all rational numbers.
References
External links | Area | [
"Physics",
"Mathematics"
] | 4,623 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Wikipedia categories named after physical quantities",
"Area"
] |
1,210 | https://en.wikipedia.org/wiki/Astronomical%20unit | The astronomical unit (symbol: au or AU) is a unit of length defined to be exactly equal to 149,597,870,700 metres. Historically, the astronomical unit was conceived as the average Earth-Sun distance (the average of Earth's aphelion and perihelion), before its modern redefinition in 2012.
The astronomical unit is used primarily for measuring distances within the Solar System or around other stars. It is also a fundamental component in the definition of another unit of astronomical length, the parsec. One au is equivalent to 499 light-seconds to within 10 parts per million.
History of symbol usage
A variety of unit symbols and abbreviations have been in use for the astronomical unit. In a 1976 resolution, the International Astronomical Union (IAU) had used the symbol A to denote a length equal to the astronomical unit. In the astronomical literature, the symbol AU is common. In 2006, the International Bureau of Weights and Measures (BIPM) had recommended ua as the symbol for the unit, from the French "unité astronomique". In the non-normative Annex C to ISO 80000-3:2006 (later withdrawn), the symbol of the astronomical unit was also ua.
In 2012, the IAU, noting "that various symbols are presently in use for the astronomical unit", recommended the use of the symbol "au". The scientific journals published by the American Astronomical Society and the Royal Astronomical Society subsequently adopted this symbol. In the 2014 revision and 2019 edition of the SI Brochure, the BIPM used the unit symbol "au". ISO 80000-3:2019, which replaces ISO 80000-3:2006, does not mention the astronomical unit.
Development of unit definition
Earth's orbit around the Sun is an ellipse. The semi-major axis of this elliptic orbit is defined to be half of the straight line segment that joins the perihelion and aphelion. The centre of the Sun lies on this straight line segment, but not at its midpoint. Because ellipses are well-understood shapes, measuring the points of its extremes defined the exact shape mathematically, and made possible calculations for the entire orbit as well as predictions based on observation. In addition, it mapped out exactly the largest straight-line distance that Earth traverses over the course of a year, defining times and places for observing the largest parallax (apparent shifts of position) in nearby stars. Knowing Earth's shift and a star's shift enabled the star's distance to be calculated. But all measurements are subject to some degree of error or uncertainty, and the uncertainties in the length of the astronomical unit only increased uncertainties in the stellar distances. Improvements in precision have always been a key to improving astronomical understanding. Throughout the twentieth century, measurements became increasingly precise and sophisticated, and ever more dependent on accurate observation of the effects described by Einstein's theory of relativity and upon the mathematical tools it used.
Improving measurements were continually checked and cross-checked by means of improved understanding of the laws of celestial mechanics, which govern the motions of objects in space. The expected positions and distances of objects at an established time are calculated (in au) from these laws, and assembled into a collection of data called an ephemeris. NASA Jet Propulsion Laboratory HORIZONS System provides one of several ephemeris computation services.
In 1976, to establish a more precise measure for the astronomical unit, the IAU formally adopted a new definition. Although directly based on the then-best available observational measurements, the definition was recast in terms of the then-best mathematical derivations from celestial mechanics and planetary ephemerides. It stated that "the astronomical unit of length is that length (A) for which the Gaussian gravitational constant (k) takes the value 0.01720209895 when the units of measurement are the astronomical units of length, mass and time". Equivalently, by this definition, one au is "the radius of an unperturbed circular Newtonian orbit about the sun of a particle having infinitesimal mass, moving with an angular frequency of 0.01720209895 radians per day"; or alternatively that length for which the heliocentric gravitational constant (the product GM☉) is equal to (0.01720209895)2 au3/d2, when the length is used to describe the positions of objects in the Solar System.
Subsequent explorations of the Solar System by space probes made it possible to obtain precise measurements of the relative positions of the inner planets and other objects by means of radar and telemetry. As with all radar measurements, these rely on measuring the time taken for photons to be reflected from an object. Because all photons move at the speed of light in vacuum, a fundamental constant of the universe, the distance of an object from the probe is calculated as the product of the speed of light and the measured time. However, for precision the calculations require adjustment for things such as the motions of the probe and object while the photons are transiting. In addition, the measurement of the time itself must be translated to a standard scale that accounts for relativistic time dilation. Comparison of the ephemeris positions with time measurements expressed in Barycentric Dynamical Time (TDB) leads to a value for the speed of light in astronomical units per day (of about 173.14 au/d). By 2009, the IAU had updated its standard measures to reflect improvements, and calculated the speed of light at about 173.1446 au/d (TDB).
In 1983, the CIPM modified the International System of Units (SI) to make the metre defined as the distance travelled in a vacuum by light in 1/299,792,458 of a second. This replaced the previous definition, valid between 1960 and 1983, which was that the metre equalled a certain number of wavelengths of a certain emission line of krypton-86. (The reason for the change was an improved method of measuring the speed of light.) The speed of light could then be expressed exactly as c0 = 299,792,458 m/s, a standard also adopted by the IERS numerical standards. From this definition and the 2009 IAU standard, the time for light to traverse an astronomical unit is found to be τA = 499.004784 seconds, which is slightly more than 8 minutes 19 seconds. By multiplication, the best IAU 2009 estimate was A = c0τA = 149,597,870,700 metres (with an uncertainty of about 3 metres), based on a comparison of Jet Propulsion Laboratory and IAA–RAS ephemerides.
In 2006, the BIPM reported a value of the astronomical unit as 149,597,870,691 metres (with an estimated uncertainty of 6 metres). In the 2014 revision of the SI Brochure, the BIPM recognised the IAU's 2012 redefinition of the astronomical unit as 149,597,870,700 metres.
This estimate was still derived from observation and measurements subject to error, and based on techniques that did not yet standardize all relativistic effects, and thus were not constant for all observers. In 2012, finding that the equalization of relativity alone would make the definition overly complex, the IAU simply used the 2009 estimate to redefine the astronomical unit as a conventional unit of length directly tied to the metre (exactly 149,597,870,700 metres). The new definition recognizes as a consequence that the astronomical unit has reduced importance, limited in use to a convenience in some applications.
1 astronomical unit
= 149,597,870,700 metres (by definition)
= 149,597,870.7 kilometres (exactly)
≈ 92,955,807 miles
≈ 499.005 light-seconds
≈ 1.58 × 10−5 light-years
≈ 4.848 × 10−6 parsecs
This definition makes the speed of light, defined as exactly 299,792,458 m/s, equal to exactly 299,792,458 × 86,400 ÷ 149,597,870,700 or about 173.1446 au/d, some 60 parts per trillion less than the 2009 estimate.
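The figures in the list above and the per-day speed of light follow from simple arithmetic on the defining constants. The sketch below uses only exactly defined constants plus the conventional Julian-year and parsec definitions; the constant names are chosen here for illustration.

```python
import math

AU_M = 149_597_870_700           # astronomical unit in metres (exact, 2012 definition)
C_M_PER_S = 299_792_458          # speed of light in m/s (exact)
DAY_S = 86_400                   # one day in seconds
JULIAN_YEAR_S = 365.25 * DAY_S   # Julian year, used to define the light-year
MILE_M = 1609.344                # international mile in metres (exact)

print(AU_M / C_M_PER_S)                    # ~499.005 s of light-travel time (8 min 19 s)
print(C_M_PER_S * DAY_S / AU_M)            # ~173.1446 au/d, the speed of light per day
print(AU_M / MILE_M)                       # ~92,955,807 miles
print(AU_M / (C_M_PER_S * JULIAN_YEAR_S))  # ~1.58e-5 light-years
print(math.pi / 648_000)                   # ~4.848e-6 parsecs (1 pc = 648000/pi au)
```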
Usage and significance
With the definitions used before 2012, the astronomical unit was dependent on the heliocentric gravitational constant, that is the product of the gravitational constant, G, and the solar mass, M☉. Neither G nor M☉ can be measured to high accuracy separately, but the value of their product is known very precisely from observing the relative positions of planets (Kepler's third law expressed in terms of Newtonian gravitation). Only the product GM☉ is required to calculate planetary positions for an ephemeris, so ephemerides are calculated in astronomical units and not in SI units.
The calculation of ephemerides also requires a consideration of the effects of general relativity. In particular, time intervals measured on Earth's surface (Terrestrial Time, TT) are not constant when compared with the motions of the planets: the terrestrial second (TT) appears to be longer near January and shorter near July when compared with the "planetary second" (conventionally measured in TDB). This is because the distance between Earth and the Sun is not fixed (it varies between about 147.1 million and 152.1 million kilometres) and, when Earth is closer to the Sun (perihelion), the Sun's gravitational field is stronger and Earth is moving faster along its orbital path. As the metre is defined in terms of the second and the speed of light is constant for all observers, the terrestrial metre appears to change in length compared with the "planetary metre" on a periodic basis.
The metre is defined to be a unit of proper length. Indeed, the International Committee for Weights and Measures (CIPM) notes that "its definition applies only within a spatial extent sufficiently small that the effects of the non-uniformity of the gravitational field can be ignored". As such, quoting a distance within the Solar System without specifying the frame of reference for the measurement is problematic. The 1976 definition of the astronomical unit was incomplete because it did not specify the frame of reference in which to apply the measurement, but proved practical for the calculation of ephemerides: a fuller definition that is consistent with general relativity was proposed, and "vigorous debate" ensued until August 2012 when the IAU adopted the current definition of 1 astronomical unit = 149,597,870,700 metres.
The astronomical unit is typically used for stellar system scale distances, such as the size of a protostellar disk or the heliocentric distance of an asteroid, whereas other units are used for other distances in astronomy. The astronomical unit is too small to be convenient for interstellar distances, where the parsec and light-year are widely used. The parsec (parallax arcsecond) is defined in terms of the astronomical unit, being the distance of an object with a parallax of one arcsecond. The light-year is often used in popular works, but is not an approved non-SI unit and is rarely used by professional astronomers.
When simulating a numerical model of the Solar System, the astronomical unit provides an appropriate scale that minimizes (overflow, underflow and truncation) errors in floating point calculations.
History
The book On the Sizes and Distances of the Sun and Moon, which is ascribed to Aristarchus, says the distance to the Sun is 18 to 20 times the distance to the Moon, whereas the true ratio is about 389. The latter estimate was based on the angle between the half-moon and the Sun, which he estimated as 87° (the true value being close to 89.85°). Depending on the distance that van Helden assumes Aristarchus used for the distance to the Moon, his calculated distance to the Sun would fall between 380 and 1,520 Earth radii.
Hipparchus gave an estimate of the distance of Earth from the Sun, quoted by Pappus as equal to 490 Earth radii. According to the conjectural reconstructions of Noel Swerdlow and G. J. Toomer, this was derived from his assumption of a "least perceptible" solar parallax of 7 arcminutes.
A Chinese mathematical treatise, the Zhoubi Suanjing (), shows how the distance to the Sun can be computed geometrically, using the different lengths of the noontime shadows observed at three places li apart and the assumption that Earth is flat.
According to Eusebius in the Praeparatio evangelica (Book XV, Chapter 53), Eratosthenes found the distance to the Sun to be "σταδιων μυριαδας τετρακοσιας και οκτωκισμυριας" (literally "of stadia myriads 400 and 80,000") but with the additional note that in the Greek text the grammatical agreement is between myriads (not stadia) on the one hand and both 400 and 80,000 on the other: all three are accusative plural, while σταδιων is genitive plural ("of stadia"). All three words (or all four including stadia) are inflected. This has been translated either as 4,080,000 stadia (1903 translation by Edwin Hamilton Gifford), or as 804,000,000 stadia (edition of Édouard des Places, dated 1974–1991). Using the Greek stadium of 185 to 190 metres, the former translation comes to 754,800 to 775,200 kilometres, which is far too low, whereas the second translation comes to 148.7 to 152.8 billion metres (accurate within 2%).
In the 2nd century CE, Ptolemy estimated the mean distance of the Sun as 1,210 times Earth's radius. To determine this value, Ptolemy started by measuring the Moon's parallax, finding what amounted to a horizontal lunar parallax of 1° 26′, which was much too large. He then derived a maximum lunar distance of 64 1/6 Earth radii. Because of cancelling errors in his parallax figure, his theory of the Moon's orbit, and other factors, this figure was approximately correct. He then measured the apparent sizes of the Sun and the Moon and concluded that the apparent diameter of the Sun was equal to the apparent diameter of the Moon at the Moon's greatest distance, and from records of lunar eclipses, he estimated this apparent diameter, as well as the apparent diameter of the shadow cone of Earth traversed by the Moon during a lunar eclipse. Given these data, the distance of the Sun from Earth can be trigonometrically computed to be 1,210 Earth radii. This gives a ratio of solar to lunar distance of approximately 19, matching Aristarchus's figure. Although Ptolemy's procedure is theoretically workable, it is very sensitive to small changes in the data, so much so that changing a measurement by a few per cent can make the solar distance infinite.
After Greek astronomy was transmitted to the medieval Islamic world, astronomers made some changes to Ptolemy's cosmological model, but did not greatly change his estimate of the Earth–Sun distance. For example, in his introduction to Ptolemaic astronomy, al-Farghānī gave a mean solar distance of 1,170 Earth radii, whereas in his zij, al-Battānī used a mean solar distance of 1,108 Earth radii. Subsequent astronomers, such as al-Bīrūnī, used similar values. Later in Europe, Copernicus and Tycho Brahe also used comparable figures (1,142 and 1,150 Earth radii), and so Ptolemy's approximate Earth–Sun distance survived through the 16th century.
Johannes Kepler was the first to realize that Ptolemy's estimate must be significantly too low (according to Kepler, at least by a factor of three) in his Rudolphine Tables (1627). Kepler's laws of planetary motion allowed astronomers to calculate the relative distances of the planets from the Sun, and rekindled interest in measuring the absolute value for Earth (which could then be applied to the other planets). The invention of the telescope allowed far more accurate measurements of angles than is possible with the naked eye. Flemish astronomer Godefroy Wendelin repeated Aristarchus’ measurements in 1635, and found that Ptolemy's value was too low by a factor of at least eleven.
A somewhat more accurate estimate can be obtained by observing the transit of Venus. By measuring the transit in two different locations, one can accurately calculate the parallax of Venus and, from the relative distance of Earth and Venus from the Sun, the solar parallax (which cannot be measured directly due to the brightness of the Sun). Jeremiah Horrocks had attempted to produce an estimate based on his observation of the 1639 transit (published in 1662), giving a solar parallax of 15 arcseconds, similar to Wendelin's figure. The solar parallax α is related to the Earth–Sun distance as measured in Earth radii by A ≈ 1/sin(α), roughly 206,265/α Earth radii when α is expressed in arcseconds.
The smaller the solar parallax, the greater the distance between the Sun and Earth: a solar parallax of 15 arcseconds is equivalent to an Earth–Sun distance of about 13,750 Earth radii.
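The relation between solar parallax and Earth–Sun distance is easy to evaluate. The sketch below converts the parallax values quoted in this section (plus the modern figure) into distances in Earth radii using the small-angle form; the function name and the labelling are illustrative choices.

```python
import math

ARCSEC_PER_RADIAN = 180 * 3600 / math.pi      # about 206,265

def earth_sun_distance_in_earth_radii(parallax_arcsec):
    # Small-angle form of the relation quoted above: distance ~ 1 / parallax.
    return ARCSEC_PER_RADIAN / parallax_arcsec

for label, parallax in [("15 arcsec (Wendelin/Horrocks)", 15.0),
                        ("9.5 arcsec (Cassini/Richer)", 9.5),
                        ("8.6 arcsec (Huygens, Lalande)", 8.6),
                        ("8.794 arcsec (modern)", 8.794)]:
    print(label, round(earth_sun_distance_in_earth_radii(parallax)))
# 15 arcsec    -> ~13,751 Earth radii
# 9.5 arcsec   -> ~21,712
# 8.6 arcsec   -> ~23,984
# 8.794 arcsec -> ~23,455
```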
Christiaan Huygens believed that the distance was even greater: by comparing the apparent sizes of Venus and Mars, he estimated a value of about 24,000 Earth radii, equivalent to a solar parallax of 8.6 arcseconds. Although Huygens' estimate is remarkably close to modern values, it is often discounted by historians of astronomy because of the many unproven (and incorrect) assumptions he had to make for his method to work; the accuracy of his value seems to be based more on luck than good measurement, with his various errors cancelling each other out.
Jean Richer and Giovanni Domenico Cassini measured the parallax of Mars between Paris and Cayenne in French Guiana when Mars was at its closest to Earth in 1672. They arrived at a figure for the solar parallax of , equivalent to an Earth–Sun distance of about Earth radii. They were also the first astronomers to have access to an accurate and reliable value for the radius of Earth, which had been measured by their colleague Jean Picard in 1669 as toises. This same year saw another estimate for the astronomical unit by John Flamsteed, who accomplished it alone by measuring the Martian diurnal parallax. Another colleague, Ole Rømer, discovered the finite speed of light in 1676: the speed was so great that it was usually quoted as the time required for light to travel from the Sun to the Earth, or "light time per unit distance", a convention that is still followed by astronomers today.
A better method for observing Venus transits was devised by James Gregory and published in his Optica Promota (1663). It was strongly advocated by Edmond Halley and was applied to the transits of Venus observed in 1761 and 1769, and then again in 1874 and 1882. Transits of Venus occur in pairs, but less than one pair every century, and observing the transits in 1761 and 1769 was an unprecedented international scientific operation including observations by James Cook and Charles Green from Tahiti. Despite the Seven Years' War, dozens of astronomers were dispatched to observing points around the world at great expense and personal danger: several of them died in the endeavour. The various results were collated by Jérôme Lalande to give a figure for the solar parallax of . Karl Rudolph Powalky had made an estimate of in 1864.
Another method involved determining the constant of aberration. Simon Newcomb gave great weight to this method when deriving his widely accepted value of for the solar parallax (close to the modern value of ), although Newcomb also used data from the transits of Venus. Newcomb also collaborated with A. A. Michelson to measure the speed of light with Earth-based equipment; combined with the constant of aberration (which is related to the light time per unit distance), this gave the first direct measurement of the Earth–Sun distance in metres. Newcomb's value for the solar parallax (and for the constant of aberration and the Gaussian gravitational constant) were incorporated into the first international system of astronomical constants in 1896, which remained in place for the calculation of ephemerides until 1964. The name "astronomical unit" appears first to have been used in 1903.
The discovery of the near-Earth asteroid 433 Eros and its passage near Earth in 1900–1901 allowed a considerable improvement in parallax measurement. Another international project to measure the parallax of 433 Eros was undertaken in 1930–1931.
Direct radar measurements of the distances to Venus and Mars became available in the early 1960s. Along with improved measurements of the speed of light, these showed that Newcomb's values for the solar parallax and the constant of aberration were inconsistent with one another.
Developments
The unit distance A (the value of the astronomical unit in metres) can be expressed in terms of other astronomical constants:
A³ = G M☉ D² / k²
where G is the Newtonian constant of gravitation, M☉ is the solar mass, k is the numerical value of the Gaussian gravitational constant and D is the time period of one day.
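A rough numerical check of this relation is sketched below; the figures assumed for G and the solar mass are approximate reference values and are not taken from this article.
with Ada.Text_IO;
with Ada.Numerics.Long_Elementary_Functions;
procedure Unit_Distance is
   use Ada.Numerics.Long_Elementary_Functions;
   G     : constant Long_Float := 6.674_30E-11;      -- m**3 / (kg * s**2), approximate
   M_Sun : constant Long_Float := 1.988_47E+30;      -- kg, approximate
   K     : constant Long_Float := 0.017_202_098_95;  -- Gaussian gravitational constant
   D     : constant Long_Float := 86_400.0;           -- one day, in seconds
   --  A = (G * M_Sun * D**2 / K**2) ** (1/3), the cube root of the expression above.
   A : constant Long_Float := (G * M_Sun * D ** 2 / K ** 2) ** (1.0 / 3.0);
begin
   --  Prints a value close to 1.496E+11 metres.
   Ada.Text_IO.Put_Line ("Unit distance in metres:" & Long_Float'Image (A));
end Unit_Distance;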
The Sun is constantly losing mass by radiating away energy, so the orbits of the planets are steadily expanding outward from the Sun. This has led to calls to abandon the astronomical unit as a unit of measurement.
As the speed of light has an exact defined value in SI units and the Gaussian gravitational constant is fixed in the astronomical system of units, measuring the light time per unit distance is exactly equivalent to measuring the product G × M☉ in SI units. Hence, it is possible to construct ephemerides entirely in SI units, which is increasingly becoming the norm.
A 2004 analysis of radiometric measurements in the inner Solar System suggested that the secular increase in the unit distance was much larger than can be accounted for by solar radiation, + metres per century.
The measurements of the secular variations of the astronomical unit are not confirmed by other authors and are quite controversial.
Furthermore, since 2010, the astronomical unit has not been estimated by the planetary ephemerides.
Examples
The following table contains some distances given in astronomical units. It includes some examples with distances that are normally not given in astronomical units, because they are either too short or far too long. Distances normally change over time. Examples are listed by increasing distance.
See also
Orders of magnitude (length)
References
Further reading
External links
The IAU and astronomical units
Recommendations concerning Units (HTML version of the IAU Style Manual)
Chasing Venus, Observing the Transits of Venus
Transit of Venus
Celestial mechanics
Unit
Units of length | Astronomical unit | [
"Physics",
"Astronomy",
"Mathematics"
] | 4,477 | [
"Units of length",
"Quantity",
"Units of measurement in astronomy",
"Classical mechanics",
"Astrophysics",
"Celestial mechanics",
"Units of measurement"
] |
1,234 | https://en.wikipedia.org/wiki/Acoustic%20theory | Acoustic theory is a scientific field that relates to the description of sound waves. It derives from fluid dynamics. See acoustics for the engineering approach.
For sound waves of any magnitude of a disturbance in velocity, pressure, and density we have
In the case that the fluctuations in velocity, density, and pressure are small, we can approximate these as p = p₀ + p′(x, t) and ρ = ρ₀ + ρ′(x, t), with the velocity perturbation written as v.
Where v is the perturbed velocity of the fluid, p₀ is the pressure of the fluid at rest, p′ is the perturbed pressure of the system as a function of space and time, ρ₀ is the density of the fluid at rest, and ρ′ is the variance in the density of the fluid over space and time.
In the case that the velocity is irrotational (∇ × v = 0), we then have the acoustic wave equation that describes the system:
∂²p′/∂t² − c² ∇²p′ = 0
Where we have c, the speed of sound of the medium, defined through the pressure–density relation c² = ∂p/∂ρ evaluated at the undisturbed state.
Derivation for a medium at rest
Starting with the Continuity Equation and the Euler Equation:
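In their conventional form (for an inviscid fluid with no body forces, writing ρ for the density, v for the velocity and p for the pressure), these two equations read:
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,
\qquad
\rho \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} \right) = -\nabla p .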
If we take small perturbations of a constant pressure and density:
Then the equations of the system are
Noting that the equilibrium pressures and densities are constant, this simplifies to
A moving medium
Starting with
We can have these equations work for a moving medium by setting the total velocity to u + v, where u is the constant velocity that the whole fluid is moving at before being disturbed (equivalent to a moving observer) and v is the fluid velocity.
In this case the equations look very similar:
Note that setting u = 0 returns the equations at rest.
Linearized waves
Starting with the above given equations of motion for a medium at rest:
Let us now take v, p′, and ρ′ to all be small quantities.
In the case that we keep terms to first order, for the continuity equation, we have the term going to 0. This similarly applies for the density perturbation times the time derivative of the velocity. Moreover, the spatial components of the material derivative go to 0. We thus have, upon rearranging the equilibrium density:
Next, given that our sound wave occurs in an ideal fluid, the motion is adiabatic, and then we can relate the small change in the pressure to the small change in the density by
Under this condition, we see that we now have
Defining the speed of sound of the system:
Everything becomes
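In the notation used above (with c the speed of sound), the resulting linearized system reads:
\frac{\partial \rho'}{\partial t} + \rho_0 \, \nabla \cdot \mathbf{v} = 0,
\qquad
\rho_0 \frac{\partial \mathbf{v}}{\partial t} + \nabla p' = 0,
\qquad
p' = c^2 \rho' , \quad c^2 = \left( \frac{\partial p}{\partial \rho} \right)_s .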
For irrotational fluids
In the case that the fluid is irrotational, that is ∇ × v = 0, we can then write v = ∇φ for a velocity potential φ, and thus write our equations of motion as
The second equation tells us that
And the use of this equation in the continuity equation tells us that
This simplifies to
Thus the velocity potential obeys the wave equation in the limit of small disturbances. The boundary conditions required to solve for the potential come from the fact that the velocity of the fluid must be 0 normal to the fixed surfaces of the system.
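As an illustration of how such a wave equation is treated numerically, the sketch below integrates its one-dimensional form with an explicit finite-difference scheme and fixed (zero-value) ends; the wave speed, grid, time step and initial pulse are illustrative assumptions, not values taken from this article.
with Ada.Text_IO;
procedure Wave_1D is
   N  : constant := 201;                     -- number of grid points
   C  : constant Float := 343.0;             -- assumed wave speed (speed of sound in air, m/s)
   Dx : constant Float := 0.01;              -- grid spacing, m
   Dt : constant Float := 0.5 * Dx / C;      -- time step chosen so the scheme is stable (CFL < 1)
   R2 : constant Float := (C * Dt / Dx) ** 2;
   type Grid is array (1 .. N) of Float;
   U_Prev, U_Curr, U_Next : Grid := (others => 0.0);
begin
   U_Curr (N / 2) := 1.0;                    -- small initial pulse in the middle of the domain
   U_Prev := U_Curr;
   for Step in 1 .. 400 loop
      --  Standard second-order update: u(t+dt) = 2 u(t) - u(t-dt) + r^2 * (discrete d2u/dx2)
      for I in 2 .. N - 1 loop
         U_Next (I) := 2.0 * U_Curr (I) - U_Prev (I)
           + R2 * (U_Curr (I + 1) - 2.0 * U_Curr (I) + U_Curr (I - 1));
      end loop;
      U_Next (1) := 0.0;                     -- fixed (zero-value) boundaries
      U_Next (N) := 0.0;
      U_Prev := U_Curr;
      U_Curr := U_Next;
   end loop;
   Ada.Text_IO.Put_Line ("Centre value after 400 steps:" & Float'Image (U_Curr (N / 2)));
end Wave_1D;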
Taking the time derivative of this wave equation and multiplying all sides by the unperturbed density, and then using the fact that tells us that
Similarly, we saw that . Thus we can multiply the above equation appropriately and see that
Thus, the velocity potential, pressure, and density all obey the wave equation. Moreover, we only need to solve one such equation to determine all other three. In particular, we have
For a moving medium
Again, we can derive the small-disturbance limit for sound waves in a moving medium. Again, starting with
We can linearize these into
For irrotational fluids in a moving medium
Given that we saw that
If we make the previous assumptions of the fluid being ideal and the velocity being irrotational, then we have
Under these assumptions, our linearized sound equations become
Importantly, since is a constant, we have , and then the second equation tells us that
Or just that
Now, when we use this relation with the fact that , alongside cancelling and rearranging terms, we arrive at
We can write this in a familiar form as
This differential equation must be solved with the appropriate boundary conditions. Note that setting u = 0 returns us the wave equation. Regardless, upon solving this equation for a moving medium, we then have
See also
Acoustic attenuation
Sound
Fourier analysis
References
Fluid dynamics
Acoustics
Sound | Acoustic theory | [
"Physics",
"Chemistry",
"Engineering"
] | 806 | [
"Chemical engineering",
"Classical mechanics",
"Acoustics",
"Piping",
"Fluid dynamics"
] |
1,242 | https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29 | Ada is a structured, statically typed, imperative, and object-oriented high-level programming language, inspired by Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects, and non-determinism. Ada improves code safety and maintainability by using the compiler to find errors at compile time rather than leaving them to surface as runtime errors. Ada is an international technical standard, jointly defined by the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC). As of 2023, the standard, called Ada 2022 informally, is ISO/IEC 8652:2023.
Ada was originally designed by a team led by French computer scientist Jean Ichbiah of Honeywell under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages used by the DoD at that time. Ada was named after Ada Lovelace (1815–1852), who has been credited as the first computer programmer.
Features
Ada was originally designed for embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP).
Features of Ada include: strong typing, modular programming mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch.
The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as "or else" and "and then") to symbols (such as "||" and "&&"). Ada uses the basic arithmetical operators "+", "-", "*", and "/", but avoids using other symbols. Code blocks are delimited by words such as "declare", "begin", and "end", where the "end" (in most cases) is followed by the identifier of the block it closes (e.g., if ... end if, loop ... end loop). In the case of conditional blocks this avoids a dangling else that could pair with the wrong nested if-expression in other languages like C or Java.
Ada is designed for developing very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts.
A large number of compile-time checks are supported to help avoid bugs that would not be detectable until run-time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detecting many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) either during compile-time, or otherwise during run-time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc. and can provide warnings and useful suggestions on how to fix the error.
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is sometimes used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, military and space technology.
Ada's dynamic memory management is high-level and type-safe. Ada has no generic or untyped pointers; nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must occur via explicitly declared access types. Each access type has an associated storage pool that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for Non-Uniform Memory Access). It is even possible to declare several different access types that all designate the same type but use different storage pools. Also, the language provides for accessibility checks, both at compile time and at run time, that ensures that an access value cannot outlive the type of the object it points to.
Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada does support a limited form of region-based memory management; also, creative use of storage pools can provide for a limited form of automatic garbage collection, since destroying a storage pool also destroys all the objects in the pool.
A double-dash ("--"), resembling an em dash, denotes comment text. Comments stop at end of line; there is intentionally no way to make a comment span multiple lines, to prevent unclosed comments from accidentally voiding whole sections of source code. Disabling a whole block of code therefore requires the prefixing of each line (or column) individually with "--". While this clearly denotes disabled code by creating a column of repeated "--" down the page, it also renders the experimental dis/re-enablement of large blocks a more drawn-out process in editors without block commenting support.
The semicolon (";") is a statement terminator, and the null or no-operation statement is null;. A single ; without a statement to terminate is not allowed.
Unlike most ISO standards, the Ada language definition (known as the Ada Reference Manual or ARM, or sometimes the Language Reference Manual or LRM) is free content. Thus, it is a common reference for Ada programmers, not only programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written.
One notable free software tool that is used by many Ada programmers to aid them in writing Ada source code is the GNAT Programming Studio, and GNAT which is part of the GNU Compiler Collection.
Alire is a package and toolchain management tool for Ada.
History
In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with an original straw-man proposal the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996.
HOLWG crafted the Steelman language requirements , a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. The requirements were created by the United States Department of Defense in The Department of Defense Common High Order Language program in 1978. The predecessors of this document were called, in order, "Strawman", "Woodenman", "Tinman" and "Ironman". The requirements focused on the needs of embedded computer applications, and emphasised reliability, maintainability, and efficiency. Notably, they included exception handling facilities, run-time checking, and parallel computing.
It was concluded that no existing language met these criteria to a sufficient extent, so a contest was called to create a language that would be closer to fulfilling them. The design that won this contest became the Ada programming language. The resulting language followed the Steelman requirements closely, though not exactly.
Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (Honeywell, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at Honeywell, was chosen and given the name Ada—after Augusta Ada King, Countess of Lovelace, usually known as Ada Lovelace. This proposal was influenced by the language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, Tony Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook.
Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general purpose programming and not only defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain: Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow and tools primitive. Compiler vendors expended most of their efforts in passing the massive, language-conformance-testing, government-required Ada Compiler Validation Capability (ACVC) validation suite that was required in another novel feature of the Ada language effort.
The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL. Several commercial companies began offering Ada compilers and associated development tools, including Alsys, TeleSoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, Irvine Compiler, TLD Systems, and Verdix. Computer manufacturers who had a significant business in the defense, aerospace, or related industries, also offered Ada compilers and tools on their platforms; these included Concurrent Computer Corporation, Cray Research, Inc., Digital Equipment Corporation, Harris Computer Systems, and Siemens Nixdorf Informationssysteme AG.
In 1991, the US Department of Defense began to require the use of Ada (the Ada mandate) for all software, though exceptions to this rule were often granted. The Department of Defense Ada mandate was effectively removed in 1997, as the DoD began to embrace commercial off-the-shelf (COTS) technology. Similar requirements existed in other NATO countries: Ada was required for NATO systems involving command and control and other functions, and Ada was the mandated or preferred language for defense-related applications in countries such as Sweden, Germany, and Canada.
By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to fully exploiting Ada's abilities, including a tasking model that was different from what most real-time programmers were used to.
Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g., avionics and air traffic control, commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport and banking.
For example, the Primary Flight Control System, the fly-by-wire system software in the Boeing 777, was written in Ada, as were the fly-by-wire systems for the aerodynamically unstable Eurofighter Typhoon, Saab Gripen, Lockheed Martin F-22 Raptor and the DFCS replacement flight control system for the Grumman F-14 Tomcat. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g., the UK's next-generation Interim Future Area Control Tools Support () air traffic control system is designed and implemented using SPARK Ada.
It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and the metro suburban trains in Paris, London, Hong Kong and New York City.
The Ada 95 revision of the language went beyond the Steelman requirements, targeting general-purpose systems in addition to embedded ones, and adding features supporting object-oriented programming.
Standardization
Preliminary Ada can be found in ACM Sigplan Notices Vol 14, No 6, June 1979
Ada was first published in 1980 as an ANSI standard ANSI/MIL-STD 1815. As this very first version held many errors and inconsistencies , the revised edition was published in 1983 as ANSI/MIL-STD 1815A. Without any further changes, it became an ISO standard in 1987. This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes referred to also as Ada 87, from the date of its adoption by ISO. There is also a French translation; DIN translated it into German as DIN 66268 in 1988.
Ada 95, the joint ISO/IEC/ANSI standard ISO/IEC 8652:1995 was published in February 1995, making it the first ISO standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT Compiler. Presently, the GNAT Compiler is part of the GNU Compiler Collection.
Work has continued on improving and updating the technical content of the Ada language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment, ISO/IEC 8652:1995/Amd 1:2007 was published on March 9, 2007, commonly known as Ada 2005 because work on the new standard was finished that year.
At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada language and the submission of the reference manual to the ISO/IEC JTC 1/SC 22/WG 9 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for approval. ISO/IEC 8652:2012(see Ada 2012 RM) was published in December 2012, known as Ada 2012. A technical corrigendum, ISO/IEC 8652:2012/COR 1:2016, was published (see RM 2012 with TC 1).
On May 2, 2023, the Ada community saw the formal approval of publication of the Ada 2022 edition of the programming language standard.
Despite the names Ada 83, 95 etc., legally there is only one Ada standard, the one of the last ISO/IEC standard: with the acceptance of a new standard version, the previous one becomes withdrawn. The other names are just informal ones referencing a certain edition.
Other related standards include ISO/IEC 8651-3:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada.
Language constructs
Ada is an ALGOL-like programming language featuring control structures with reserved words such as if, then, else, while, for, and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, enumerations. Such constructs were in part inherited from or inspired by Pascal.
"Hello, world!" in Ada
A common example of a language's syntax is the Hello world program:
(hello.adb)
with Ada.Text_IO;
procedure Hello is
begin
Ada.Text_IO.Put_Line ("Hello, world!");
end Hello;
This program can be compiled by using the freely available open source compiler GNAT, by executing
gnatmake hello.adb
Data types
Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e., range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted.
Special types provided by the language are task types and protected types.
For example, a date might be represented as:
type Day_type is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);
type Date is
record
Day : Day_type;
Month : Month_type;
Year : Year_type;
end record;
Important to note: Day_type, Month_type, Year_type, Hours are incompatible types, meaning that for instance the following expression is illegal:
Today: Day_type := 4;
Current_Month: Month_type := 10;
... Today + Current_Month ... -- illegal
The predefined plus-operator can only add values of the same type, so the expression is illegal.
Types can be refined by declaring subtypes:
subtype Working_Hours is Hours range 0 .. 12; -- at most 12 Hours to work a day
subtype Working_Day is Weekday range Monday .. Friday; -- Days to work
Work_Load: constant array(Working_Day) of Working_Hours -- implicit type declaration
:= (Friday => 6, Monday => 4, others => 10); -- lookup table for working hours with initialization
Types can have modifiers such as limited, abstract, private etc. Private types do not show their inner structure; objects of limited types cannot be copied. Ada 95 adds further features for object-oriented extension of types.
Control structures
Ada is a structured programming language, meaning that the flow of control is structured into standard statements. All standard constructs and deep-level early exit are supported, so the use of the also supported "go to" commands is seldom needed.
-- while a is not equal to b, loop.
while a /= b loop
Ada.Text_IO.Put_Line ("Waiting");
end loop;
if a > b then
Ada.Text_IO.Put_Line ("Condition met");
else
Ada.Text_IO.Put_Line ("Condition not met");
end if;
for i in 1 .. 10 loop
Ada.Text_IO.Put ("Iteration: ");
Ada.Text_IO.Put (Integer'Image (i));
Ada.Text_IO.New_Line;
end loop;
loop
a := a + 1;
exit when a = 10;
end loop;
case i is
when 0 => Ada.Text_IO.Put ("zero");
when 1 => Ada.Text_IO.Put ("one");
when 2 => Ada.Text_IO.Put ("two");
-- case statements have to cover all possible cases:
when others => Ada.Text_IO.Put ("none of the above");
end case;
for aWeekday in Weekday'Range loop -- loop over an enumeration
Put_Line ( Weekday'Image(aWeekday) ); -- output string representation of an enumeration
if aWeekday in Working_Day then -- check of a subtype of an enumeration
Put_Line ( " to work for " &
Working_Hours'Image (Work_Load(aWeekday)) ); -- access into a lookup table
end if;
end loop;
Packages, procedures and functions
Among the parts of an Ada program are packages, procedures and functions.
Functions differ from procedures in that they must return a value. Function calls cannot be used "as a statement", and their result must be assigned to a variable. However, since Ada 2012, functions are not required to be pure and may mutate their suitably declared parameters or the global state.
Example:
Package specification (example.ads)
package Example is
type Number is range 1 .. 11;
procedure Print_and_Increment (j: in out Number);
end Example;
Package body (example.adb)
with Ada.Text_IO;
package body Example is
i : Number := Number'First;
procedure Print_and_Increment (j: in out Number) is
function Next (k: in Number) return Number is
begin
return k + 1;
end Next;
begin
Ada.Text_IO.Put_Line ( "The total is: " & Number'Image(j) );
j := Next (j);
end Print_and_Increment;
-- package initialization executed when the package is elaborated
begin
while i < Number'Last loop
Print_and_Increment (i);
end loop;
end Example;
This program can be compiled, e.g., by using the freely available open-source compiler GNAT, by executing
gnatmake -z example.adb
Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block.
Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order.
Pragmas
A pragma is a compiler directive that conveys information to the compiler to allow specific manipulating of compiled output. Certain pragmas are built into the language, while others are implementation-specific.
Examples of common usage of compiler pragmas would be to disable certain features, such as run-time type checking or array subscript boundary checking, or to instruct the compiler to insert object code instead of a function call (as C/C++ does with inline functions).
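A minimal sketch of the syntax follows; the subprogram is invented for illustration, while Inline and Suppress are language-defined pragmas of the kind described above.
with Ada.Text_IO;
procedure Pragma_Demo is
   function Square (X : Integer) return Integer is
   begin
      return X * X;
   end Square;
   --  Ask the compiler to expand calls to Square in place of a subprogram call.
   pragma Inline (Square);
   --  Disable run-time range checking within this scope; normally done only
   --  after the checks have been shown to be unnecessary.
   pragma Suppress (Range_Check);
begin
   Ada.Text_IO.Put_Line (Integer'Image (Square (12)));
end Pragma_Demo;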
Generics
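Ada generics let a subprogram or package be parameterised by types, values and subprograms; a generic unit must be instantiated before it can be used, which allows one algorithm to be reused across unrelated types while remaining fully type-checked. The example below is a small constructed sketch, not taken from the standard.
with Ada.Text_IO;
procedure Generics_Demo is
   --  A generic function: the element type and its ordering are formal parameters.
   generic
      type Element is private;
      with function "<" (Left, Right : Element) return Boolean is <>;
   function Minimum (A, B : Element) return Element;
   function Minimum (A, B : Element) return Element is
   begin
      if A < B then
         return A;
      else
         return B;
      end if;
   end Minimum;
   --  Instantiations for two unrelated types; the predefined "<" is used by default.
   function Min_Int   is new Minimum (Integer);
   function Min_Float is new Minimum (Float);
begin
   Ada.Text_IO.Put_Line (Integer'Image (Min_Int (3, 7)));
   Ada.Text_IO.Put_Line (Float'Image (Min_Float (2.5, 1.5)));
end Generics_Demo;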
See also
Ada compilers
ALGOL 68
APSE – a specification for a programming environment to support software development in Ada
Pascal
Ravenscar profile – a subset of the Ada tasking features designed for safety-critical hard real-time computing
Smalltalk
SPARK – a programming language consisting of a highly restricted subset of Ada, annotated with meta-information describing desired component behavior and individual runtime requirements
VHDL, Ada-based hardware description language
Notes
References
International standards
ISO/IEC 8652: Information technology—Programming languages—Ada
ISO/IEC 15291: Information technology—Programming languages—Ada Semantic Interface Specification (ASIS)
ISO/IEC 18009: Information technology—Programming languages—Ada: Conformity assessment of a language processor (ACATS)
IEEE Standard 1003.5b-1996, the POSIX Ada binding
Ada Language Mapping Specification, the CORBA interface description language (IDL) to Ada mapping
Rationale
These documents have been published in various forms, including print.
Also available apps.dtic.mil, pdf
Books
795 pages.
Further reading
External links
Ada Resource Association
DOD Ada programming language (ANSI/MIL STD 1815A-1983) specification
JTC1/SC22/WG9 ISO home of Ada Standards
Ada Programming Language Materials, 1981–1990. Charles Babbage Institute, University of Minnesota.
Department of Defense (June 1978), Requirements for High Order Computer Programming Languages: "Steelman"
David A. Wheeler (1996), Introduction to Steelman On-Line (Version 1.2).
SoftTech Inc. (1976), "Evaluation of ALGOL 68, Jovial J3B, Pascal, SIMULA 67, and TACPOL Versus TINMAN - Requirements for a Common High Order Programming Language." - See also: ALGOL 68, Jovial J3B, Pascal, SIMULA 67, and TACPOL (Defense Technical Information Center - DTIC ADA037637, Report Number 1021-14).
David A. Wheeler (1997), "Ada, C, C++, and Java vs. The Steelman". Originally published in Ada Letters July/August 1997.
Programming languages
.NET programming languages
Avionics programming languages
High Integrity Programming Language
Multi-paradigm programming languages
Programming language standards
Programming languages created in 1980
Programming languages with an ISO standard
Statically typed programming languages
Systems programming languages
1980 software
High-level programming languages
Ada Lovelace | Ada (programming language) | [
"Technology"
] | 5,324 | [
"Computer standards",
"Programming language standards"
] |
1,260 | https://en.wikipedia.org/wiki/Advanced%20Encryption%20Standard | The Advanced Encryption Standard (AES), also known by its original name Rijndael (), is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001.
AES is a variant of the Rijndael block cipher developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a family of ciphers with different key and block sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits.
AES has been adopted by the U.S. government. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data.
In the United States, AES was announced by the NIST as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001. This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable.
AES is included in the ISO/IEC 18033-3 standard. AES became effective as a U.S. federal government standard on May 26, 2002, after approval by U.S. Secretary of Commerce Donald Evans. AES is available in many different encryption packages, and is the first (and only) publicly accessible cipher approved by the U.S. National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module.
Definitive standards
The Advanced Encryption Standard (AES) is defined in each of:
FIPS PUB 197: Advanced Encryption Standard (AES)
ISO/IEC 18033-3: Block ciphers
Description of the ciphers
AES is based on a design principle known as a substitution–permutation network, and is efficient in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael, with a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, Rijndael per se is specified with block and key sizes that may be any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits. Most AES calculations are done in a particular finite field.
AES operates on a 4 × 4 column-major order array of 16 bytes termed the state:
The key size used for an AES cipher specifies the number of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of rounds is as follows:
10 rounds for 128-bit keys.
12 rounds for 192-bit keys.
14 rounds for 256-bit keys.
Each round consists of several processing steps, including one that depends on the encryption key itself. A set of reverse rounds are applied to transform ciphertext back into the original plaintext using the same encryption key.
High-level description of the algorithm
KeyExpansion – round keys are derived from the cipher key using the AES key schedule. AES requires a separate 128-bit round key block for each round plus one more.
Initial round key addition:
AddRoundKey – each byte of the state is combined with a byte of the round key using bitwise xor.
9, 11 or 13 rounds:
SubBytes – a non-linear substitution step where each byte is replaced with another according to a lookup table.
ShiftRows – a transposition step where the last three rows of the state are shifted cyclically a certain number of steps.
MixColumns – a linear mixing operation which operates on the columns of the state, combining the four bytes in each column.
AddRoundKey – each byte of the state is combined with a byte of the round key using bitwise xor.
Final round (making 10, 12 or 14 rounds in total):
SubBytes
ShiftRows
AddRoundKey (MixColumns is omitted in the final round; see the structural sketch below)
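The overall order of operations can be sketched as follows; this is a structural outline only, with stub (null) procedures standing in for the real transformations and an assumed round count of 10, not a working cipher.
procedure Cipher_Skeleton is
   Nr : constant := 10;  -- 10, 12 or 14 depending on key length
   procedure Add_Round_Key (Round : Natural) is null;  -- stubs only
   procedure Sub_Bytes   is null;
   procedure Shift_Rows  is null;
   procedure Mix_Columns is null;
begin
   Add_Round_Key (0);                      -- initial round key addition
   for Round in 1 .. Nr - 1 loop
      Sub_Bytes;
      Shift_Rows;
      Mix_Columns;
      Add_Round_Key (Round);
   end loop;
   Sub_Bytes;                              -- the final round omits MixColumns
   Shift_Rows;
   Add_Round_Key (Nr);
end Cipher_Skeleton;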
The SubBytes step
In the SubBytes step, each byte in the state array is replaced with its image under an 8-bit substitution box (the S-box). Before round 0, the state array is simply the plaintext/input. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), i.e., S(a) ≠ a for every byte a, and also any opposite fixed points, i.e., S(a) XOR a ≠ FF16 for every byte a.
While performing the decryption, the InvSubBytes step (the inverse of SubBytes) is used, which requires first taking the inverse of the affine transformation and then finding the multiplicative inverse.
The ShiftRows step
The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers.
The MixColumns step
In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher.
During this operation, each column is transformed using a fixed matrix (matrix left-multiplied by column gives new value of column in the state):
Matrix multiplication is composed of multiplication and addition of the entries. Entries are bytes treated as coefficients of a polynomial of order x^7. Addition is simply XOR. Multiplication is modulo the irreducible polynomial x^8 + x^4 + x^3 + x + 1. If processed bit by bit, then, after shifting, a conditional XOR with 1B16 should be performed if the shifted value is larger than FF16 (overflow must be corrected by subtraction of the generating polynomial). These are special cases of the usual multiplication in GF(2^8).
In more general sense, each column is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from GF(2)[x]. The MixColumns step can also be viewed as a multiplication by the shown particular MDS matrix in the finite field GF(2^8). This process is described further in the article Rijndael MixColumns.
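The doubling operation just described (a left shift followed, on overflow, by an XOR with 1B16) is small enough to show directly. The function below is an illustrative sketch; the sample values ({57} doubling to {AE}, and {AE} reducing to {47}) agree with the worked multiplication example in the AES specification.
with Ada.Text_IO;
with Interfaces; use Interfaces;
procedure Xtime_Demo is
   --  Multiplication by x (doubling) in GF(2^8): shift left one bit and, if the
   --  high bit was set, reduce by XORing with 1B16, the low byte of the AES
   --  polynomial x^8 + x^4 + x^3 + x + 1.
   function Xtime (B : Unsigned_8) return Unsigned_8 is
      Shifted : constant Unsigned_8 := Shift_Left (B, 1);
   begin
      if (B and 16#80#) /= 0 then
         return Shifted xor 16#1B#;
      else
         return Shifted;
      end if;
   end Xtime;
begin
   Ada.Text_IO.Put_Line (Unsigned_8'Image (Xtime (16#57#)));  -- 174, i.e. 16#AE#
   Ada.Text_IO.Put_Line (Unsigned_8'Image (Xtime (16#AE#)));  -- 71, i.e. 16#47#
end Xtime_Demo;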
The AddRoundKey step
In the AddRoundKey step, the subkey is combined with the state. For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwise XOR.
Optimization of the cipher
On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining the and steps with the step by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables (together occupying 4096 bytes). A round can then be performed with 16 table lookup operations and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the step. Alternatively, the table lookup operation can be performed with a single 256-entry 32-bit table (occupying 1024 bytes) followed by circular rotation operations.
Using a byte-oriented approach, it is possible to combine the , , and steps into a single round operation.
Security
The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protect classified information:
The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.
AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys.
By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys.
Known attacks
For cryptographers, a cryptographic "break" is anything faster than a brute-force attack, i.e., performing one trial decryption for each possible key in sequence. A break can thus include results that are infeasible with current technology. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns. The largest successful publicly known brute-force attack against a widely implemented block-cipher encryption algorithm was against a 64-bit RC5 key by distributed.net in 2006.
The key space increases by a factor of 2 for each additional bit of key length, and, if every possible value of the key is equiprobable, this translates into a doubling of the average brute-force key search time with every additional bit of key length. This implies that the effort of a brute-force search increases exponentially with key length. Key length in itself does not imply security against attacks, since there are ciphers with very long keys that have been found to be vulnerable.
AES has a fairly simple algebraic framework. In 2002, a theoretical attack, named the "XSL attack", was announced by Nicolas Courtois and Josef Pieprzyk, purporting to show a weakness in the AES algorithm, partially due to the low complexity of its nonlinear components. Since then, other papers have shown that the attack, as originally presented, is unworkable; see XSL attack on block ciphers.
During the AES selection process, developers of competing algorithms wrote of Rijndael's algorithm "we are concerned about [its] use ... in security-critical applications." In October 2000, however, at the end of the AES selection process, Bruce Schneier, a developer of the competing algorithm Twofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he "did not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic."
Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. In 2009, a new related-key attack was discovered that exploits the simplicity of AES's key schedule and has a complexity of 2^119. In December 2009 it was improved to 2^99.5. This is a follow-up to an attack discovered earlier in 2009 by Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys. However, related-key attacks are not of concern in any properly designed cryptographic protocol, as a properly designed protocol (i.e., implementational software) will take care not to allow related keys, essentially by constraining an attacker's means of selecting keys for relatedness.
Another attack was blogged by Bruce Schneier
on July 30, 2009, and released as a preprint
on August 3, 2009. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES.
The practicality of these attacks with stronger related keys has been criticized, for instance, by the paper on chosen-key-relations-in-the-middle attacks on AES-128 authored by Vincent Rijmen in 2010.
In November 2009, the first known-key distinguishing attack against a reduced 8-round version of AES-128 was released as a preprint.
This known-key distinguishing attack is an improvement of the rebound, or the start-from-the-middle attack, against AES-like permutations, which view two consecutive rounds of permutation as the application of a so-called Super-S-box. It works on the 8-round version of AES-128, with a time complexity of 2^48, and a memory complexity of 2^32. 128-bit AES uses 10 rounds, so this attack is not effective against full AES-128.
The first key-recovery attacks on full AES were by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011. The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192, and 2^254.3 for AES-256 by Biaoshuai Tao and Hongjun Wu in a 2015 paper, which are the current best results in key recovery attack against AES.
This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years to brute force on current and foreseeable hardware. Also, the authors calculate the best attack using their technique on AES with a 128-bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data, which was more than all the data stored on all the computers on the planet in 2016. A paper in 2015 later improved the space complexity to 2^56 bits, which is 9007 terabytes (while still keeping a time complexity of approximately 2^126).
According to the Snowden documents, the NSA is doing research on whether a cryptographic attack based on tau statistic may help to break AES.
At present, there is no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES when correctly implemented.
Side-channel attacks
Side-channel attacks do not attack the cipher as a black box, and thus are not related to cipher security as defined in the classical context, but are important in practice. They attack implementations of the cipher on hardware or software systems that inadvertently leak data. There are several such known attacks on various implementations of AES.
In April 2005, D. J. Bernstein announced a cache-timing attack that he used to break a custom server that used OpenSSL's AES encryption. The attack required over 200 million chosen plaintexts. The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation). However, as Bernstein pointed out, "reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples."
In October 2005, Dag Arne Osvik, Adi Shamir and Eran Tromer presented a paper demonstrating several cache-timing attacks against the implementations in AES found in OpenSSL and Linux's dm-crypt partition encryption function. One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES.
In December 2009 an attack on some hardware implementations was published that used differential fault analysis and allows recovery of a key with a complexity of 2^32.
In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a "near real time" recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL. Like some earlier attacks, this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account.
In March 2016, C. Ashokkumar, Ravi Prakash Giri and Bernard Menezes presented a side-channel attack on AES implementations that can recover the complete 128-bit AES key in just 6–7 blocks of plaintext/ciphertext, which is a substantial improvement over previous works that require between 100 and a million encryptions. The proposed attack requires standard user privilege and key-retrieval algorithms run under a minute.
Many modern CPUs have built-in hardware instructions for AES, which protect against timing-related side-channel attacks.
Quantum attacks
AES-256 is considered to be quantum resistant, as it has similar quantum resistance to AES-128's resistance against traditional, non-quantum, attacks at 128 bits of security. AES-192 and AES-128 are not considered quantum resistant due to their smaller key sizes. AES-192 has a strength of 96 bits against quantum attacks and AES-128 has 64 bits of strength against quantum attacks, making them both insecure.
NIST/CSEC validation
The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of cryptographic modules validated to NIST FIPS 140-2 is required by the United States Government for encryption of all data that has a classification of Sensitive but Unclassified (SBU) or above. From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: "Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2."
The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.
Although NIST publication 197 ("FIPS 197") is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such as Triple DES or SHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules.
The Cryptographic Algorithm Validation Program (CAVP) allows for independent validation of the correct implementation of the AES algorithm. Successful validation results in being listed on the NIST validations page. This testing is a pre-requisite for the FIPS 140-2 module validation. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data.
FIPS 140-2 validation is challenging to achieve both technically and fiscally. There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US) and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change.
Test vectors
Test vectors are a set of known ciphers for a given input and key. NIST distributes the reference of AES test vectors as AES Known Answer Test (KAT) Vectors.
Performance
High speed and low RAM requirements were some of the criteria of the AES selection process. As the chosen algorithm, AES performed well on a wide variety of hardware, from 8-bit smart cards to high-performance computers.
On a Pentium Pro, AES encryption requires 18 clock cycles per byte (cpb), equivalent to a throughput of about 11 MiB/s for a 200 MHz processor.
On Intel Core and AMD Ryzen CPUs supporting AES-NI instruction set extensions, throughput can be multiple GiB/s. On an Intel Westmere CPU, AES encryption using AES-NI takes about 1.3 cpb for AES-128 and 1.8 cpb for AES-256.
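As a rough worked conversion (the 3 GHz clock rate is an assumed, illustrative figure), cycles per byte relate to throughput as clock rate divided by cpb: 3 × 10^9 cycles/s ÷ 1.3 cycles/byte ≈ 2.3 × 10^9 bytes/s per core, which is consistent with the multi-GiB/s figures above; the same relation applied to the Pentium Pro numbers (200 MHz at 18 cpb) reproduces the roughly 11 MiB/s quoted earlier.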
Implementations
See also
AES modes of operation
Disk encryption
Whirlpool – hash function created by Vincent Rijmen and Paulo S. L. M. Barreto
List of free and open-source software packages
Notes
References
alternate link (companion web site contains online lectures on AES)
External links
AES algorithm archive information – (old, unmaintained)
Animation of Rijndael – AES deeply explained and animated using Flash (by Enrique Zabala / University ORT / Montevideo / Uruguay). This animation (in English, Spanish, and German) is also part of CrypTool 1 (menu Indiv. Procedures → Visualization of Algorithms → AES).
HTML5 Animation of Rijndael – Same Animation as above made in HTML5.
Advanced Encryption Standard
Cryptography | Advanced Encryption Standard | [
"Mathematics",
"Engineering"
] | 4,686 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
1,264 | https://en.wikipedia.org/wiki/Anisotropy | Anisotropy () is the structural property of non-uniformity in different directions, as opposed to isotropy. An anisotropic object or pattern has properties that differ according to direction of measurement. For example, many materials exhibit very different physical or mechanical properties when measured along different axes, e.g. absorbance, refractive index, conductivity, and tensile strength.
An example of anisotropy is light coming through a polarizer. Another is wood, which is easier to split along its grain than across it because of the directional non-uniformity of the grain (the grain is the same in one direction, not all directions).
Fields of interest
Computer graphics
In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet.
Anisotropic filtering (AF) is a method of enhancing the image quality of textures on surfaces that are far away and viewed at a shallow angle. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from, which can result in aliasing or blurring of textures. By reducing detail in one direction more than another, these effects can be reduced easily.
Chemistry
A chemical anisotropic filter, as used to filter particles, is a filter with increasingly smaller interstitial spaces in the direction of filtration so that the proximal regions filter out larger particles and distal regions increasingly remove smaller particles, resulting in greater flow-through and more efficient filtration.
In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene. This abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change.
Real-world imagery
Images of a gravity-bound or man-made environment are particularly anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity (vertical and horizontal).
Physics
Physicists from University of California, Berkeley reported about their detection of the cosmic anisotropy in cosmic microwave background radiation in 1977. Their experiment demonstrated the Doppler shift caused by the movement of the earth with respect to the early Universe matter, the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and polarization angles of quasars.
Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation" (such as that seen in lightning or a plasma globe) that is directional.
An anisotropic liquid has the fluidity of a normal liquid, but has an average structural order relative to each other along the molecular axis, unlike water or chloroform, which contain no structural ordering of the molecules. Liquid crystals are examples of anisotropic liquids.
Some materials conduct heat in a way that is isotropic, that is independent of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of typically diverse materials being thermally managed is required. The materials used to transfer and reject heat from the heat source in electronics are often anisotropic.
Many crystals are anisotropic to light ("optical anisotropy"), and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken (or an axis of symmetry, such as normal to crystalline layers). Some materials can have multiple such optical axes.
Geophysics and geology
Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long range order in a material, where features smaller than the seismic wavelength (e.g., crystals, cracks, pores, layers, or inclusions) have a dominant alignment. This alignment leads to a directional variation of elasticity wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth; significant seismic anisotropy has been detected in the Earth's crust, mantle, and inner core.
Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; electrical conductivity in one direction (e.g. parallel to a layer), is different from that in another (e.g. perpendicular to a layer). This property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Sand-bearing hydrocarbon assets have high resistivity (low conductivity), whereas shales have lower resistivity. Formation evaluation instruments measure this conductivity or resistivity, and the results are used to help find oil and gas in wells. The mechanical anisotropy measured for some of the sedimentary rocks like coal and shale can change with corresponding changes in their surface properties like sorption when gases are produced from the coal and shale reservoirs.
The hydraulic conductivity of aquifers is often anisotropic for the same reason. When calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account; otherwise the results may be subject to error.
Most common rock-forming minerals are anisotropic, including quartz and feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet.
Igneous rock like granite also shows the anisotropy due to the orientation of the minerals during the solidification process.
Medical acoustics
Anisotropy is also a well-known property in medical ultrasound imaging describing a different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed. Tendon fibers appear hyperechoic (bright) when the transducer is perpendicular to the tendon, but can appear hypoechoic (darker) when the transducer is angled obliquely. This can be a source of interpretation error for inexperienced practitioners.
Materials science and engineering
Anisotropy, in materials science, is a material's directional dependence of a physical property. This is a critical consideration for materials selection in engineering applications. A material with physical properties that are symmetric about an axis that is normal to a plane of isotropy is called a transversely isotropic material. Tensor descriptions of material properties can be used to determine the directional dependence of that property. For a monocrystalline material, anisotropy is associated with the crystal symmetry in the sense that more symmetric crystal types have fewer independent coefficients in the tensor description of a given property. When a material is polycrystalline, the directional dependence on properties is often related to the processing techniques it has undergone. A material with randomly oriented grains will be isotropic, whereas materials with texture will often be anisotropic. Textured materials are often the result of processing techniques like cold rolling, wire drawing, and heat treatment.
Mechanical properties of materials such as Young's modulus, ductility, yield strength, and high-temperature creep rate, are often dependent on the direction of measurement. Fourth-rank tensor properties, like the elastic constants, are anisotropic, even for materials with cubic symmetry. The Young's modulus relates stress and strain when an isotropic material is elastically deformed; to describe elasticity in an anisotropic material, stiffness (or compliance) tensors are used instead.
In metals, anisotropic elasticity behavior is present in all single crystals with three independent coefficients for cubic crystals, for example. For face-centered cubic materials such as nickel and copper, the stiffness is highest along the <111> direction, normal to the close-packed planes, and smallest parallel to <100>. Tungsten is so nearly isotropic at room temperature that it can be considered to have only two stiffness coefficients; aluminium is another metal that is nearly isotropic.
For an isotropic material, $G = \frac{E}{2(1+\nu)}$, where $G$ is the shear modulus, $E$ is the Young's modulus, and $\nu$ is the material's Poisson's ratio. Therefore, for cubic materials, we can think of anisotropy, $a_r$, as the ratio between the empirically determined shear modulus for the cubic material and its (isotropic) equivalent:
$$a_r = \frac{C_{44}}{\frac{E}{2(1+\nu)}} = \frac{2\,C_{44}(1+\nu)}{E} = \frac{2\,C_{44}}{C_{11}-C_{12}}$$
The latter expression is known as the Zener ratio, $a_r$, where $C_{ij}$ refers to elastic constants in Voigt (vector-matrix) notation. For an isotropic material, the ratio is one.
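A small sketch of how the ratio is evaluated in practice; the elastic constants below are approximate room-temperature literature values, quoted only for illustration:

```python
# Hedged sketch: Zener anisotropy ratio for cubic crystals, a_r = 2*C44/(C11 - C12).
# The constants (in GPa) are approximate literature values, for illustration only.
def zener_ratio(c11: float, c12: float, c44: float) -> float:
    return 2.0 * c44 / (c11 - c12)

print(zener_ratio(168.4, 121.4, 75.4))   # copper: about 3.2, strongly anisotropic
print(zener_ratio(523.0, 203.0, 160.0))  # tungsten: about 1.0, nearly isotropic
```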
Limitation of the Zener ratio to cubic materials is waived in the tensorial anisotropy index $A^T$ that takes into consideration all the 27 components of the fully anisotropic stiffness tensor. It is composed of two major parts, $A^I$ and $A^A$, the former referring to components existing in the cubic tensor and the latter in the anisotropic tensor, so that $A^T = A^I + A^A$. This first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist in orthotropic material, for instance. The second component of this index covers the influence of stiffness coefficients that are nonzero only for non-cubic materials and remains zero otherwise.
Fiber-reinforced or layered composite materials exhibit anisotropic mechanical properties, due to orientation of the reinforcement material. In many fiber-reinforced composites like carbon fiber or glass fiber based composites, the weave of the material (e.g. unidirectional or plain weave) can determine the extent of the anisotropy of the bulk material. The tunability of orientation of the fibers allows for application-based designs of composite materials, depending on the direction of stresses applied onto the material.
Amorphous materials such as glass and polymers are typically isotropic. Due to the highly randomized orientation of macromolecules in polymeric materials, polymers are in general described as isotropic. However, mechanically gradient polymers can be engineered to have directionally dependent properties through processing techniques or introduction of anisotropy-inducing elements. Researchers have built composite materials with aligned fibers and voids to generate anisotropic hydrogels, in order to mimic hierarchically ordered biological soft matter. 3D printing, especially Fused Deposition Modeling, can introduce anisotropy into printed parts. This is due to the fact that FDM is designed to extrude and print layers of thermoplastic materials. This creates materials that are strong when tensile stress is applied in parallel to the layers and weak when the material is perpendicular to the layers.
Microfabrication
Anisotropic etching techniques (such as deep reactive-ion etching) are used in microfabrication processes to create well defined microscopic features with a high aspect ratio. These features are commonly used in MEMS (microelectromechanical systems) and microfluidic devices, where the anisotropy of the features is needed to impart desired optical, electrical, or physical properties to the device. Anisotropic etching can also refer to certain chemical etchants used to etch a certain material preferentially over certain crystallographic planes (e.g., KOH etching of silicon [100] produces pyramid-like structures)
Neuroscience
Diffusion tensor imaging is an MRI technique that involves measuring the fractional anisotropy of the random motion (Brownian motion) of water molecules in the brain. Water molecules located in fiber tracts are more likely to move anisotropically, since they are restricted in their movement (they move more in the dimension parallel to the fiber tract rather than in the two dimensions orthogonal to it), whereas water molecules dispersed in the rest of the brain have less restricted movement and therefore display more isotropy. This difference in fractional anisotropy is exploited to create a map of the fiber tracts in the brain of the individual.
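For reference, fractional anisotropy is typically computed from the three eigenvalues of the diffusion tensor. A minimal sketch with illustrative eigenvalues (not patient data):

```python
# Hedged sketch: fractional anisotropy from the diffusion-tensor eigenvalues,
# FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, ranging from 0 to 1.
import math

def fractional_anisotropy(l1: float, l2: float, l3: float) -> float:
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))  # isotropic diffusion -> 0.0
print(fractional_anisotropy(1.7, 0.3, 0.2))  # strongly directional -> about 0.84
```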
Remote sensing and radiative transfer modeling
Radiance fields (see Bidirectional reflectance distribution function (BRDF)) from a reflective surface are often not isotropic in nature. This makes the total energy reflected from any scene a difficult quantity to calculate. In remote sensing applications, anisotropy functions can be derived for specific scenes, immensely simplifying the calculation of the net reflectance or (thereby) the net irradiance of a scene.
For example, let the BRDF be $\gamma(\Omega_i, \Omega_v)$, where $i$ denotes the incident direction and $v$ the viewing direction (as if from a satellite or other instrument), and let $P(\Omega_i)$ be the planar albedo, which represents the total reflectance from the scene. The anisotropy function is then the ratio $A(\Omega_i, \Omega_v) = \gamma(\Omega_i, \Omega_v) / P(\Omega_i)$.
It is of interest because, with knowledge of the anisotropy function as defined, a measurement of the BRDF from a single viewing direction (say, $\Omega_v$) yields a measure of the total scene reflectance (planar albedo) for that specific incident geometry (say, $\Omega_i$).
See also
Circular symmetry
References
External links
"Overview of Anisotropy"
DoITPoMS Teaching and Learning Package: "Introduction to Anisotropy"
"Gauge, and knitted fabric generally, is an anisotropic phenomenon"
Orientation (geometry)
Asymmetry | Anisotropy | [
"Physics",
"Mathematics"
] | 2,874 | [
"Topology",
"Space",
"Geometry",
"Asymmetry",
"Spacetime",
"Orientation (geometry)",
"Symmetry"
] |
1,267 | https://en.wikipedia.org/wiki/Alpha%20decay | Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or "decays" into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of +2e and a mass of approximately 4 Da. For example, uranium-238 decays to form thorium-234.
While alpha particles have a charge of +2e, this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms.
Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitter being the second lightest isotope of antimony, 104Sb. Exceptionally, however, beryllium-8 decays to two alpha particles.
Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force.
Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and have a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly small variation around this energy, due to the strong dependence of the half-life of this process on the energy produced. Because of their relatively large mass, the electric charge of +2e and relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air.
Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production.
History
Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He2+ ions.
By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well
and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of "tunneling" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay, and the energy of the emission, which had been previously discovered empirically and was known as the Geiger–Nuttall law.
Mechanism
The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 3 femtometers, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of the nucleons, but the total disruptive electromagnetic force of proton-proton repulsion trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size.
One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two free protons and two free neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation $E = (m_\text{i} - m_\text{f} - m_\text{p})\,c^2$,
where $m_\text{i}$ is the initial mass of the nucleus, $m_\text{f}$ is the mass of the nucleus after particle emission, and $m_\text{p}$ is the mass of the emitted (alpha) particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission releases 5.4 MeV of energy, while a single proton emission would require 6.1 MeV. Most of the disintegration energy becomes the kinetic energy of the alpha particle, although to fulfill conservation of momentum, part of the energy goes to the recoil of the nucleus itself (see atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Nevertheless, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry.
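As a concrete check of the uranium-232 figure, the calculation can be repeated with atomic masses taken from standard tables. The values below are rounded literature values, used only for illustration:

```python
# Hedged sketch: Q-value of U-232 alpha decay, E = (m_i - m_f - m_p) * c^2,
# with approximate atomic masses in unified atomic mass units (u).
U232, TH228, HE4 = 232.037156, 228.028741, 4.002602  # u, rounded literature values
U_TO_MEV = 931.494                                    # mass-energy of 1 u in MeV

q_value = (U232 - TH228 - HE4) * U_TO_MEV
print(f"Q(alpha) for U-232 is about {q_value:.2f} MeV")  # roughly 5.4 MeV, as quoted above
```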
These disintegration energies, however, are substantially smaller than the repulsive potential barrier created by the interplay between the strong nuclear and the electromagnetic force, which prevents the alpha particle from escaping. The energy needed to bring an alpha particle from infinity to a point near the nucleus just outside the range of the nuclear force's influence is generally in the range of about 25 MeV. An alpha particle within the nucleus can be thought of as being inside a potential barrier whose walls are 25 MeV above the potential at infinity. However, decay alpha particles only have energies of around 4 to 9 MeV above the potential at infinity, far less than the energy needed to overcome the barrier and escape.
Quantum tunneling
Quantum mechanics, however, allows the alpha particle to escape via quantum tunneling. The quantum tunneling theory of alpha decay, independently developed by George Gamow and by Ronald Wilfred Gurney and Edward Condon in 1928, was hailed as a very striking confirmation of quantum theory. Essentially, the alpha particle escapes from the nucleus not by acquiring enough energy to pass over the wall confining it, but by tunneling through the wall. Gurney and Condon made the following observation in their paper on it:
It has hitherto been necessary to postulate some special arbitrary 'instability' of the nucleus, but in the following note, it is pointed out that disintegration is a natural consequence of the laws of quantum mechanics without any special hypothesis... Much has been written of the explosive violence with which the α-particle is hurled from its place in the nucleus. But from the process pictured above, one would rather say that the α-particle almost slips away unnoticed.
The theory supposes that the alpha particle can be considered an independent particle within a nucleus, that is in constant motion but held within the nucleus by strong interaction. At each collision with the repulsive potential barrier of the electromagnetic force, there is a small non-zero probability that it will tunnel its way out. An alpha particle with a speed of 1.5×10⁷ m/s within a nuclear diameter of approximately 10⁻¹⁴ m will collide with the barrier more than 10²¹ times per second. However, if the probability of escape at each collision is very small, the half-life of the radioisotope will be very long, since it is the time required for the total probability of escape to reach 50%. As an extreme example, the half-life of the isotope bismuth-209 is about 2.01×10¹⁹ years.
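The collision-rate figure follows directly from the numbers quoted: dividing the particle's speed by the nuclear diameter gives the number of barrier traversals per second. A quick order-of-magnitude check:

```python
# Hedged order-of-magnitude check of the barrier-collision rate quoted above.
speed = 1.5e7       # m/s, alpha-particle speed inside the nucleus
diameter = 1e-14    # m, approximate nuclear diameter

print(f"{speed / diameter:.1e} collisions per second")  # about 1.5e21
```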
The isotopes in beta-decay stable isobars that are also stable with regards to double beta decay with mass number A = 5, A = 8, 143 ≤ A ≤ 155, 160 ≤ A ≤ 162, and A ≥ 165 are theorized to undergo alpha decay. All other mass numbers (isobars) have exactly one theoretically stable nuclide. Those with mass 5 decay to helium-4 and a proton or a neutron, and those with mass 8 decay to two helium-4 nuclei; their half-lives (helium-5, lithium-5, and beryllium-8) are very short, unlike the half-lives for all other such nuclides with A ≤ 209, which are very long. (Such nuclides with A ≤ 209 are primordial nuclides except 146Sm.)
Working out the details of the theory leads to an equation relating the half-life of a radioisotope to the decay energy of its alpha particles, a theoretical derivation of the empirical Geiger–Nuttall law.
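For reference, a commonly quoted form of the empirical law (this display is a standard textbook statement, not taken from the article itself) relates the decay constant $\lambda$ to the alpha-particle energy $E_\alpha$ and the atomic number $Z$ through fitted constants $a_1$ and $a_2$:

$$\log_{10} \lambda = -a_1 \, \frac{Z}{\sqrt{E_\alpha}} + a_2$$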
Uses
Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm.
Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones).
Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes and were used for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay.
Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the "static cling" to dissipate more rapidly.
Toxicity
Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material, along with a very short mean free path. This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis; however, many alpha sources are also accompanied by beta-emitting radio daughters, and both are often accompanied by gamma photon emission.
Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons.
However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the weight of the alpha (about 4 Da) divided by the weight of the parent (typically about 200 Da) times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle, and causes a very dense trail of ionization; the atom is typically a heavy metal, which preferentially collects on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations.
The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden.
The Russian defector Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.
References
Alpha emitters by increasing energy (Appendix 1)
Notes
External links
The LIVEChart of Nuclides - IAEA with filter on alpha decay
Alpha decay with 3 animated examples showing the recoil of daughter
See also
Beta decay
Gamma decay
Helium
Nuclear physics
Radioactivity | Alpha decay | [
"Physics",
"Chemistry"
] | 2,790 | [
"Radioactivity",
"Nuclear physics"
] |
1,271 | https://en.wikipedia.org/wiki/Analytical%20engine | The analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's Difference Engine, which was a design for a simpler mechanical calculator.
The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the structure of the analytical engine was essentially the same as that which has dominated computer design in the electronic era. The analytical engine is one of the most successful achievements of Charles Babbage.
Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that Konrad Zuse built the first general-purpose computer, Z3, more than a century after Babbage had proposed the pioneering analytical engine in 1837.
Design
Babbage's first attempt at a mechanical computing device, the Difference Engine, was a special-purpose machine designed to tabulate logarithms and trigonometric functions by evaluating finite differences to create approximating polynomials. Construction of this machine was never completed; Babbage had conflicts with his chief engineer, Joseph Clement, and ultimately the British government withdrew its funding for the project.
During this project, Babbage realised that a much more general design, the analytical engine, was possible. The work on the design of the analytical engine started around 1833.
The input, consisting of programs ("formulae") and data, was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.
There was to be a store (that is, a memory) capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.6 kB). An arithmetic unit (the "mill") would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially (1838) it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. Later drawings (1858) depict a regularised grid layout. Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify.
The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards. Babbage developed some two dozen programs for the analytical engine between 1837 and 1840, and one program later. These programs treat polynomials, iterative formulas, Gaussian elimination, and Bernoulli numbers.
In 1842, the Italian mathematician Luigi Federico Menabrea published a description of the engine in French, based on lectures Babbage gave when he visited Turin in 1840. In 1843, the description was translated into English and extensively annotated by Ada Lovelace, who had become interested in the engine eight years earlier. In recognition of her additions to Menabrea's paper, which included a way to calculate Bernoulli numbers using the machine (widely considered to be the first complete computer program), she has been described as the first computer programmer.
Construction
Late in his life, Babbage sought ways to build a simplified version of the machine, and assembled a small part of it before his death in 1871.
In 1878, a committee of the British Association for the Advancement of Science described the analytical engine as "a marvel of mechanical ingenuity", but recommended against constructing it. The committee acknowledged the usefulness and value of the machine, but could not estimate the cost of building it, and were unsure whether the machine would function correctly after being built.
Intermittently from 1880 to 1910, Babbage's son Henry Prevost Babbage was constructing a part of the mill and the printing apparatus. In 1910, it was able to calculate a (faulty) list of multiples of pi. This constituted only a small part of the whole engine; it was not programmable and had no storage. (Popular images of this section have sometimes been mislabelled, implying that it was the entire mill or even the entire engine.) Henry Babbage's "analytical engine mill" is on display at the Science Museum in London. Henry also proposed building a demonstration version of the full engine, with a smaller storage capacity: "perhaps for a first machine ten (columns) would do, with fifteen wheels in each". Such a version could manipulate 20 numbers of 25 digits each, and what it could be told to do with those numbers could still be impressive. "It is only a question of cards and time", wrote Henry Babbage in 1888, "... and there is no reason why (twenty thousand) cards should not be used if necessary, in an analytical engine for the purposes of the mathematician".
In 1991, the London Science Museum built a complete and working specimen of Babbage's Difference Engine No. 2, a design that incorporated refinements Babbage discovered during the development of the analytical engine. This machine was built using materials and engineering tolerances that would have been available to Babbage, quelling the suggestion that Babbage's designs could not have been produced using the manufacturing technology of his time.
In October 2010, John Graham-Cumming started a "Plan 28" campaign to raise funds by "public subscription" to enable serious historical and academic study of Babbage's plans, with a view to then build and test a fully working virtual design which will then in turn enable construction of the physical analytical engine. As of May 2016, actual construction had not been attempted, since no consistent understanding could yet be obtained from Babbage's original design drawings. In particular it was unclear whether it could handle the indexed variables which were required for Lovelace's Bernoulli program. By 2017, the "Plan 28" effort reported that a searchable database of all catalogued material was available, and an initial review of Babbage's voluminous Scribbling Books had been completed.
Many of Babbage's original drawings have been digitised and are publicly available online.
Instruction set
Babbage is not known to have written down an explicit set of instructions for the engine in the manner of a modern processor manual. Instead he showed his programs as lists of states during their execution, showing what operator was run at each step with little indication of how the control flow would be guided.
Allan G. Bromley has assumed that the card deck could be read in forwards and backwards directions as a function of conditional branching after testing for conditions, which would make the engine Turing-complete:
...the cards could be ordered to move forward and reverse (and hence to loop)...
The introduction for the first time, in 1845, of user operations for a variety of service functions including, most importantly, an effective system for user control of looping in user programs.
There is no indication how the direction of turning of the operation and variable cards is specified. In the absence of other evidence I have had to adopt the minimal default assumption that both the operation and variable cards can only be turned backward as is necessary to implement the loops used in Babbage's sample programs. There would be no mechanical or microprogramming difficulty in placing the direction of motion under the control of the user.
In their emulator of the engine, Fourmilab say:
The Engine's Card Reader is not constrained to simply process the cards in a chain one after another from start to finish. It can, in addition, directed by the very cards it reads and advised by whether the Mill's run-up lever is activated, either advance the card chain forward, skipping the intervening cards, or backward, causing previously-read cards to be processed once again.
This emulator does provide a written symbolic instruction set, though this has been constructed by its authors rather than based on Babbage's original works. For example, a factorial program would be written as:
N0 6
N1 1
N2 1
×
L1
L0
S1
–
L0
L2
S0
L2
L0
CB?11
where the CB is the conditional branch instruction or "combination card" used to make the control flow jump, in this case backward by 11 cards.
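Read loosely, the card deck above implements a multiply-and-decrement loop. The following sketch is one plausible reading of what it computes, not an emulation of the card semantics themselves; in particular, the exact run-up condition tested by the combination card is glossed over here.

```python
# Hedged sketch of the factorial card program above: column 0 holds the counter (6),
# column 1 the running product, column 2 the constant 1; the combination card loops
# back 11 cards until the counter is exhausted, leaving 6! = 720 in column 1.
v0, v1, v2 = 6, 1, 1        # N0 6, N1 1, N2 1
while True:
    v1 = v1 * v0            # ×: L1, L0 -> S1
    v0 = v0 - v2            # –: L0, L2 -> S0
    if v0 <= 0:             # CB?11: loop while the counter has not run down
        break
print(v1)                   # 720
```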
Influence
Predicted influence
Babbage understood that the existence of an automatic computer would kindle interest in the field now known as algorithmic efficiency, writing in his Passages from the Life of a Philosopher, "As soon as an analytical engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise—By what course of calculation can these results be arrived at by the machine in the shortest time?"
Computer science
From 1872, Henry continued diligently with his father's work and then intermittently in retirement in 1875.
Percy Ludgate wrote about the engine in 1914 and published his own design for an analytical engine in 1909. It was drawn up in detail, but never built, and the drawings have never been found. Ludgate's engine would be much smaller than Babbage's, and hypothetically would be capable of multiplying two 20-decimal-digit numbers in about six seconds.
In his work Essays on Automatics (1914) Leonardo Torres Quevedo, inspired by Babbage, designed a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also contains the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which consisted of an arithmetic unit connected to a (possibly remote) typewriter, on which commands could be typed and the results printed automatically.
Vannevar Bush's paper Instrumental Analysis (1936) included several references to Babbage's work. In the same year he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.
Despite this groundwork, Babbage's work fell into historical obscurity, and the analytical engine was unknown to builders of electromechanical and electronic computing machines in the 1930s and 1940s when they began their work, resulting in the need to re-invent many of the architectural innovations Babbage had proposed. Howard Aiken, who built the quickly-obsoleted electromechanical calculator, the Harvard Mark I, between 1937 and 1945, praised Babbage's work likely as a way of enhancing his own stature, but knew nothing of the analytical engine's architecture during the construction of the Mark I, and considered his visit to the constructed portion of the analytical engine "the greatest disappointment of my life". The Mark I showed no influence from the analytical engine and lacked the analytical engine's most prescient architectural feature, conditional branching. J. Presper Eckert and John W. Mauchly similarly were not aware of the details of Babbage's analytical engine work prior to the completion of their design for the first electronic general-purpose computer, the ENIAC.
Comparison to other early computers
If the analytical engine had been built, it would have been digital, programmable and Turing-complete. It would, however, have been very slow. Luigi Federico Menabrea reported in Sketch of the Analytical Engine: "Mr. Babbage believes he can, by his engine, form the product of two numbers, each containing twenty figures, in three minutes".
By comparison the Harvard Mark I could perform the same task in just six seconds (though it is debatable that computer is Turing complete; the ENIAC, which is, would also have been faster). A modern CPU could do the same thing in under a billionth of a second.
In popular culture
The cyberpunk novelists William Gibson and Bruce Sterling co-authored a steampunk novel of alternative history titled The Difference Engine in which Babbage's difference and analytical engines became available to Victorian society. The novel explores the consequences and implications of the early introduction of computational technology.
Moriarty by Modem, a short story by Jack Nimersheim, describes an alternative history where Babbage's analytical engine was indeed completed and had been deemed highly classified by the British government. The characters of Sherlock Holmes and Moriarty had in reality been a set of prototype programs written for the analytical engine. This short story follows Holmes as his program is implemented on modern computers and he is forced to compete against his nemesis yet again in the modern counterparts of Babbage's analytical engine.
A similar setting to The Difference Engine is used by Sydney Padua in the webcomic The Thrilling Adventures of Lovelace and Babbage. It features an alternative history where Ada Lovelace and Babbage have built the analytical engine and use it to fight crime at Queen Victoria's request. The comic is based on thorough research on the biographies of and correspondence between Babbage and Lovelace, which is then twisted for humorous effect.
The Orion's Arm online project features the Machina Babbagenseii, fully sentient Babbage-inspired mechanical computers. Each is the size of a large asteroid, only capable of surviving in microgravity conditions, and processes data at 0.5% the speed of a human brain.
Charles Babbage and Ada Lovelace appear in an episode of Doctor Who, "Spyfall Part 2", where the engine is displayed and referenced.
References
Bibliography
External links
The Babbage Papers, Science Museum archive
The Analytical Engine at Fourmilab, includes historical documents and online simulations
Image of a later Plan of Analytical Engine with grid layout (1858)
First working Babbage "barrel" actually assembled, circa 2005
Special issue, IEEE Annals of the History of Computing, Volume 22, Number 4, October–December 2000
Babbage, Science Museum, London (archived)
Plan 28: Building Charles Babbage's Analytical Engine
Charles Babbage
Computer-related introductions in 1837
English inventions
Mechanical calculators
Mechanical computers
One-of-a-kind computers
Ada Lovelace | Analytical engine | [
"Physics",
"Technology"
] | 3,101 | [
"Physical systems",
"Machines",
"Mechanical computers"
] |
1,309 | https://en.wikipedia.org/wiki/Almost%20all | In mathematics, the term "almost all" means "all but a negligible quantity". More precisely, if X is a set, "almost all elements of X" means "all elements of X but those in a negligible subset of X". The meaning of "negligible" depends on the mathematical context; for instance, it can mean finite, countable, or null.
In contrast, "almost no" means "a negligible quantity"; that is, "almost no elements of " means "a negligible quantity of elements of ".
Meanings in different areas of mathematics
Prevalent meaning
Throughout mathematics, "almost all" is sometimes used to mean "all (elements of an infinite set) except for finitely many". This use occurs in philosophy as well. Similarly, "almost all" can mean "all (elements of an uncountable set) except for countably many".
Examples:
Almost all positive integers are greater than 1012.
Almost all prime numbers are odd (2 is the only exception).
Almost all polyhedra are irregular (as there are only nine exceptions: the five platonic solids and the four Kepler–Poinsot polyhedra).
If P is a nonzero polynomial, then P(x) ≠ 0 for almost all x (if not all x).
Meaning in measure theory
When speaking about the reals, sometimes "almost all" can mean "all reals except for a null set". Similarly, if S is some set of reals, "almost all numbers in S" can mean "all numbers in S except for those in a null set". The real line can be thought of as a one-dimensional Euclidean space. In the more general case of an n-dimensional space (where n is a positive integer), these definitions can be generalised to "all points except for those in a null set" or "all points in S except for those in a null set" (this time, S is a set of points in the space). Even more generally, "almost all" is sometimes used in the sense of "almost everywhere" in measure theory, or in the closely related sense of "almost surely" in probability theory.
Examples:
In a measure space, such as the real line, countable sets are null. The set of rational numbers is countable, so almost all real numbers are irrational.
Georg Cantor's first set theory article proved that the set of algebraic numbers is countable as well, so almost all reals are transcendental.
Almost all reals are normal.
The Cantor set is also null. Thus, almost all reals are not in it even though it is uncountable.
The derivative of the Cantor function is 0 for almost all numbers in the unit interval. It follows from the previous example because the Cantor function is locally constant, and thus has derivative 0 outside the Cantor set.
Meaning in number theory
In number theory, "almost all positive integers" can mean "the positive integers in a set whose natural density is 1". That is, if A is a set of positive integers, and if the proportion of positive integers in A below n (out of all positive integers below n) tends to 1 as n tends to infinity, then almost all positive integers are in A.
More generally, let S be an infinite set of positive integers, such as the set of even positive numbers or the set of primes, if A is a subset of S, and if the proportion of elements of S below n that are in A (out of all elements of S below n) tends to 1 as n tends to infinity, then it can be said that almost all elements of S are in A.
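The density definition is easy to explore numerically. The sketch below estimates the proportion of positive integers below n that are composite, which tends to 1 as n grows; this is the sense in which almost all positive integers are composite (see the examples that follow). It is illustrative only.

```python
# Hedged sketch: natural density of the composite numbers, estimated by counting
# composites below n with a prime sieve; the proportion tends to 1 as n grows.
def composite_density(n: int) -> float:
    sieve = [True] * n
    sieve[0] = sieve[1] = False                    # 1 is neither prime nor composite
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    primes = sum(sieve)                            # primes below n
    composites = (n - 1) - primes - 1              # integers below n, minus primes and 1
    return composites / (n - 1)

for n in (10**2, 10**4, 10**6):
    print(n, round(composite_density(n), 4))       # 0.7374, 0.877, 0.9215 (approaching 1)
```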
Examples:
The natural density of cofinite sets of positive integers is 1, so each of them contains almost all positive integers.
Almost all positive integers are composite.
Almost all even positive numbers can be expressed as the sum of two primes.
Almost all primes are isolated. Moreover, for every positive integer g, almost all primes p have prime gaps of more than g both to their left and to their right; that is, there is no other prime between p − g and p + g.
Meaning in graph theory
In graph theory, if A is a set of (finite labelled) graphs, it can be said to contain almost all graphs, if the proportion of graphs with n vertices that are in A tends to 1 as n tends to infinity. However, it is sometimes easier to work with probabilities, so the definition is reformulated as follows. The proportion of graphs with n vertices that are in A equals the probability that a random graph with n vertices (chosen with the uniform distribution) is in A, and choosing a graph in this way has the same outcome as generating a graph by flipping a coin for each pair of vertices to decide whether to connect them. Therefore, equivalently to the preceding definition, the set A contains almost all graphs if the probability that a coin-flip–generated graph with n vertices is in A tends to 1 as n tends to infinity. Sometimes, the latter definition is modified so that the graph is chosen randomly in some other way, where not all graphs with n vertices have the same probability, and those modified definitions are not always equivalent to the main one.
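The coin-flip formulation lends itself to a quick simulation. The sketch below samples graphs exactly as described (a fair coin per vertex pair) and estimates the probability of the diameter-2 property mentioned in the examples below; it is illustrative only, and the trial counts are arbitrary.

```python
# Hedged sketch: estimate the probability that a coin-flip random graph on n labelled
# vertices has diameter 2 (not complete, and every non-adjacent pair shares a neighbour).
import random

def has_diameter_two(adj):
    n = len(adj)
    for u in range(n):
        for v in range(u + 1, n):
            if v not in adj[u] and not (adj[u] & adj[v]):
                return False                            # some pair is at distance > 2
    return any(len(adj[v]) < n - 1 for v in range(n))   # exclude the complete graph (diameter 1)

def estimate(n, trials=200):
    hits = 0
    for _ in range(trials):
        adj = [set() for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < 0.5:                # the coin flip for each pair of vertices
                    adj[u].add(v)
                    adj[v].add(u)
        hits += has_diameter_two(adj)
    return hits / trials

for n in (5, 10, 20):
    print(n, estimate(n))                                # the estimate approaches 1 as n grows
```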
The use of the term "almost all" in graph theory is not standard; the term "asymptotically almost surely" is more commonly used for this concept.
Example:
Almost all graphs are asymmetric.
Almost all graphs have diameter 2.
Meaning in topology
In topology and especially dynamical systems theory (including applications in economics), "almost all" of a topological space's points can mean "all of the space's points except for those in a meagre set". Some use a more limited definition, where a subset contains almost all of the space's points only if it contains some open dense set.
Example:
Given an irreducible algebraic variety, the properties that hold for almost all points in the variety are exactly the generic properties. This is due to the fact that in an irreducible algebraic variety equipped with the Zariski topology, all nonempty open sets are dense.
Meaning in algebra
In abstract algebra and mathematical logic, if U is an ultrafilter on a set X, "almost all elements of X" sometimes means "the elements of some element of U". For any partition of X into two disjoint sets, one of them will necessarily contain almost all elements of X. It is possible to think of the elements of a filter on X as containing almost all elements of X, even if it isn't an ultrafilter.
Proofs
See also
Almost
Almost everywhere
Almost surely
References
Primary sources
Secondary sources
Mathematical terminology | Almost all | [
"Mathematics"
] | 1,379 | [
"nan"
] |
1,313 | https://en.wikipedia.org/wiki/Aromatic%20compound | Aromatic compounds or arenes are organic compounds "with a chemistry typified by benzene" and "cyclically conjugated."
The word "aromatic" originates from the past grouping of molecules based on odor, before their general chemical properties were understood. The current definition of aromatic compounds does not have any relation to their odor. Aromatic compounds are now defined as cyclic compounds satisfying Hückel's Rule.
Aromatic compounds have the following general properties:
Typically unreactive
Often non-polar and hydrophobic
High carbon-hydrogen ratio
Burn with a strong sooty yellow flame, due to high C:H ratio
Undergo electrophilic substitution reactions and nucleophilic aromatic substitutions
Arenes are typically split into two categories - benzoids, that contain a benzene derivative and follow the benzene ring model, and non-benzoids that contain other aromatic cyclic derivatives. Aromatic compounds are commonly used in organic synthesis and are involved in many reaction types, following both additions and removals, as well as saturation and dearomatization.
Heteroarenes
Heteroarenes are aromatic compounds, where at least one methine or vinylene (-C= or -CH=CH-) group is replaced by a heteroatom: oxygen, nitrogen, or sulfur. Examples of non-benzene compounds with aromatic properties are furan, a heterocyclic compound with a five-membered ring that includes a single oxygen atom, and pyridine, a heterocyclic compound with a six-membered ring containing one nitrogen atom. Hydrocarbons without an aromatic ring are called aliphatic. Approximately half of compounds known in 2000 are described as aromatic to some extent.
Applications
Aromatic compounds are pervasive in nature and industry. Key industrial aromatic hydrocarbons are benzene, toluene, xylene called BTX. Many biomolecules have phenyl groups including the so-called aromatic amino acids.
Benzene ring model
Benzene, C6H6, is the least complex aromatic hydrocarbon, and it was the first one defined as such. Its bonding nature was first recognized independently by Joseph Loschmidt and August Kekulé in the 19th century. Each carbon atom in the hexagonal cycle has four electrons to share. One electron forms a sigma bond with the hydrogen atom, and one is used in covalently bonding to each of the two neighboring carbons. This leaves six electrons, shared equally around the ring in delocalized pi molecular orbitals the size of the ring itself. This represents the equivalent nature of the six carbon-carbon bonds, all of bond order 1.5. This equivalency can also be explained by resonance forms. The electrons are visualized as floating above and below the ring, with the electromagnetic fields they generate acting to keep the ring flat.
The circle symbol for aromaticity was introduced by Sir Robert Robinson and his student James Armit in 1925 and popularized starting in 1959 by the Morrison & Boyd textbook on organic chemistry. The proper use of the symbol is debated: some publications use it for any cyclic π system, while others use it only for those π systems that obey Hückel's rule. Some argue that, in order to stay in line with Robinson's originally intended proposal, the use of the circle symbol should be limited to monocyclic 6 π-electron systems. In this way the circle symbol for a six-center six-electron bond can be compared to the Y symbol for a three-center two-electron bond.
Benzene and derivatives of benzene
Benzene derivatives have from one to six substituents attached to the central benzene core. Examples of benzene compounds with just one substituent are phenol, which carries a hydroxyl group, and toluene with a methyl group. When there is more than one substituent present on the ring, their spatial relationship becomes important for which the arene substitution patterns ortho, meta, and para are devised. When reacting to form more complex benzene derivatives, the substituents on a benzene ring can be described as either activated or deactivated, which are electron donating and electron withdrawing respectively. Activators are known as ortho-para directors, and deactivators are known as meta directors. Upon reacting, substituents will be added at the ortho, para or meta positions, depending on the directivity of the current substituents to make more complex benzene derivatives, often with several isomers. Electron flow leading to re-aromatization is key in ensuring the stability of such products.
For example, three isomers exist for cresol because the methyl group and the hydroxyl group (both ortho para directors) can be placed next to each other (ortho), one position removed from each other (meta), or two positions removed from each other (para). Given that both the methyl and hydroxyl group are ortho-para directors, the ortho and para isomers are typically favoured. Xylenol has two methyl groups in addition to the hydroxyl group, and, for this structure, 6 isomers exist.
Arene rings can stabilize charges, as seen in, for example, phenol (C6H5–OH), which is acidic at the hydroxyl (OH), as charge on the oxygen (alkoxide –O−) is partially delocalized into the benzene ring.
Non-benzylic arenes
Although benzylic arenes are common, non-benzylic compounds are also exceedingly important. Any compound containing a cyclic portion that conforms to Hückel's rule and is not a benzene derivative can be considered a non-benzylic aromatic compound.
Monocyclic arenes
Of annulenes larger than benzene, [12]annulene and [14]annulene are weakly aromatic compounds and [18]annulene, Cyclooctadecanonaene, is aromatic, though strain within the structure causes a slight deviation from the precisely planar structure necessary for aromatic categorization. Another example of a non-benzylic monocyclic arene is the cyclopropenyl (cyclopropenium cation), which satisfies Hückel's rule with an n equal to 0. Note, only the cationic form of this cyclic propenyl is aromatic, given that neutrality in this compound would violate either the octet rule or Hückel's rule.
Other non-benzylic monocyclic arenes include the aforementioned heteroarenes, in which one or more carbon atoms are replaced by heteroatoms such as N, O or S. Common examples of these are the five-membered pyrrole and six-membered pyridine, both of which contain a ring nitrogen.
Polycyclic aromatic hydrocarbons
Polycyclic aromatic hydrocarbons, also known as polynuclear aromatic compounds (PAHs) are aromatic hydrocarbons that consist of fused aromatic rings and do not contain heteroatoms or carry substituents. Naphthalene is the simplest example of a PAH. PAHs occur in oil, coal, and tar deposits, and are produced as byproducts of fuel burning (whether fossil fuel or biomass). As pollutants, they are of concern because some compounds have been identified as carcinogenic, mutagenic, and teratogenic. PAHs are also found in cooked foods. Studies have shown that high levels of PAHs are found, for example, in meat cooked at high temperatures such as grilling or barbecuing, and in smoked fish. They are also a good candidate molecule to act as a basis for the earliest forms of life. In graphene the PAH motif is extended to large 2D sheets.
Reactions
Aromatic ring systems participate in many organic reactions.
Substitution
In aromatic substitution, one substituent on the arene ring, usually hydrogen, is replaced by another reagent. The two main types are electrophilic aromatic substitution, when the active reagent is an electrophile, and nucleophilic aromatic substitution, when the reagent is a nucleophile. In radical-nucleophilic aromatic substitution, the active reagent is a radical.
An example of electrophilic aromatic substitution is the nitration of salicylic acid, where a nitro group is added para to the hydroxyl substituent.
Nucleophilic aromatic substitution involves displacement of a leaving group, such as a halide, on an aromatic ring. Aromatic rings are usually nucleophilic, but in the presence of electron-withdrawing groups aromatic compounds undergo nucleophilic substitution. Mechanistically, this reaction differs from a common SN2 reaction, because it occurs at a trigonal carbon atom (sp2 hybridization).
Hydrogenation
Hydrogenation of arenes creates saturated rings. The compound 1-naphthol is completely reduced to a mixture of decalin-ol isomers.
The compound resorcinol, hydrogenated with Raney nickel in the presence of aqueous sodium hydroxide, forms an enolate which is alkylated with methyl iodide to give 2-methyl-1,3-cyclohexanedione.
Dearomatization
In dearomatization reactions the aromaticity of the reactant is lost. In this regard, the dearomatization is related to hydrogenation. A classic approach is Birch reduction. The methodology is used in synthesis.
See also
Aromatic substituents: Aryl, Aryloxy and Arenediyl
Asphaltene
Hydrodealkylation
Simple aromatic rings
Rhodium-platinum oxide, a catalyst used to hydrogenate aromatic compounds.
References
External links | Aromatic compound | [
"Chemistry"
] | 2,045 | [
"Organic compounds",
"Aromatic compounds"
] |
1,317 | https://en.wikipedia.org/wiki/Antimatter | In modern physics, antimatter is defined as matter composed of the antiparticles (or "partners") of the corresponding particles in "ordinary" matter, and can be thought of as matter with reversed charge, parity, and time, known as CPT reversal. Antimatter occurs in natural processes like cosmic ray collisions and some types of radioactive decay, but only a tiny fraction of these have successfully been bound together in experiments to form antiatoms. Minuscule numbers of antiparticles can be generated at particle accelerators, but total artificial production has been only a few nanograms. No macroscopic amount of antimatter has ever been assembled due to the extreme cost and difficulty of production and handling. Nonetheless, antimatter is an essential component of widely available applications related to beta decay, such as positron emission tomography, radiation therapy, and industrial imaging.
In theory, a particle and its antiparticle (for example, a proton and an antiproton) have the same mass, but opposite electric charge, and other differences in quantum numbers.
A collision between any particle and its anti-particle partner leads to their mutual annihilation, giving rise to various proportions of intense photons (gamma rays), neutrinos, and sometimes less-massive particle–antiparticle pairs. The majority of the total energy of annihilation emerges in the form of ionizing radiation. If surrounding matter is present, the energy content of this radiation will be absorbed and converted into other forms of energy, such as heat or light. The amount of energy released is usually proportional to the total mass of the collided matter and antimatter, in accordance with the notable mass–energy equivalence equation, E = mc².
Antiparticles bind with each other to form antimatter, just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton (the antiparticle of the proton) can form an antihydrogen atom. The nuclei of antihelium have been artificially produced, albeit with difficulty, and are the most complex anti-nuclei so far observed. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles is hypothesised to have occurred is called baryogenesis.
Definitions
Antimatter particles carry the same charge as matter particles, but of opposite sign. That is, an antiproton is negatively charged and an antielectron (positron) is positively charged. Neutrons do not carry a net charge, but their constituent quarks do. Protons and neutrons have a baryon number of +1, while antiprotons and antineutrons have a baryon number of –1. Similarly, electrons have a lepton number of +1, while that of positrons is –1. When a particle and its corresponding antiparticle collide, they are both converted into energy.
The French term for "made of or pertaining to antimatter" led to the initialism "C.T." and the science fiction term "seetee", as used in such novels as Seetee Ship.
Conceptual history
The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts" and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into.
The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898, in which he coined the term. He hypothesized antiatoms, as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity.
The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. Although Dirac had laid the groundwork for the existence of these “antielectrons” he initially failed to pick up on the implications contained within his own equation. He freely gave the credit for that insight to J. Robert Oppenheimer, whose seminal paper “On the Theory of Electrons and Protons” (Feb 14th 1930) drew on Dirac's equation and argued for the existence of a positively charged electron (a positron), which as a counterpart to the electron should have the same mass as the electron itself. This meant that it could not be, as Dirac had in fact suggested, a proton. Dirac further postulated the existence of antimatter in a 1931 paper which referred to the positron as an "anti-electron". These were discovered by Carl D. Anderson in 1932 and named positrons from "positive electron". Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc. A complete periodic table of antimatter was envisaged by Charles Janet in 1929.
The Feynman–Stueckelberg interpretation states that antimatter and antiparticles behave exactly like regular particles but travel backward in time. This concept is used in modern particle physics, in Feynman diagrams.
Notation
One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̄, respectively. The same rule applies if one were to address a particle by its constituent components. A proton is made up of quarks, so an antiproton must therefore be formed from antiquarks. Another convention is to distinguish particles by positive and negative electric charge. Thus, the electron and positron are denoted simply as e− and e+ respectively. To prevent confusion, however, the two conventions are never mixed.
Properties
There is no difference in the gravitational behavior of matter and antimatter. In other words, antimatter falls down when dropped, not up. This was confirmed using a thin, very cold gas of thousands of antihydrogen atoms confined in a vertical shaft surrounded by superconducting electromagnetic coils. These can create a magnetic bottle to keep the antimatter from coming into contact with matter and annihilating. The researchers then gradually weakened the magnetic fields and detected the antiatoms using two sensors as they escaped and annihilated. Most of the anti-atoms came out of the bottom opening, and only one-quarter out of the top.
There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties. This means a particle and its corresponding antiparticle must have identical masses and decay lifetimes (if unstable). It also implies that, for example, a star made up of antimatter (an "antistar") will shine just like an ordinary star. This idea was tested experimentally in 2016 by the ALPHA experiment, which measured the transition between the two lowest energy states of antihydrogen. The results, which are identical to that of hydrogen, confirmed the validity of quantum mechanics for antimatter.
Origin and asymmetry
Most things observable from the Earth seem to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable.
Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays striking Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (that is, the rest mass of an electron multiplied by c2).
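As a quick numeric cross-check of that 511 keV figure (a sketch using rounded values for the physical constants):

```python
# Rest-mass energy of the electron, expected to come out near 511 keV.
m_e = 9.109e-31   # electron mass, kg (rounded)
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt
print(m_e * c**2 / eV / 1e3)   # ~511 keV
```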
Observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant antimatter cloud surrounding the Galactic Center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the Galactic Center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains kinetic energy while falling into a stellar remnant.
Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. NASA is trying to determine if such galaxies exist by looking for X-ray and gamma ray signatures of annihilation events in colliding superclusters.
In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter.
Antimatter quantum interferometry was first demonstrated in 2018 at the Positron Laboratory (L-NESS) of Rafael Ferragut in Como, Italy, by a group led by Marco Giammarchi.
Natural production
Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research by the American Astronomical Society discovered antimatter (positrons) originating above thunderstorm clouds; positrons are produced in terrestrial gamma ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module.
Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). It is hypothesized that during the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, is called baryon asymmetry. The exact mechanism that produced this asymmetry during baryogenesis remains an unsolved problem. One of the necessary conditions for this asymmetry is the violation of CP symmetry, which has been experimentally observed in the weak interaction.
Recent observations indicate that black holes and neutron stars produce vast amounts of positron–electron plasma via their jets.
Observation in cosmic rays
Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. This antimatter cannot all have been created in the Big Bang, but is instead thought to have been produced by cyclic processes at high energies. For instance, electron-positron pairs may be formed in pulsars, as a magnetized neutron star's rotation shears electron-positron pairs from the star's surface. Therein the antimatter forms a wind that crashes upon the ejecta of the progenitor supernovae. This weathering takes place as "the cold, magnetized relativistic wind launched by the star hits the non-relativistically expanding ejecta, a shock wave system forms in the impact: the outer one propagates in the ejecta, while a reverse shock propagates back towards the star." The former ejection of matter in the outer shock wave and the latter production of antimatter in the reverse shock wave are steps in a space weather cycle.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV. These results have been suggested to be due to positron production in annihilation events of massive dark matter particles.
Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is an ongoing search for larger antimatter nuclei, such as antihelium nuclei (that is, anti-alpha particles), in cosmic rays. The detection of natural antihelium could imply the existence of large antimatter structures such as an antistar. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10−6 for the antihelium to helium flux ratio. AMS-02 revealed in December 2016 that it had discovered a few signals consistent with antihelium nuclei amidst several billion helium nuclei. The result remains to be verified, and the team is trying to rule out contamination.
Artificial production
Positrons
Positrons were reported in November 2008 to have been generated by Lawrence Livermore National Laboratory in large numbers. A laser drove electrons through a gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; newer simulations showed that short bursts of ultra-intense lasers and millimeter-thick gold are a far more effective source.
In 2023, the production of the first electron-positron beam-plasma was reported by a collaboration led by researchers at the University of Oxford working with the High-Radiation to Materials (HRMT) facility at CERN. The beam demonstrated the highest positron yield achieved so far in a laboratory setting. The experiment employed the 440 GeV proton beam from the Super Proton Synchrotron to irradiate a particle converter composed of carbon and tantalum, yielding electron–positron pairs via a particle shower process. The produced pair beams have a volume that fills multiple Debye spheres and are thus able to sustain collective plasma oscillations.
Antiprotons, antineutrons, and antinuclei
The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. An antiproton consists of two up antiquarks and one down antiquark (). The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.
In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN. At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.
Antihydrogen atoms
In 1995, CERN announced that it had successfully brought into existence nine hot antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic and were not well suited to study. To resolve this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP.
In 1999, CERN activated the Antiproton Decelerator, a device capable of decelerating antiprotons to much lower energies – still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen. The ATRAP project released similar results very shortly thereafter. The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap. The overall cooling process is workable, but highly inefficient; approximately 25 million antiprotons leave the Antiproton Decelerator and roughly 25,000 make it to the Penning–Malmberg trap, about 0.1% of the original amount.
The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons with very low energies. While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator. This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some combine with positrons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.
In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The ultimate goal of this endeavour is to test CPT symmetry through comparison of the atomic spectra of hydrogen and antihydrogen (see hydrogen spectral series).
Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields. Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second. This was the first time that neutral antimatter had been trapped.
On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before. ALPHA has used these trapped atoms to initiate research into the spectral properties of antihydrogen.
In 2016, a new antiproton decelerator and cooler called ELENA (Extra Low ENergy Antiproton decelerator) was built. It takes the antiprotons from the Antiproton Decelerator and cools them to 90 keV, which is "cold" enough to study; the machine works by further decelerating the particles within its ring. More than one hundred antiprotons can be captured per second, a huge improvement, but it would still take several thousand years to make a nanogram of antimatter.
The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, their facilities are capable of producing ten million antiprotons per minute. Assuming a 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately 6.02×10^23 atoms of antihydrogen). However, CERN only produces 1% of the anti-matter Fermilab does, and neither are designed to produce anti-matter. According to Gerald Jackson, using technology already in use today we are capable of producing and capturing 20 grams of anti-matter particles per year at a yearly cost of 670 million dollars per facility.
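A back-of-the-envelope sketch of that timescale, taking the stated rate of ten million antiprotons per minute and assuming one antiproton per antihydrogen atom:

```python
# Time to accumulate one mole (~1 g) of antihydrogen at the stated rate.
AVOGADRO = 6.022e23                 # atoms per mole
rate_per_minute = 10e6              # antiprotons per minute (from the text)
minutes_per_year = 60 * 24 * 365
years = AVOGADRO / (rate_per_minute * minutes_per_year)
print(f"{years:.1e} years")         # ~1e11, i.e. on the order of 100 billion years
```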
Antihelium
Antihelium-3 nuclei () were first observed in the 1970s in proton–nucleus collision experiments at the Institute for High Energy Physics by Y. Prockoshkin's group (Protvino near Moscow, USSR) and later created in nucleus–nucleus collision experiments. Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles) () from such collisions.
The Alpha Magnetic Spectrometer on the International Space Station has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3.
Preservation
Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.
In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes. The record for storing antiparticles is currently held by the TRAP experiment at CERN: antiprotons were kept in a Penning trap for 405 days. A proposal was made in 2018 to develop containment technology advanced enough to contain a billion anti-protons in a portable device to be driven to another lab for further experimentation.
Cost
Scientists claim that antimatter is the costliest material to make. In 2006, Gerald Smith estimated $250 million could produce 10 milligrams of positrons (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen. This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators) and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions). In comparison, the cost of the Manhattan Project, which produced the first atomic weapon, was estimated at $23 billion in 2007 dollars, adjusted for inflation.
Several studies funded by NASA Innovative Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately the belts of gas giants like Jupiter, ideally at a lower cost per gram.
Uses
Medical
Matter–antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Antiprotons have also been shown within laboratory experiments to have the potential to treat certain cancers, in a similar method currently used for ion (proton) therapy.
Fuel
Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter-catalyzed nuclear pulse propulsion or another antimatter rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft.
If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (c² ≈ 9×10^16 J/kg) is about 10 orders of magnitude greater than chemical energies, about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about 200 MeV per fission reaction), and about 2 orders of magnitude greater than the best possible results expected from fusion via the proton–proton chain. The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10^17 J (180 petajoules) of energy (by the mass–energy equivalence formula, E = mc²), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
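The figures in that paragraph follow directly from the mass–energy relation; a small sketch of the arithmetic:

```python
# Energy released by annihilating 1 kg of antimatter with 1 kg of matter.
c = 2.998e8                    # speed of light, m/s
E = 2.0 * c**2                 # joules: total mass 2 kg converted via E = mc^2
MEGATON_TNT = 4.184e15         # joules per megaton of TNT
print(f"{E:.2e} J  ~ {E / 1e15:.0f} PJ  ~ {E / MEGATON_TNT:.0f} Mt of TNT")
```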
Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron–positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a lifetime of 85 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a lifetime of 26 nanoseconds) and can be deflected magnetically to produce thrust.
Charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about 0.22 + 0.78 × 2/3 ≈ 74%).
Weapons
Antimatter has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible. Nonetheless, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.
See also
Antihypernuclei – Antimatter hypernucleus
References
Further reading
External links
Freeview Video 'Antimatter' by the Vega Science Trust and the BBC/OU
CERN Webcasts (RealPlayer required)
What is Antimatter? (from the Frequently Asked Questions at the Center for Antimatter–Matter Studies)
FAQ from CERN with information about antimatter aimed at the general reader, posted in response to antimatter's fictional portrayal in Angels & Demons
What is direct CP-violation?
Animated illustration of antihydrogen production at CERN from the Exploratorium.
"Mining for Neutrinos", costly experiment to study neutrinos & anti-neutrinos. New York Times science article, updated Sept. 2, 2024
Quantum field theory
Fictional power sources
Articles containing video clips | Antimatter | [
"Physics"
] | 6,478 | [
"Quantum field theory",
"Antimatter",
"Quantum mechanics",
"Matter"
] |
1,327 | https://en.wikipedia.org/wiki/Antiparticle | In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. The question of how the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter remains unanswered, and the explanations offered so far are not fully satisfactory.
Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle (pair production), which can occur in particle accelerators such as the Large Hadron Collider at CERN.
Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons, mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.
History
Experiment
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud-chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field direction due to their having the same magnitude of charge-to-mass ratio but with opposite charge and, therefore, opposite signed charge-to-mass ratios.
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
Dirac hole theory
Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper A Theory of Electrons and Protons. However, these "negative-energy electrons" turned out to be positrons, and not protons.
This picture implied an infinite negative charge for the universe, a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p+ → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.
Elementary antiparticles
Composite antiparticles
Particle–antiparticle annihilation
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γ + γ (the two-photon annihilation of an electron–positron pair) are an example. The single-photon annihilation of an electron–positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may fluctuate into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through such processes, which are a complicated example of mass renormalization.
Properties
Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation (C), parity (P) and time reversal (T).
C and P are linear, unitary operators; T is antilinear and antiunitary.
If |n, p, σ⟩ denotes the quantum state of a particle n with momentum p and spin J whose component in the z-direction is σ, then one has
CPT |n, p, σ⟩ ∝ |n̄, p, −σ⟩,
where n̄ denotes the charge conjugate state, that is, the antiparticle. In particular a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group which means the antiparticle has the same mass and the same spin.
If C, P and T can be defined separately on the particles and antiparticles, then
C |n, p, σ⟩ ∝ |n̄, p, σ⟩,
P |n, p, σ⟩ ∝ |n, −p, σ⟩,
T |n, p, σ⟩ ∝ |n, −p, −σ⟩,
where the proportionality sign indicates that there might be a phase on the right hand side.
As C anticommutes with the charges, CQ = −QC, particle and antiparticle have opposite electric charges q and −q.
Quantum field theory
This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory.
One may try to quantize an electron field without mixing the annihilation and creation operators by writing
where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and ak denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian
then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.
So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations
where k has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form
where the first sum is over positive energy states and the second over those of negative energy. The energy becomes
where E0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., ak|0⟩ = 0 and bk|0⟩ = 0. Then the energy of the vacuum is exactly E0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of ak and bk shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
Feynman–Stückelberg interpretation
By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stückelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, anti-particles are shown traveling backwards in time relative to normal matter, and vice versa. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Since this picture was first developed by Stückelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stückelberg interpretation of antiparticles to honor both scientists.
See also
List of particles
Antimatter
Gravitational interaction of antimatter
Parity, charge conjugation, and time reversal symmetry
CP violations
Quantum field theory
Baryogenesis, baryon asymmetry, and Leptogenesis
One-electron universe
Paul Dirac
Notes
References
External links
Antimatter at CERN
Subatomic particles
Quantum field theory
Antimatter
Particle physics | Antiparticle | [
"Physics"
] | 2,571 | [
"Quantum field theory",
"Antimatter",
"Quantum mechanics",
"Subatomic particles",
"Particle physics",
"Nuclear physics",
"Atoms",
"Matter"
] |
1,335 | https://en.wikipedia.org/wiki/Associative%20property | In mathematics, the associative property is a property of some binary operations that means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.
Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is (after rewriting the expression with parentheses and in infix notation if necessary), rearranging the parentheses in such an expression will not change its value. Consider the following equations:
(2 + 3) + 4 = 2 + (3 + 4) = 9
2 × (3 × 4) = (2 × 3) × 4 = 24
Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that "addition and multiplication of real numbers are associative operations".
Associativity is not the same as commutativity, which addresses whether the order of two operands affects the result. For example, the order does not matter in the multiplication of real numbers, that is, a × b = b × a, so we say that the multiplication of real numbers is a commutative operation. However, operations such as function composition and matrix multiplication are associative, but not (generally) commutative.
Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative.
However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. In contrast to the theoretical properties of real numbers, the addition of floating point numbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error.
Definition
Formally, a binary operation ∗ on a set S is called associative if it satisfies the associative law:
(x ∗ y) ∗ z = x ∗ (y ∗ z), for all x, y, z in S.
Here, ∗ is used to replace the symbol of the operation, which may be any symbol, and even the absence of symbol (juxtaposition) as for multiplication:
(xy)z = x(yz), for all x, y, z in S.
The associative law can also be expressed in functional notation thus:
f(f(x, y), z) = f(x, f(y, z)).
Generalized associative law
If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law.
The number of possible bracketings is just the Catalan number, Cn, for n operations on n+1 values. For instance, a product of 3 operations on 4 elements may be written (ignoring permutations of the arguments) in C3 = 5 possible ways:
((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), a(b(cd))
If the product operation is associative, the generalized associative law says that all these expressions will yield the same result. So unless the expression with omitted parentheses already has a different meaning (see below), the parentheses can be considered unnecessary and "the" product can be written unambiguously as
abcd.
As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation.
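That growth is governed by the Catalan numbers mentioned above; a short sketch of how quickly they grow:

```python
from math import comb

def catalan(n: int) -> int:
    """Number of ways to fully parenthesize n applications of a binary operation."""
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(1, 8)])   # [1, 2, 5, 14, 42, 132, 429]
```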
An example where this does not work is the logical biconditional ↔. It is associative; thus, A ↔ (B ↔ C) is equivalent to (A ↔ B) ↔ C, but A ↔ B ↔ C most commonly means (A ↔ B) and (B ↔ C), which is not equivalent.
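A brute-force check of this behaviour, treating the biconditional as boolean equality and reading the chained form as the conjunction of pairwise biconditionals (an assumption made only for this sketch):

```python
from itertools import product

for a, b, c in product([False, True], repeat=3):
    left_grouping = (a == b) == c
    right_grouping = a == (b == c)
    chained_reading = (a == b) and (b == c)
    assert left_grouping == right_grouping     # the connective is associative
    if left_grouping != chained_reading:       # but differs from the usual chained reading
        print(a, b, c, left_grouping, chained_reading)
```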
Examples
Some examples of associative operations include the following.
Propositional logic
Rule of replacement
In standard truth-functional propositional logic, association, or associativity, refers to two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules (using logical connectives notation) are:
(P ∨ (Q ∨ R)) ⇔ ((P ∨ Q) ∨ R) and (P ∧ (Q ∧ R)) ⇔ ((P ∧ Q) ∧ R),
where "⇔" is a metalogical symbol representing "can be replaced in a proof with".
Truth functional connectives
Associativity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following (and their converses, since ↔ is commutative) are truth-functional tautologies.
Associativity of disjunction: ((P ∨ Q) ∨ R) ↔ (P ∨ (Q ∨ R))
Associativity of conjunction: ((P ∧ Q) ∧ R) ↔ (P ∧ (Q ∧ R))
Associativity of equivalence: ((P ↔ Q) ↔ R) ↔ (P ↔ (Q ↔ R))
Joint denial is an example of a truth functional connective that is not associative.
Non-associative operation
A binary operation ∗ on a set S that does not satisfy the associative law is called non-associative. Symbolically,
(x ∗ y) ∗ z ≠ x ∗ (y ∗ z) for some x, y, z in S.
For such an operation the order of evaluation does matter. For example:
Subtraction: (5 − 3) − 2 = 0, but 5 − (3 − 2) = 4
Division: (12 ÷ 6) ÷ 2 = 1, but 12 ÷ (6 ÷ 2) = 4
Exponentiation: (2^3)^2 = 64, but 2^(3^2) = 512
Vector cross product: (i × i) × j = 0, but i × (i × j) = i × k = −j
Also, although addition is associative for finite sums, it is not associative inside infinite sums (series). For example,
(1 − 1) + (1 − 1) + (1 − 1) + ⋯ = 0,
whereas
1 + (−1 + 1) + (−1 + 1) + ⋯ = 1.
Some non-associative operations are fundamental in mathematics. They appear often as the multiplication in structures called non-associative algebras, which have also an addition and a scalar multiplication. Examples are the octonions and Lie algebras. In Lie algebras, the multiplication satisfies Jacobi identity instead of the associative law; this allows abstracting the algebraic nature of infinitesimal transformations.
Other examples are quasigroup, quasifield, non-associative ring, and commutative non-associative magmas.
Nonassociativity of floating point calculation
In mathematics, addition and multiplication of real numbers are associative. By contrast, in computer science, addition and multiplication of floating point numbers are not associative, as different rounding errors may be introduced when dissimilar-sized values are joined in a different order.
To illustrate this, consider a floating point representation with a 4-bit significand: (1.000×2^0 + 1.000×2^0) + 1.000×2^4 = 1.001×2^4 (the exact value 18), whereas 1.000×2^0 + (1.000×2^0 + 1.000×2^4) = 1.000×2^4 (the intermediate sum 17 rounds to 16 under round-to-nearest-even, and adding the remaining 1 rounds back to 16).
Even though most computers compute with 24 or 53 bits of significand, this is still an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing.
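The effect is easy to reproduce in ordinary double-precision arithmetic; a minimal sketch:

```python
# Floating point addition is not associative: the grouping changes the rounding.
a, b, c = 1e20, -1e20, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 -- the 1.0 is absorbed when added to -1e20 first
```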
Notation for non-associative operations
In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression (unless the notation specifies the order in another way). However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses.
A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,
x ∗ y ∗ z = (x ∗ y) ∗ z,
while a right-associative operation is conventionally evaluated from right to left:
x ∗ y ∗ z = x ∗ (y ∗ z).
Both left-associative and right-associative operations occur. Left-associative operations include the following:
Subtraction and division of real numbers
Function application
This notation can be motivated by the currying isomorphism, which enables partial application.
Right-associative operations include the following:
Exponentiation of real numbers in superscript notation
Exponentiation is commonly used with brackets or right-associatively because a repeated left-associative exponentiation operation is of little use. Repeated powers would mostly be rewritten with multiplication:
(a^b)^c = a^(bc)
Formatted correctly, the superscript inherently behaves as a set of parentheses; e.g. in the expression 2^(x+3) the addition is performed before the exponentiation despite there being no explicit parentheses wrapped around it. Thus given an expression such as x^(y^z), the full exponent y^z of the base x is evaluated first. However, in some contexts, especially in handwriting, the difference between x^(y^z), (x^y)^z and x^(yz) can be hard to see. In such a case, right-associativity is usually implied (the convention is illustrated in the short sketch after this list).
Function definition
Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism.
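For example, Python's ** operator follows the right-associative convention for exponentiation, while its subtraction is evaluated left to right; a quick check:

```python
print(2 ** 3 ** 2)      # 512, parsed as 2 ** (3 ** 2)
print((2 ** 3) ** 2)    # 64
print(5 - 3 - 2)        # 0, parsed as (5 - 3) - 2
```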
Non-associative operations for which no conventional evaluation order is defined include the following.
Exponentiation of real numbers in infix notation
Knuth's up-arrow operators
Taking the cross product of three vectors
Taking the pairwise average of real numbers
Taking the relative complement of sets: (A \ B) \ C is not, in general, the same as A \ (B \ C). (Compare material nonimplication in logic.)
History
William Rowan Hamilton seems to have coined the term "associative property" around 1844, a time when he was contemplating the non-associative algebra of the octonions he had learned about from John T. Graves.
See also
Light's associativity test
Telescoping series, the use of addition associativity for cancelling terms in an infinite series
A semigroup is a set with an associative binary operation.
Commutativity and distributivity are two other frequently discussed properties of binary operations.
Power associativity, alternativity, flexibility and N-ary associativity are weak forms of associativity.
Moufang identities also provide a weak form of associativity.
References
Properties of binary operations
Elementary algebra
Functional analysis
Rules of inference | Associative property | [
"Mathematics"
] | 1,887 | [
"Functions and mappings",
"Functional analysis",
"Proof theory",
"Mathematical objects",
"Rules of inference",
"Elementary algebra",
"Elementary mathematics",
"Mathematical relations",
"Algebra"
] |
1,349 | https://en.wikipedia.org/wiki/Atanasoff%E2%80%93Berry%20computer | The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer. The device was limited by the technology of the day. The ABC's priority is debated among historians of computer technology, because it was neither programmable, nor Turing-complete. Conventionally, the ABC would be considered the first electronic ALU (arithmetic logic unit) which is integrated into every modern processor's design.
Its unique contribution was to make computing faster by being the first to use vacuum tubes to do arithmetic calculations. Prior to this, slower electro-mechanical methods were used by Konrad Zuse's Z1 computer and the simultaneously developed Harvard Mark I. The first electronic, programmable, digital machine, the Colossus computer of 1943–1945, used tube-based technology similar to that of the ABC.
Overview
Conceived in 1937, the machine was built by Iowa State College mathematics and physics professor John Vincent Atanasoff with the help of graduate student Clifford Berry. It was designed only to solve systems of linear equations and was successfully tested in 1942. However, its intermediate result storage mechanism, a paper card writer/reader, was not perfected, and when John Vincent Atanasoff left Iowa State College for World War II assignments, work on the machine was discontinued. The ABC pioneered important elements of modern computing, including binary arithmetic and electronic switching elements, but its special-purpose nature and lack of a changeable, stored program distinguish it from modern computers. The computer was designated an IEEE Milestone in 1990.
Atanasoff and Berry's computer work was not widely known until it was rediscovered in the 1960s, amid patent disputes over the first instance of an electronic computer. At that time ENIAC, which had been created by John Mauchly and J. Presper Eckert, was considered to be the first computer in the modern sense, but in 1973 a U.S. District Court invalidated the ENIAC patent and concluded that the ENIAC inventors had derived the subject matter of the electronic digital computer from Atanasoff. When the secrecy surrounding the British World War II development of the Colossus computers, which pre-dated ENIAC, was lifted in the mid-1970s and Colossus was described at a conference in Los Alamos, New Mexico, in June 1976, John Mauchly and Konrad Zuse were reported to have been astonished.
Design and construction
According to Atanasoff's account, several key principles of the Atanasoff–Berry computer were conceived in a sudden insight after a long nighttime drive to Rock Island, Illinois, during the winter of 1937–38. The ABC innovations included electronic computation, binary arithmetic, parallel processing, regenerative capacitor memory, and a separation of memory and computing functions. The mechanical and logic design was worked out by Atanasoff over the next year. A grant application to build a proof of concept prototype was submitted in March 1939 to the Agronomy department, which was also interested in speeding up computation for economic and research analysis. $5,000 of further funding to complete the machine came from the nonprofit Research Corporation of New York City.
The ABC was built by Atanasoff and Berry in the basement of the physics building at Iowa State College from 1939 to 1942. The initial funds were released in September, and the 11-tube prototype was first demonstrated in October 1939. A December demonstration prompted a grant for construction of the full-scale machine. The ABC was built and tested over the next two years. A January 15, 1941, story in the Des Moines Register announced the ABC as "an electrical computing machine" with more than 300 vacuum tubes that would "compute complicated algebraic equations" (but gave no precise technical description of the computer). The system weighed more than 700 pounds (320 kg). It contained approximately 1 mile (1.6 km) of wire, 280 dual-triode vacuum tubes, 31 thyratrons, and was about the size of a desk.
It was not programmable, which distinguishes it from more general machines of the same era, such as Konrad Zuse's 1941 Z3 (or earlier iterations) and the Colossus computers of 1943–1945. Nor did it implement the stored-program architecture, first implemented in the Manchester Baby of 1948, required for fully general-purpose practical computing machines.
The machine was, however, the first to implement:
Using vacuum tubes, rather than wheels, ratchets, mechanical switches, or telephone relays, allowing for greater speed than previous computers
Using capacitors for memory, rather than mechanical components, allowing for greater speed and density
The memory of the Atanasoff–Berry computer was a system called regenerative capacitor memory, which consisted of a pair of drums, each containing 1600 capacitors that rotated on a common shaft once per second. The capacitors on each drum were organized into 32 "bands" of 50 (30 active bands and two spares in case a capacitor failed), giving the machine a speed of 30 additions/subtractions per second. Data was represented as 50-bit binary fixed-point numbers. The electronics of the memory and arithmetic units could store and operate on 60 such numbers at a time (3000 bits).
The alternating current power-line frequency of 60 Hz was the primary clock rate for the lowest-level operations.
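A back-of-the-envelope check of the capacity and speed figures above can be written in a few lines. This is an illustrative Python sketch, not a model of the actual circuitry; it assumes, per the description above, that the 30 active bands on each drum were processed in parallel during each one-second rotation.

```python
# Rough check of the ABC memory and throughput figures quoted above.
drums = 2                  # regenerative capacitor memory drums on a common shaft
active_bands = 30          # bands per drum in use (the 2 spares not counted)
bits_per_band = 50         # one 50-bit fixed-point number per band
rotations_per_second = 1   # the drum shaft turned once per second

numbers_stored = drums * active_bands                    # 60 numbers held at a time
bits_stored = numbers_stored * bits_per_band             # 3000 bits
adds_per_second = active_bands * rotations_per_second    # 30 additions/subtractions per second
print(numbers_stored, bits_stored, adds_per_second)      # 60 3000 30
```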
The arithmetic logic functions were fully electronic, implemented with vacuum tubes. The family of logic gates ranged from inverters to two- and three-input gates. The input and output levels and operating voltages were compatible between the different gates. Each gate consisted of one inverting vacuum-tube amplifier, preceded by a resistor divider input network that defined the logical function. The control logic functions, which only needed to operate once per drum rotation and therefore did not require electronic speed, were electromechanical, implemented with relays.
The ALU operated on only one bit of each number at a time; it kept the carry/borrow bit in a capacitor for use in the next AC cycle.
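The one-bit-at-a-time operation described above can be illustrated with a short model. This is a hypothetical Python sketch of a bit-serial adder, not a description of the ABC's add-subtract circuits: the two operands arrive least-significant bit first, and a single carry bit is held over between cycles, much as the machine held its carry in a capacitor until the next AC cycle.

```python
def serial_add(a_bits, b_bits):
    """Bit-serial addition: numbers arrive least-significant bit first,
    one bit per cycle; a single carry bit is kept between cycles."""
    carry = 0
    result = []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        result.append(total & 1)  # sum bit produced this cycle
        carry = total >> 1        # carry stored for the next cycle
    return result, carry

# 5 (101) plus 3 (011), least-significant bit first:
bits, carry = serial_add([1, 0, 1], [1, 1, 0])
print(bits, carry)  # [0, 0, 0] 1  -> binary 1000 = 8
```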
Although the Atanasoff–Berry computer was an important step up from earlier calculating machines, it was not able to run through an entire problem automatically. An operator was needed to operate the control switches to set up its functions, much like the electro-mechanical calculators and unit record equipment of the time. Selection of the operation to be performed, reading, writing, converting between binary and decimal, or reducing a set of equations was made by front-panel switches and, in some cases, jumpers.
There were two forms of input and output: primary user input and output and an intermediate results output and input. The intermediate results storage allowed operation on problems too large to be handled entirely within the electronic memory. (The largest problem that could be solved without the use of the intermediate output and input was two simultaneous equations, a trivial problem.)
Intermediate results were binary, written onto paper sheets by electrostatically modifying the resistance at 1500 locations to represent 30 of the 50-bit numbers (one equation). Each sheet could be written or read in one second. The reliability of the system was limited to about 1 error in 100,000 calculations by these units, primarily attributed to lack of control of the sheets' material characteristics. In retrospect, a solution could have been to add a parity bit to each number as written. This problem was not solved by the time Atanasoff left the university for war-related work.
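The parity-bit remedy suggested above is straightforward to sketch. A minimal Python illustration assuming even parity per stored number; a single misread bit would then be detected (though not corrected) on read-back:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if the word still has an even number of 1s (no single-bit error)."""
    return sum(bits_with_parity) % 2 == 0

word = [1, 0, 1, 1, 0]          # stand-in for a 50-bit number
stored = add_parity(word)
print(parity_ok(stored))        # True
stored[2] ^= 1                  # simulate one misread bit on the card
print(parity_ok(stored))        # False: the error is detected
```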
Primary user input was decimal, via standard IBM 80-column punched cards, and output was decimal, via a front-panel display.
Function
The ABC was designed for a specific purpose: the solution of systems of simultaneous linear equations. It could handle systems with up to 29 equations, a difficult problem for the time. Problems of this scale were becoming common in physics, the department in which John Atanasoff worked. The machine could be fed two linear equations with up to 29 variables and a constant term and eliminate one of the variables. This process would be repeated manually for each of the equations, resulting in a system of equations with one fewer variable. Then the whole process would be repeated to eliminate another variable.
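The elementary step the machine automated, combining two equations so that one variable drops out, can be sketched as follows. This is an illustrative Python version in floating point; the ABC itself worked on 50-bit binary fixed-point coefficients, and the operator directed the repetition over equation pairs.

```python
def eliminate(eq_a, eq_b, k):
    """Combine two linear equations so that variable k drops out.
    Each equation is a list of coefficients followed by the constant term.
    Assumes eq_a[k] is non-zero."""
    factor = eq_b[k] / eq_a[k]
    return [b - factor * a for a, b in zip(eq_a, eq_b)]

# 2x + 3y = 8 and 4x - y = 2; eliminate x (variable 0):
print(eliminate([2, 3, 8], [4, -1, 2], 0))   # [0.0, -7.0, -14.0], i.e. -7y = -14, so y = 2
```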
George W. Snedecor, the head of Iowa State's Statistics Department, was very likely the first user of an electronic digital computer to solve real-world mathematics problems. He submitted many of these problems to Atanasoff.
Patent dispute
On June 26, 1947, J. Presper Eckert and John Mauchly were the first to file for a patent on a digital computing device (ENIAC), much to the surprise of Atanasoff. The ABC had been examined by John Mauchly in June 1941, and Isaac Auerbach, a former student of Mauchly's, alleged that it influenced his later work on ENIAC, although Mauchly denied this. The ENIAC patent was not issued until 1964, and in 1967 Honeywell sued Sperry Rand in an attempt to break the ENIAC patents, arguing that the ABC constituted prior art. The United States District Court for the District of Minnesota released its judgment on October 19, 1973, finding in Honeywell v. Sperry Rand that the ENIAC patent was a derivative of John Atanasoff's invention.
Campbell-Kelly and Aspray conclude:
The case was legally resolved on October 19, 1973, when U.S. District Judge Earl R. Larson held the ENIAC patent invalid, ruling that the ENIAC derived many basic ideas from the Atanasoff–Berry computer. Judge Larson explicitly stated:
Herman Goldstine, one of the original developers of ENIAC, wrote:
Replica
The original ABC was eventually dismantled in 1948, when the university converted the basement to classrooms, and all of its pieces except for one memory drum were discarded.
In 1997, a team of researchers led by Delwyn Bluhm and John Gustafson from Ames Laboratory (located on the Iowa State University campus) finished building a working replica of the Atanasoff–Berry computer at a cost of $350,000. The replica ABC was on display in the first floor lobby of the Durham Center for Computation and Communication at Iowa State University and was subsequently exhibited at the Computer History Museum.
See also
History of computing hardware
List of vacuum-tube computers
Mikhail Kravchuk
References
Bibliography
External links
The Birth of the ABC
Reconstruction of the ABC, 1994-1997
John Gustafson, Reconstruction of the Atanasoff-Berry Computer
The ENIAC patent trial
Honeywell v. Sperry Rand Records, 1846-1973, Charles Babbage Institute, University of Minnesota.
The Atanasoff-Berry Computer In Operation (YouTube)
1940s computers
One-of-a-kind computers
Vacuum tube computers
Computer-related introductions in 1942
Early computers
Iowa State University
Serial computers
Paper data storage | Atanasoff–Berry computer | [
"Technology"
] | 2,194 | [
"Serial computers",
"Computers"
] |
1,358 | https://en.wikipedia.org/wiki/Anchor | An anchor is a device, normally made of metal, used to secure a vessel to the bed of a body of water to prevent the craft from drifting due to wind or current. The word derives from Latin ancora, which itself comes from the Greek ἄγκυρα (ankȳra).
Anchors can either be temporary or permanent. Permanent anchors are used in the creation of a mooring, and are rarely moved; a specialist service is normally needed to move or maintain them. Vessels carry one or more temporary anchors, which may be of different designs and weights.
A sea anchor is a drag device, not in contact with the seabed, used to minimise drift of a vessel relative to the water. A drogue is a drag device used to slow or help steer a vessel running before a storm in a following or overtaking sea, or when crossing a bar in a breaking sea.
Anchoring
Anchors achieve holding power either by "hooking" into the seabed, or weight, or a combination of the two. The weight of the anchor chain can be more than that of the anchor and is critical to proper holding. Permanent moorings use large masses (commonly a block or slab of concrete) resting on the seabed. Semi-permanent mooring anchors (such as mushroom anchors) and large ship's anchors derive a significant portion of their holding power from their weight, while also hooking or embedding in the bottom. Modern anchors for smaller vessels have metal flukes that hook on to rocks on the bottom or bury themselves in soft seabed.
The vessel is attached to the anchor by the rode (also called a cable or a warp). It can be made of rope, chain or a combination of rope and chain. The ratio of the length of rode to the water depth is known as the scope.
Holding ground is the area of sea floor that holds an anchor, and thus the attached ship or boat. Different types of anchor are designed to hold in different types of holding ground. Some bottom materials hold better than others; for instance, hard sand holds well, shell holds poorly. Holding ground may be fouled with obstacles. An anchorage location may be chosen for its holding ground. In poor holding ground, only the weight of an anchor and chain matters; in good holding ground, it is able to dig in, and the holding power can be significantly higher.
The basic anchoring procedure consists of determining the location, dropping the anchor, laying out the scope, setting the hook, and assessing where the vessel ends up. The ship seeks a location that is sufficiently protected, has suitable holding ground, enough depth at low tide and enough room for the boat to swing.
The location to drop the anchor should be approached from down wind or down current, whichever is stronger. As the chosen spot is approached, the vessel should be stopped or even drifting back. The anchor should initially be lowered quickly but under control until it is on the bottom (see anchor windlass). The vessel should continue to drift back, and the cable should be veered out under control (slowly) so it is relatively straight.
Once the desired scope is laid out, the vessel should be gently forced astern, usually using the auxiliary motor but possibly by backing a sail. A hand on the anchor line may telegraph a series of jerks and jolts, indicating the anchor is dragging, or a smooth tension indicative of digging in. As the anchor begins to dig in and resist backward force, the engine may be throttled up to get a thorough set. If the anchor continues to drag, or sets after having dragged too far, it should be retrieved and moved back to the desired position (or another location chosen.)
Using an anchor weight, kellet or sentinel
Lowering a concentrated, heavy weight down the anchor line – rope or chain – directly in front of the bow to the seabed makes the rode behave like a heavy chain rode and lowers the angle of pull on the anchor. If the weight is suspended off the seabed it acts as a spring or shock absorber to dampen the sudden actions that are normally transmitted to the anchor and can cause it to dislodge and drag. In light conditions, a kellet reduces the swing of the vessel considerably. In heavier conditions these effects disappear as the rode becomes straightened and the weight ineffective. In the UK this weight is known as an "anchor chum weight" or "angel".
Forked moor
Using two anchors set approximately 45° apart, or wider angles up to 90°, from the bow is a strong mooring for facing into strong winds. To set anchors in this way, first one anchor is set in the normal fashion. Then, taking in on the first cable as the boat is motored into the wind and letting slack while drifting back, a second anchor is set approximately a half-scope away from the first on a line perpendicular to the wind. After this second anchor is set, the scope on the first is taken up until the vessel is lying between the two anchors and the load is taken equally on each cable.
This moor also to some degree limits the range of a vessel's swing to a narrower oval. Care should be taken that other vessels do not swing down on the boat due to the limited swing range.
Bow and stern
(Not to be mistaken for the Bahamian moor, below.) In the bow and stern technique, one anchor is set off the bow and another off the stern, which can severely limit a vessel's swing range and also align it to steady wind, current or wave conditions. One method of accomplishing this moor is to set a bow anchor normally, then drop back to the limit of the bow cable (or to double the desired scope, e.g. 8:1 if the eventual scope should be 4:1, 10:1 if the eventual scope should be 5:1, etc.) to lower a stern anchor. By taking up on the bow cable the stern anchor can be set. After both anchors are set, tension is taken up on both cables to limit the swing or to align the vessel.
Bahamian moor
Similar to the above, a Bahamian moor is used to sharply limit the swing range of a vessel, but allows it to swing to a current. One of the primary characteristics of this technique is the use of a swivel as follows: the first anchor is set normally, and the vessel drops back to the limit of anchor cable. A second anchor is attached to the end of the anchor cable, and is dropped and set. A swivel is attached to the middle of the anchor cable, and the vessel connected to that.
The vessel now swings in the middle of two anchors, which is acceptable in strong reversing currents, but a wind perpendicular to the current may break out the anchors, as they are not aligned for this load.
Backing an anchor
Also known as tandem anchoring, in this technique two anchors are deployed in line with each other, on the same rode. With the foremost anchor reducing the load on the aft-most, this technique can develop great holding power and may be appropriate in "ultimate storm" circumstances. It does not limit swinging range, and might not be suitable in some circumstances. There are complications, and the technique requires careful preparation and a level of skill and experience above that required for a single anchor.
Kedging
Kedging or warping is a technique for moving or turning a ship by using a relatively light anchor.
In yachts, a kedge anchor is an anchor carried in addition to the main, or bower, anchor, and usually stowed aft. Every yacht should carry at least two anchors – the main or bower anchor and a second lighter kedge anchor. It is used occasionally when it is necessary to limit the turning circle as the yacht swings when it is anchored, such as in a narrow river or a deep pool in an otherwise shallow area. Kedge anchors are sometimes used to recover vessels that have run aground.
For ships, a kedge may be dropped while a ship is underway, or carried out in a suitable direction by a tender or ship's boat to enable the ship to be winched off if aground or swung into a particular heading, or even to be held steady against a tidal or other stream.
Historically, it was of particular relevance to sailing warships that used them to outmaneuver opponents when the wind had dropped but might be used by any vessel in confined, shoal water to place it in a more desirable position, provided she had enough manpower.
Club hauling
Club hauling is an archaic technique. When a vessel is in a narrow channel or on a lee shore so that there is no room to tack the vessel in a conventional manner, an anchor attached to the lee quarter may be dropped from the lee bow. This is deployed when the vessel is head to wind and has lost headway. As the vessel gathers sternway the strain on the cable pivots the vessel around what is now the weather quarter turning the vessel onto the other tack. The anchor is then normally cut away (the ship's momentum prevents recovery without aborting the maneuver).
Multiple anchor patterns
When it is necessary to moor a ship or floating platform with precise positioning and alignment, such as when drilling the seabed, for some types of salvage work, and for some types of diving operation, several anchors are set in a pattern which allows the vessel to be positioned by shortening and lengthening the scope of the anchors, and adjusting the tension on the rodes. The anchors are usually laid in prearranged positions by an anchor tender, and the moored vessel uses its own winches to adjust position and tension.
Similar arrangements are used for some types of single buoy moorings, like the catenary anchor leg mooring (CALM) used for loading and unloading liquid cargoes.
Weighing anchor
Since all anchors that embed themselves in the bottom require the strain to be along the seabed, anchors can be broken out of the bottom by shortening the rope until the vessel is directly above the anchor; at this point the anchor chain is "up and down", in naval parlance. If necessary, motoring slowly around the location of the anchor also helps dislodge it. Anchors are sometimes fitted with a trip line attached to the crown, by which they can be unhooked from underwater hazards.
The term aweigh describes an anchor when it is hanging on the rope and not resting on the bottom. This is linked to the term to weigh anchor, meaning to lift the anchor from the sea bed, allowing the ship or boat to move. An anchor is described as aweigh when it has been broken out of the bottom and is being hauled up to be stowed. Aweigh should not be confused with under way, which describes a vessel that is not moored to a dock or anchored, whether or not the vessel is moving through the water. Aweigh is also often confused with away, which is incorrect.
History
Evolution of the anchor
The earliest anchors were probably rocks, and many rock anchors have been found dating from at least the Bronze Age. Pre-European Māori waka (canoes) used one or more hollowed stones, tied with flax ropes, as anchors. Many modern moorings still rely on a large rock as the primary element of their design. However, using pure weight to resist the forces of a storm works well only as a permanent mooring; a large enough rock would be nearly impossible to move to a new location.
The ancient Greeks used baskets of stones, large sacks filled with sand, and wooden logs filled with lead. According to Apollonius Rhodius and Stephen of Byzantium, anchors were formed of stone, and Athenaeus states that they were also sometimes made of wood. Such anchors held the vessel merely by their weight and by their friction along the bottom.
Fluked anchors
Iron was afterwards introduced for the construction of anchors, and an improvement was made by forming them with teeth, or "flukes", to fasten themselves into the bottom. This is the iconic anchor shape most familiar to non-sailors.
This form has been used since antiquity. The Roman Nemi ships of the 1st century AD used this form. The Viking Ladby ship (probably 10th century) used a fluked anchor of this type, made of iron, which would have had a wooden stock mounted perpendicular to the shank and flukes to make the flukes contact the bottom at a suitable angle to hook or penetrate.
Admiralty anchor
The Admiralty Pattern anchor, or simply "Admiralty", also known as a "Fisherman", consists of a central shank with a ring or shackle for attaching the rode (the rope, chain, or cable connecting the ship and the anchor). At the other end of the shank there are two arms, carrying the flukes, while the stock is mounted to the shackle end, at ninety degrees to the arms. When the anchor lands on the bottom, it generally falls over with the arms parallel to the seabed. As a strain comes onto the rope, the stock digs into the bottom, canting the anchor until one of the flukes catches and digs into the bottom.
The Admiralty Anchor is an entirely independent reinvention of a classical design, as seen in one of the Nemi ship anchors. This basic design remained unchanged for centuries, with the most significant changes being to the overall proportions, and a move from stocks made of wood to iron stocks in the late 1830s and early 1840s.
Since one fluke always protrudes up from the set anchor, there is a great tendency of the rode to foul the anchor as the vessel swings due to wind or current shifts. When this happens, the anchor may be pulled out of the bottom, and in some cases may need to be hauled up to be re-set. In the mid-19th century, numerous modifications were attempted to alleviate these problems, as well as improve holding power, including one-armed mooring anchors. The most successful of these patent anchors, the Trotman Anchor, introduced a pivot at the centre of the crown where the arms join the shank, allowing the "idle" upper arm to fold against the shank. When deployed, the lower arm may fold against the shank, tilting the tip of the fluke upwards; each fluke therefore has a tripping palm at its base that hooks the bottom as the folded anchor drags along the seabed, unfolding the downward-oriented arm until the tip of the fluke can engage the bottom.
Handling and storage of these anchors requires special equipment and procedures. Once the anchor is hauled up to the hawsepipe, the ring end is hoisted up to the end of a timber projecting from the bow known as the cathead. The crown of the anchor is then hauled up with a heavy tackle until one fluke can be hooked over the rail. This is known as "catting and fishing" the anchor. Before dropping the anchor, the fishing process is reversed, and the anchor is dropped from the end of the cathead.
Stockless anchor
The stockless anchor, patented in England in 1821, represented the first significant departure in anchor design in centuries. Although their holding-power-to-weight ratio is significantly lower than admiralty pattern anchors, their ease of handling and stowage aboard large ships led to almost universal adoption. In contrast to the elaborate stowage procedures for earlier anchors, stockless anchors are simply hauled up until they rest with the shank inside the hawsepipes, and the flukes against the hull (or inside a recess in the hull called the anchor box).
While there are numerous variations, stockless anchors consist of a set of heavy flukes connected by a pivot or ball and socket joint to a shank. Cast into the crown of the anchor is a set of tripping palms, projections that drag on the bottom, forcing the main flukes to dig in.
Small boat anchors
Until the mid-20th century, anchors for smaller vessels were either scaled-down versions of admiralty anchors, or simple grapnels. As new designs with greater holding-power-to-weight ratios were sought, a great variety of anchor designs have emerged. Many of these designs are still under patent, and other types are best known by their original trademarked names.
Grapnel anchor / drag
A traditional design, the grapnel is merely a shank (no stock) with four or more tines, also known as a drag. It has the benefit that, no matter how it reaches the bottom, one or more tines are aimed to set. In coral or rock it is often able to set quickly by hooking into the structure, but may be more difficult to retrieve. A grapnel is often quite light, and may have additional uses as a tool to recover gear lost overboard. Its weight also makes it relatively easy to move and carry; however, its shape is generally not compact and it may be awkward to stow unless a collapsing model is used.
Grapnels rarely have enough fluke area to develop much hold in sand, clay, or mud. It is not unknown for the anchor to foul on its own rode, or to foul the tines with refuse from the bottom, preventing it from digging in. On the other hand, it is quite possible for this anchor to find such a good hook that, without a trip line from the crown, it is impossible to retrieve.
Herreshoff anchor
Designed by yacht designer L. Francis Herreshoff, this is essentially the same pattern as an admiralty anchor, albeit with small diamond-shaped flukes or palms. The novelty of the design lay in the means by which it could be broken down into three pieces for stowage. In use, it still presents all the issues of the admiralty pattern anchor.
Northill anchor
Originally designed as a lightweight anchor for seaplanes, this design consists of two plough-like blades mounted to a shank, with a folding stock crossing through the crown of the anchor.
CQR plough anchor
Many manufacturers produce a plough-type anchor, so-named after its resemblance to an agricultural plough. All such anchors are copied from the original CQR (Coastal Quick Release, or Clyde Quick Release, later rebranded as 'secure' by Lewmar), a 1933 design patented in the UK by mathematician Geoffrey Ingram Taylor.
Plough anchors stow conveniently in a roller at the bow, and have been popular with cruising sailors and private boaters. Ploughs can be moderately good in all types of seafloor, though not exceptional in any. Contrary to popular belief, the CQR's hinged shank is not to allow the anchor to turn with direction changes rather than breaking out, but actually to prevent the shank's weight from disrupting the fluke's orientation while setting. The hinge can wear out and may trap a sailor's fingers. Some later plough anchors have a rigid shank, such as the Lewmar's "Delta".
A plough anchor has a fundamental flaw: like its namesake, the agricultural plough, it digs in but then tends to break out back to the surface. Plough anchors sometimes have difficulty setting at all, and instead skip across the seafloor. By contrast, modern efficient anchors tend to be "scoop" types that dig ever deeper.
Delta anchor
The Delta anchor was derived from the CQR. It was patented by Philip McCarron, James Stewart, and Gordon Lyall of British marine manufacturer Simpson-Lawrence Ltd in 1992. It was designed as an advance over the anchors used for floating systems such as oil rigs. It retains the weighted tip of the CQR but has a much higher fluke area to weight ratio than its predecessor. The designers also eliminated the sometimes troublesome hinge. It is a plough anchor with a rigid, arched shank. It is described as self-launching because it can be dropped from a bow roller simply by paying out the rode, without manual assistance. This is an oft copied design with the European Brake and Australian Sarca Excel being two of the more notable ones. Although it is a plough type anchor, it sets and holds reasonably well in hard bottoms.
Danforth anchor
American Richard Danforth invented the Danforth Anchor in the 1940s for use aboard landing craft. It uses a stock at the crown to which two large flat triangular flukes are attached. The stock is hinged so the flukes can orient toward the bottom (and on some designs may be adjusted for an optimal angle depending on the bottom type). Tripping palms at the crown act to tip the flukes into the seabed. The design is a burying variety, and once well set can develop high resistance. Its lightweight and compact flat design make it easy to retrieve and relatively easy to store; some anchor rollers and hawsepipes can accommodate a fluke-style anchor.
A Danforth does not usually penetrate or hold in gravel or weeds. In boulders and coral it may hold by acting as a hook. If there is much current, or if the vessel is moving while dropping the anchor, it may "kite" or "skate" over the bottom due to the large fluke area acting as a sail or wing.
The FOB HP anchor designed in Brittany in the 1970s is a Danforth variant designed to give increased holding through its use of rounded flukes setting at a 30° angle.
The Fortress is an American aluminum alloy Danforth variant that can be disassembled for storage and it features an adjustable 32° and 45° shank/fluke angle to improve holding capability in common sea bottoms such as hard sand and soft mud. This anchor performed well in a 1989 US Naval Sea Systems Command (NAVSEA) test and in an August 2014 holding power test that was conducted in the soft mud bottoms of the Chesapeake Bay.
Bruce or claw anchor
This claw-shaped anchor was designed by Peter Bruce from Scotland in the 1970s. Bruce gained his early reputation from the production of large-scale commercial anchors for ships and fixed installations such as oil rigs. It was later scaled down for small boats, and copies of this popular design abound. The Bruce and its copies, known generically as "claw type anchors", have been adopted on smaller boats (partly because they stow easily on a bow roller) but they are most effective in larger sizes. Claw anchors are quite popular on charter fleets as they have a high chance to set on the first try in many bottoms. They have the reputation of not breaking out with tide or wind changes, instead slowly turning in the bottom to align with the force.
Bruce anchors can have difficulty penetrating weedy bottoms and grass. They offer a fairly low holding-power-to-weight ratio and generally have to be oversized to compete with newer types.
Scoop type anchors
Three time circumnavigator German Rolf Kaczirek invented the Bügel Anker in the 1980s. Kaczirek wanted an anchor that was self-righting without necessitating a ballasted tip. Instead, he added a roll bar and switched out the plough share for a flat blade design. As none of the innovations of this anchor were patented, copies of it abound.
Alain Poiraud of France introduced the scoop type anchor in 1996. Similar in design to the Bügel anchor, Poiraud's design features a concave fluke shaped like the blade of a shovel, with a shank attached parallel to the fluke, and the load applied toward the digging end. It is designed to dig into the bottom like a shovel, and dig deeper as more pressure is applied. The common challenge with all the scoop type anchors is that they set so well, they can be difficult to weigh.
Bügelanker, or Wasi: This German-designed bow anchor has a sharp tip for penetrating weed, and features a roll-bar that allows the correct setting attitude to be achieved without the need for extra weight to be inserted into the tip.
Spade: This is a French design that has proven successful since 1996. It features a demountable shank (hollow in some instances) and the choice of galvanized steel, stainless steel, or aluminium construction, which means a lighter and more easily stowable anchor. The geometry also makes this anchor self-stowing on a single roller. The Spade anchor is the anchor of choice for Rubicon 3, one of Europe's largest adventure sailing companies.
Rocna: This New Zealand spade design, available in galvanised or stainless steel, has been produced since 2004. It has a roll-bar (similar to that of the Bügel), a large spade-like fluke area, and a sharp toe for penetrating weed and grass. The Rocna sets quickly and holds well.
Mantus: This is claimed to be a fast setting anchor with high holding power. It is designed as an all round anchor capable of setting even in challenging bottoms such as hard sand/clay bottoms and grass. The shank is made out of a high tensile steel capable of withstanding high loads. It is similar in design to the Rocna but has a larger and wider roll-bar that reduces the risk of fouling and increases the angle of the fluke that results in improved penetration in some bottoms.
Ultra: This is an innovative spade design that dispenses with a roll-bar. Made primarily of stainless steel, its main arm is hollow, while the fluke tip has lead within it. It is similar in appearance to the Spade anchor.
Vulcan: A recent sibling to the Rocna, this anchor performs similarly but does not have a roll-bar. Instead the Vulcan has patented design features such as the "V-bulb" and the "Roll Palm" that allow it to dig in deeply. The Vulcan was designed primarily for sailors who had difficulties accommodating the roll-bar Rocna on their bow. Peter Smith (originator of the Rocna) designed it specifically for larger powerboats. Both Vulcans and Rocnas are available in galvanised steel, or in stainless steel. The Vulcan is similar in appearance to the Spade anchor.
Knox Anchor: This is produced in Scotland and was invented by Professor John Knox. It has a divided concave large area fluke arrangement and a shank in high tensile steel. A roll bar similar to the Rocna gives fast setting and a holding power of about 40 times anchor weight.
Other temporary anchors
Mud weight: Consists of a blunt heavy weight, usually cast iron or cast lead, that sinks into the mud and resists lateral movement. It is suitable only for soft silt bottoms and in mild conditions. Sizes range between 5 and 20 kg for small craft. Various designs exist and many are home produced from lead or improvised with heavy objects. This is a commonly used method on the Norfolk Broads in England.
Bulwagga: This is a unique design featuring three flukes instead of the usual two. It has performed well in tests by independent sources such as American boating magazine Practical Sailor.
Permanent anchors
These are used where the vessel is permanently or semi-permanently sited, for example in the case of lightvessels or channel marker buoys. The anchor needs to hold the vessel in all weathers, including the most severe storm, but needs to be lifted only occasionally, at most – for example, only if the vessel is to be towed into port for maintenance. An alternative to using an anchor under these circumstances, especially if the anchor need never be lifted at all, may be to use a pile that is driven into the seabed.
Permanent anchors come in a wide range of types and have no standard form. A slab of rock with an iron staple in it to attach a chain to would serve the purpose, as would any dense object of appropriate weight (for instance, an engine block). Modern moorings may be anchored by augers, which look and act like oversized screws drilled into the seabed, or by barbed metal beams pounded in (or even driven in with explosives) like pilings, or by a variety of other non-mass means of getting a grip on the bottom. One method of building a mooring is to use three or more conventional anchors laid out with short lengths of chain attached to a swivel, so no matter which direction the vessel moves, one or more anchors are aligned to resist the force.
Mushroom
The mushroom anchor is suitable where the seabed is composed of silt or fine sand. It was invented by Robert Stevenson, for use by an 82-ton converted fishing boat, Pharos, which was used as a lightvessel between 1807 and 1810 near to Bell Rock whilst the lighthouse was being constructed. It was equipped with a 1.5-ton example.
It is shaped like an inverted mushroom, the head becoming buried in the silt. A counterweight is often provided at the other end of the shank to lay it down before it becomes buried.
A mushroom anchor normally sinks in the silt to the point where it has displaced its own weight in bottom material, thus greatly increasing its holding power. These anchors are suitable only for a silt or mud bottom, since they rely upon suction and cohesion of the bottom material, which rocky or coarse sand bottoms lack. The holding power of this anchor is at best about twice its weight until it becomes buried, when it can be as much as ten times its weight. They are available in sizes from about 5 kg up to several tons.
Deadweight
A deadweight is an anchor that relies solely on being a heavy weight. It is usually just a large block of concrete or stone at the end of the chain. Its holding power is defined by its weight underwater (i.e., taking its buoyancy into account) regardless of the type of seabed, although suction can increase this if it becomes buried. Consequently, deadweight anchors are used where mushroom anchors are unsuitable, for example in rock, gravel or coarse sand. An advantage of a deadweight anchor over a mushroom is that if it does drag, it continues to provide its original holding force. The disadvantage of using deadweight anchors in conditions where a mushroom anchor could be used is that it needs to be around ten times the weight of the equivalent mushroom anchor.
Auger
Auger anchors can be used to anchor permanent moorings, floating docks, fish farms, etc. These anchors, which have one or more slightly pitched self-drilling threads, must be screwed into the seabed with the use of a tool, so require access to the bottom, either at low tide or by use of a diver. Hence they can be difficult to install in deep water without special equipment.
Weight for weight, augers have a higher holding power than other permanent designs, and so can be cheap and relatively easily installed, although they are difficult to set in extremely soft mud.
High-holding types
There is a need in the oil-and-gas industry to resist large anchoring forces when laying pipelines and for drilling vessels. These anchors are installed and removed using a support tug and pennant/pendant wire. Some examples are the Stevin range supplied by Vrijhof Ankers. Large plate anchors such as the Stevmanta are used for permanent moorings.
Anchoring gear
The elements of anchoring gear include the anchor, the cable (also called a rode), the method of attaching the two together, the method of attaching the cable to the ship, charts, and a method of learning the depth of the water.
Vessels may carry a number of anchors: bower anchors are the main anchors used by a vessel and normally carried at the bow of the vessel. A kedge anchor is a light anchor used for warping the vessel, also known as kedging, or more commonly on yachts for mooring quickly or in benign conditions. A stream anchor, which is usually heavier than a kedge anchor, can be used for kedging or warping in addition to temporary mooring and restraining stern movement in tidal conditions or in waters where vessel movement needs to be restricted, such as rivers and channels.
Charts are vital to good anchoring. Knowing the location of potential dangers, as well as being useful in estimating the effects of weather and tide in the anchorage, is essential in choosing a good place to drop the hook. One can get by without referring to charts, but they are an important tool and a part of good anchoring gear, and a skilled mariner would not choose to anchor without them.
Anchor rode
The anchor rode (or "cable" or "warp") that connects the anchor to the vessel is usually made up of chain, rope, or a combination of those. Large ships use only chain rode. Smaller craft might use a rope/chain combination or an all chain rode. All rodes should have some chain; chain is heavy but it resists abrasion from coral, sharp rocks, or shellfish beds, whereas a rope warp is susceptible to abrasion and can fail in a short time when stretched against an abrasive surface. The weight of the chain also helps keep the direction of pull on the anchor closer to horizontal, which improves holding, and absorbs part of snubbing loads. Where weight is not an issue, a heavier chain provides better holding by forming a catenary curve through the water and resting as much of its length on the bottom as would not be lifted by tension of the mooring load. Any changes to the tension are accommodated by additional chain being lifted or settling on the bottom, and this absorbs shock loads until the chain is straight, at which point the full load is taken by the anchor. Additional dissipation of shock loads can be achieved by fitting a snubber between the chain and a bollard or cleat on deck. This also reduces shock loads on the deck fittings, and the vessel usually lies more comfortably and quietly.
Being strong and elastic, nylon rope is the most suitable as an anchor rode. Polyester (terylene) is stronger but less elastic than nylon. Both materials sink, so they avoid fouling other craft in crowded anchorages and do not absorb much water. Neither breaks down quickly in sunlight. Elasticity helps absorb shock loading, but causes faster abrasive wear when the rope stretches over an abrasive surface, like a coral bottom or a poorly designed chock. Polypropylene ("polyprop") is not suited to rodes because it floats and is much weaker than nylon, being barely stronger than natural fibres. Some grades of polypropylene break down in sunlight and become hard, weak, and unpleasant to handle. Natural fibres such as manila or hemp are still used in developing nations but absorb a lot of water, are relatively weak, and rot, although they do give good handling grip and are often relatively cheap. Ropes that have little or no elasticity are not suitable as anchor rodes. Elasticity is partly a function of the fibre material and partly of the rope structure.
All anchors should have chain at least equal to the boat's length. Some skippers prefer an all-chain warp for greater security on coral or sharp-edged rock bottoms. The warp should be shackled to the chain through a steel eye or spliced to the chain using a chain splice. The shackle pin should be securely wired or moused. Either galvanized or stainless steel is suitable for eyes and shackles, galvanised steel being the stronger of the two. Some skippers prefer to add a swivel to the rode. There is a school of thought that says these should not be connected to the anchor itself, but should be somewhere in the chain. However, most skippers connect the swivel directly to the anchor.
Scope
Scope is the ratio of length of the rode to the depth of the water measured from the highest point (usually the anchor roller or bow chock) to the seabed, making allowance for the highest expected tide. When making this ratio large enough, one can ensure that the pull on the anchor is as horizontal as possible. This will make it unlikely for the anchor to break out of the bottom and drag, if it was properly embedded in the seabed to begin with. When deploying chain, a large enough scope leads to a load that is entirely horizontal, whilst an anchor rode made only of rope will never achieve a strictly horizontal pull.
In moderate conditions, the ratio of rode to water depth should be 4:1 – where there is sufficient swing-room, a greater scope is always better. In rougher conditions it should be up to twice this, with the extra length giving more stretch and a smaller angle to the bottom to resist the anchor breaking out. For example, if the combined height from the seabed to the anchor roller is 9 meters (~30 feet), the amount of rode to let out in moderate conditions is 36 meters (120 feet). (For this reason, it is important to have a reliable and accurate method of measuring the depth of water.)
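The arithmetic in the example above can be captured in a few lines. A minimal Python sketch; the function name and the 8:1 rough-weather figure are illustrative, the latter simply following the "up to twice this" guidance:

```python
def rode_to_pay_out(height_to_seabed_m, scope):
    """Rode length for a given scope; height is measured from the bow roller
    down to the seabed at the highest expected tide."""
    return scope * height_to_seabed_m

print(rode_to_pay_out(9, 4))   # 36 m, the moderate-conditions example above
print(rode_to_pay_out(9, 8))   # 72 m for rough conditions (twice the scope)
```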
When using a rope rode, there is a simple way to estimate the scope: while lying back hard on the anchor, the ratio of the length of rode above the water to the height of the bow above the water is the same as, or slightly less than, the scope ratio. The basis for this is simple geometry (the intercept theorem): the ratio between two sides of a triangle stays the same regardless of the size of the triangle as long as the angles do not change.
Generally, the rode should be between 5 and 10 times the depth to the seabed, giving a scope of 5:1 or 10:1; the larger the number, the shallower the angle is between the cable and the seafloor, and the less upwards force is acting on the anchor. A 10:1 scope gives the greatest holding power, but also allows for much more drifting about due to the longer amount of cable paid out. Anchoring with sufficient scope and/or heavy chain rode brings the direction of strain close to parallel with the seabed. This is particularly important for light, modern anchors designed to bury in the bottom, where scopes of 5:1 to 7:1 are common, whereas heavy anchors and moorings can use a scope of 3:1, or less. Some modern anchors, such as the Ultra, hold well with a scope of 3:1; but, unless the anchorage is crowded, a longer scope always reduces shock stresses.
A major disadvantage of the concept of scope is that it does not take into account the fact that a chain is forming a catenary when hanging between two points (i.e., bow roller and the point where the chain hits the seabed), and thus is a non-linear curve (in fact, a cosh() function), whereas scope is a linear function. As a consequence, in deep water the scope needed will be less, whilst in very shallow water the scope must be chosen much larger to achieve the same pulling angle at the anchor shank. For this reason, the British Admiralty does not use a linear scope formula, but a square root formula instead.
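To illustrate why a linear scope rule over-serves in deep water and under-serves in shallow water, the textbook catenary result for an all-chain rode can be used: with horizontal load H and chain weight w per metre in water, the minimum chain length for the pull at the anchor to be horizontal is s = sqrt(h(h + 2H/w)), where h is the height from seabed to bow roller. The sketch below is a generic illustration of that relation, not the Admiralty formula mentioned above, and the load and chain-weight figures are assumed values.

```python
import math

def min_chain_length(h_m, horizontal_load_n, chain_weight_n_per_m):
    """Minimum all-chain rode length for a horizontal pull at the anchor,
    from the catenary relation s = sqrt(h*(h + 2*a)) with a = H/w."""
    a = horizontal_load_n / chain_weight_n_per_m
    return math.sqrt(h_m * (h_m + 2 * a))

# Illustrative values: 2,000 N wind load, 20 N/m chain weight in water (assumptions).
for h in (2.5, 10.0):   # shallow vs. deeper anchorage, seabed to bow roller, metres
    s = min_chain_length(h, 2000, 20)
    print(h, round(s, 1), round(s / h, 1))   # required length and the equivalent "scope"
# Shallow water needs a much larger scope (about 9:1) than deeper water (about 4.6:1)
# for the same horizontal pull, which is the point made in the paragraph above.
```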
A couple of online calculators exist to work out the amount of chain and rope needed to achieve a (possibly nearly) horizontal pull at the anchor shank, and the associated anchor load.
As symbol
An anchor frequently appears on the flags and coats of arms of institutions involved with the sea, as well as of port cities and seacoast regions and provinces in various countries. There also exists in heraldry the "Anchored Cross", or Mariner's Cross, a stylized cross in the shape of an anchor. The symbol can be used to signify 'fresh start' or 'hope'.
The Mariner's Cross is also referred to as St. Clement's Cross, in reference to the way this saint was killed (being tied to an anchor and thrown from a boat into the Black Sea in 102). Anchored crosses are occasionally a feature of coats of arms in which context they are referred to by the heraldic terms anchry or ancre.
The Unicode anchor character (in the Miscellaneous Symbols block) is U+2693 (⚓).
See also
"Anchors Aweigh", United States Navy marching song
Anchorage (maritime)
References
Bibliography
Blackwell, Alex & Daria; Happy Hooking – the Art of Anchoring, 2008, 2011, 2019, White Seahorse
Edwards, Fred; Sailing as a Second Language: An illustrated dictionary, 1988, Highmark Publishing
Hinz, Earl R.; The Complete Book of Anchoring and Mooring, Rev. 2d ed., 1986, 1994, 2001, Cornell Maritime Press
Hiscock, Eric C.; Cruising Under Sail, second edition, 1965, Oxford University Press
Pardey, Lin and Larry; The Capable Cruiser, 1995, Pardey Books/Paradise Cay Publications
Rousmaniere, John; The Annapolis Book of Seamanship, 1983, 1989, Simon and Schuster
Smith, Everrett; Cruising World's Guide to Seamanship: Hold me tight, 1992, New York Times Sports/Leisure Magazines
Further reading
William N. Brady (1864). The Kedge-anchor; Or, Young Sailors' Assistant.
First published as The Naval Apprentice's Kedge Anchor (New York: Taylor and Clement, 1841); later editions as The Kedge-anchor: 3rd ed., New York, 1848; 6th ed., New York, 1852; 9th ed., New York, 1857.
External links
Anchor Tests: Soft Sand Over Hard Sand—Practical-Sailor
The Big Anchor Project
Anchor comparison
Heraldic charges
Nautical terminology
Sailboat components
Sailing ship components
Ship anchors
Watercraft components
Weights | Anchor | [
"Physics"
] | 8,492 | [
"Weights",
"Physical objects",
"Matter"
] |
1,365 | https://en.wikipedia.org/wiki/Ammonia | Ammonia is an inorganic chemical compound of nitrogen and hydrogen with the formula NH3. A stable binary hydride and the simplest pnictogen hydride, ammonia is a colourless gas with a distinctive pungent smell. Biologically, it is a common nitrogenous waste, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to fertilisers. Around 70% of ammonia produced industrially is used to make fertilisers in various forms and composition, such as urea and diammonium phosphate. Ammonia in pure form is also applied directly into the soil.
Ammonia, either directly or indirectly, is also a building block for the synthesis of many chemicals.
Ammonia occurs in nature and has been detected in the interstellar medium. In many countries, it is classified as an extremely hazardous substance.
Ammonia is produced biologically in a process called nitrogen fixation, but even more is generated industrially by the Haber process. The process helped revolutionize agriculture by providing cheap fertilizers. The global industrial production of ammonia in 2021 was 235 million tonnes. Industrial ammonia is transported by road in tankers, by rail in tank wagons, by sea in gas carriers, or in cylinders.
Ammonia boils at −33.3 °C at a pressure of one atmosphere, but the liquid can often be handled in the laboratory without external cooling. Household ammonia or ammonium hydroxide is a solution of ammonia in water.
Etymology
Pliny, in Book XXXI of his Natural History, refers to a salt named hammoniacum, so called because of the proximity of its source to the Temple of Jupiter Amun (Greek Ἄμμων Ammon) in the Roman province of Cyrenaica. However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name.
Natural occurrence (abiological)
Traces of ammonia/ammonium are found in rainwater. Ammonium chloride (sal ammoniac) and ammonium sulfate are found in volcanic districts. Crystals of ammonium bicarbonate have been found in Patagonia guano.
Ammonia is found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places: on smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can have a melting point far below that of pure water if the ammonia concentration is high enough, and thus allow such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called ammoniacal.
Properties
Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air. It is easily liquefied due to the strong hydrogen bonding between molecules. Gaseous ammonia turns to a colourless liquid, which boils at −33.3 °C, and freezes to colourless crystals at −77.7 °C. Little data is available at very high temperatures and pressures, but the liquid-vapor critical point occurs at 405 K and 11.35 MPa.
Solid
The crystal symmetry is cubic, Pearson symbol cP16, space group P213 No.198, lattice constant 0.5125 nm.
Liquid
Liquid ammonia possesses strong ionising powers, reflecting its high dielectric constant (ε of about 22). Liquid ammonia has a very high standard enthalpy change of vapourization (23.5 kJ/mol; for comparison, water's is 40.65 kJ/mol, methane 8.19 kJ/mol and phosphine 14.6 kJ/mol) and can be transported in pressurized or refrigerated vessels; however, at standard temperature and pressure liquid anhydrous ammonia will vaporize.
Solvent properties
Ammonia readily dissolves in water. In an aqueous solution, it can be expelled by boiling. The aqueous solution of ammonia is basic, and may be described as aqueous ammonia or ammonium hydroxide. The maximum concentration of ammonia in water (a saturated solution) has a specific gravity of 0.880 and is often known as '.880 ammonia'.
Liquid ammonia is a widely studied nonaqueous ionising solvent. Its most conspicuous property is its ability to dissolve alkali metals to form highly coloured, electrically conductive solutions containing solvated electrons. Apart from these remarkable solutions, much of the chemistry in liquid ammonia can be classified by analogy with related reactions in aqueous solutions. Comparison of the physical properties of NH3 with those of water shows NH3 has the lower melting point, boiling point, density, viscosity, dielectric constant and electrical conductivity. These differences are attributed at least in part to the weaker hydrogen bonding in NH3. The ionic self-dissociation constant of liquid NH3 at −50 °C is about 10−33.
Liquid ammonia is an ionising solvent, although less so than water, and dissolves a range of ionic compounds, including many nitrates, nitrites, cyanides, thiocyanates, metal cyclopentadienyl complexes and metal bis(trimethylsilyl)amides. Most ammonium salts are soluble and act as acids in liquid ammonia solutions. The solubility of halide salts increases from fluoride to iodide. A saturated solution of ammonium nitrate (Divers' solution, named after Edward Divers) contains 0.83 mol solute per mole of ammonia and has a vapour pressure of less than 1 bar. However, few oxyanion salts with other cations dissolve.
Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu and Yb (also Mg using an electrolytic process). At low concentrations (<0.06 mol/L), deep blue solutions are formed: these contain metal cations and solvated electrons, free electrons that are surrounded by a cage of ammonia molecules.
These solutions are strong reducing agents. At higher concentrations, the solutions are metallic in appearance and in electrical conductivity. At low temperatures, the two types of solution can coexist as immiscible phases.
Redox properties of liquid ammonia
The range of thermodynamic stability of liquid ammonia solutions is very narrow, as the standard potential for oxidation to dinitrogen, E°, is only +0.04 V. In practice, both oxidation to dinitrogen and reduction to dihydrogen are slow. This is particularly true of reducing solutions: the solutions of the alkali metals mentioned above are stable for several days, slowly decomposing to the metal amide and dihydrogen. Most studies involving liquid ammonia solutions are done in reducing conditions; although oxidation of liquid ammonia is usually slow, there is still a risk of explosion, particularly if transition metal ions are present as possible catalysts.
Structure
The ammonia molecule has a trigonal pyramidal shape, as predicted by the valence shell electron pair repulsion theory (VSEPR theory), with an experimentally determined bond angle of 106.7°. The central nitrogen atom has five outer electrons with an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs; therefore, the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.7°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (pH = 7), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of ammonium. The ammonium ion has the shape of a regular tetrahedron and is isoelectronic with methane.
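The two figures just quoted (pH 11.6 for a 1.0 M solution and 99.4% protonation at neutrality) can be reproduced from standard equilibrium constants. A short Python check; the Kb of 1.8 × 10−5 and the ammonium pKa of 9.25 are assumed textbook values at 25 °C, not figures taken from this article:

```python
import math

Kb = 1.8e-5      # base dissociation constant of aqueous ammonia at 25 °C (assumed literature value)
pKa = 9.25       # pKa of the ammonium ion (assumed literature value)

# pH of a 1.0 M ammonia solution: solve Kb = x^2 / (c - x) for x = [OH-]
c = 1.0
oh = (-Kb + math.sqrt(Kb**2 + 4 * Kb * c)) / 2
print(round(14 + math.log10(oh), 1))            # 11.6, as quoted above

# Fraction of ammonia protonated when the solution is brought to pH 7
frac_protonated = 1 / (1 + 10 ** (7.0 - pKa))
print(round(100 * frac_protonated, 1))          # 99.4 (%)
```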
The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed and was used in the first maser.
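As a quick consistency check on the figures just quoted, the inversion resonance frequency and the stated microwave wavelength are related by λ = c/f. A one-line sketch, assuming only the speed of light:

```python
c = 2.998e8            # speed of light, m/s
f = 23.79e9            # ammonia inversion resonance frequency, Hz (quoted above)
print(round(c / f * 100, 3))   # ≈ 1.26 cm, matching the wavelength given above
```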
Amphotericity
One of the most characteristic properties of ammonia is its basicity. Ammonia is considered to be a weak base. It combines with acids to form ammonium salts; thus, with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia gas will not combine with perfectly dry hydrogen chloride gas; moisture is necessary to bring about the reaction.
As a demonstration experiment under air with ambient moisture, opened bottles of concentrated ammonia and hydrochloric acid solutions produce a cloud of ammonium chloride, which seems to appear 'out of nothing' as the salt aerosol forms where the two diffusing clouds of reagents meet between the two bottles.
The salts produced by the action of ammonia on acids are known as the ammonium salts and all contain the ammonium ion (NH4+).
Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of formation of amides (which contain the NH2− ion). For example, lithium dissolves in liquid ammonia to give a blue solution (solvated electron) of lithium amide:
2 Li + 2 NH3 → 2 LiNH2 + H2
Self-dissociation
Like water, liquid ammonia undergoes molecular autoionisation to form its acid and base conjugates:
2 NH3 ⇌ NH4+ + NH2−
Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations (NH4+) and amide anions (NH2−) to be present in solution. At standard pressure and temperature,
K = [NH4+][NH2−] = 10−30.
Combustion
Ammonia does not burn readily or sustain combustion, except under narrow fuel-to-air mixtures of 15–28% ammonia by volume in air. When mixed with oxygen, it burns with a pale yellowish-green flame. Ignition occurs when chlorine is passed into ammonia, forming nitrogen and hydrogen chloride; if chlorine is present in excess, then the highly explosive nitrogen trichloride () is also formed.
The combustion of ammonia to form nitrogen and water is exothermic:
4 NH3 + 3 O2 → 2 N2 + 6 H2O(g), ΔH°r = −1267.20 kJ (or −316.8 kJ/mol if expressed per mol of NH3)
The standard enthalpy change of combustion, ΔH°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to N2 and O2, which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid:
4 NH3 + 5 O2 → 4 NO + 6 H2O
A subsequent reaction leads to :
The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vapourization, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35% and in 100% relative humidity air is 15.95–26.55%. For studying the kinetics of ammonia combustion, knowledge of a detailed reliable reaction mechanism is required, but this has been challenging to obtain.
Precursor to organonitrogen compounds
Ammonia is a direct or indirect precursor to most manufactured nitrogen-containing compounds. It is the precursor to nitric acid, which is the source for most N-substituted aromatic compounds.
Amines can be formed by the reaction of ammonia with alkyl halides or, more commonly, with alcohols:
Its ring-opening reaction with ethylene oxide gives ethanolamine, diethanolamine, and triethanolamine.
Amides can be prepared by the reaction of ammonia with carboxylic acids and their derivatives. For example, ammonia reacts with formic acid (HCOOH) to yield formamide (HCONH2) when heated. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides by heating to 150–200 °C as long as no thermally sensitive groups are present.
Amino acids, using Strecker amino-acid synthesis
Acrylonitrile, in the Sohio process
Other organonitrogen compounds include alprazolam, ethanolamine, ethyl carbamate and hexamethylenetetramine.
Precursor to inorganic nitrogenous compounds
Nitric acid is generated via the Ostwald process by oxidation of ammonia with air over a platinum catalyst at elevated temperature and ≈9 atm. Nitric oxide and nitrogen dioxide are intermediates in this conversion:
Nitric acid is used for the production of fertilisers, explosives, and many organonitrogen compounds.
The hydrogen in ammonia is susceptible to replacement by a myriad of substituents.
Ammonia gas reacts with metallic sodium to give sodamide, NaNH2.
With chlorine, monochloramine is formed.
Pentavalent ammonia, known as λ5-amine or nitrogen pentahydride, decomposes spontaneously into trivalent ammonia (λ3-amine) and hydrogen gas at normal conditions. The substance was investigated as a possible solid rocket fuel in 1966.
Ammonia is also used to make the following compounds:
Hydrazine, in the Olin Raschig process and the peroxide process
Hydrogen cyanide, in the BMA process and the Andrussow process
Hydroxylamine and ammonium carbonate, in the Raschig process
Urea, in the Bosch–Meiser urea process and in Wöhler synthesis
Ammonium perchlorate, ammonium nitrate, and ammonium bicarbonate
Ammonia is a ligand forming metal ammine complexes. For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. One notable ammine complex is cisplatin (Pt(NH3)2Cl2), a widely used anticancer drug. Ammine complexes of chromium(III) formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted that only two isomers (fac- and mer-) of the complex could be formed, and concluded that the ligands must be arranged around the metal ion at the vertices of an octahedron.
Ammonia forms 1:1 adducts with a variety of Lewis acids, such as phenol. Ammonia is a hard base (HSAB theory) and its E & C parameters are EB = 2.31 and CB = 2.04. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots.
Detection and determination
Ammonia in solution
Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium hydroxide (NaOH) or potassium hydroxide (KOH), the ammonia evolved being absorbed in a known volume of standard sulfuric acid and the excess of acid then determined volumetrically; or the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, (NH4)2PtCl6.
Gaseous ammonia
Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant, and irritation increases with concentration; the permissible exposure limit is 25 ppm, and exposure is lethal above 500 ppm by volume. Higher concentrations are hardly detected by conventional detectors; the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% in volume.
In a laboratory setting, gaseous ammonia can be detected by using concentrated hydrochloric acid or gaseous hydrogen chloride. A dense white fume (which is ammonium chloride vapor) arises from the reaction between ammonia and HCl(g).
Ammoniacal nitrogen (NH3–N)
Ammoniacal nitrogen (NH3–N) is a measure commonly used for testing the quantity of ammonium ions, derived naturally from ammonia, and returned to ammonia via organic processes, in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligram per litre).
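Because NH3–N reports only the mass of nitrogen, converting a reading to an equivalent mass of ammonia simply rescales by the ratio of molar masses. A small illustrative helper (the molar masses are standard values, not taken from the text):

M_N = 14.007                           # g/mol
M_NH3 = M_N + 3 * 1.008                # 17.031 g/mol

def nh3n_to_nh3(mg_per_litre_as_N):
    """Convert an NH3-N reading (mg of N per litre) to mg of NH3 per litre."""
    return mg_per_litre_as_N * M_NH3 / M_N

print(nh3n_to_nh3(1.0))                # ~1.22 mg/L of NH3 per 1.0 mg/L NH3-N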
History
The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the 'Ammonians' (now the Siwa oasis in northwestern Egypt, where salt lakes still exist). The Greek geographer Strabo also mentioned the salt from this region. However, the ancient authors Dioscorides, Apicius, Arrian, Synesius, and Aëtius of Amida described this salt as forming clear crystals that could be used for cooking and that were essentially rock salt. Hammoniacus sal appears in the writings of Pliny, although it is not known whether the term is equivalent to the more modern sal ammoniac (ammonium chloride).
The fermentation of urine by bacteria produces a solution of ammonia; hence fermented urine was used in Classical Antiquity to wash cloth and clothing, to remove hair from hides in preparation for tanning, to serve as a mordant in dyeing cloth, and to remove rust from iron. It was also used by ancient dentists to wash teeth.
In the form of sal ammoniac (نشادر, nushadir), ammonia was important to the Muslim alchemists. It was mentioned in the Book of Stones, likely written in the 9th century and attributed to Jābir ibn Hayyān. It was also important to the European alchemists of the 13th century, being mentioned by Albertus Magnus. It was also used by dyers in the Middle Ages in the form of fermented urine to alter the colour of vegetable dyes. In the 15th century, Basilius Valentinus showed that ammonia could be obtained by the action of alkalis on sal ammoniac. At a later period, when sal ammoniac was obtained by distilling the hooves and horns of oxen and neutralizing the resulting carbonate with hydrochloric acid, the name 'spirit of hartshorn' was applied to ammonia.
Gaseous ammonia was first isolated by Joseph Black in 1756 by reacting sal ammoniac (ammonium chloride) with calcined magnesia (magnesium oxide). It was isolated again by Peter Woulfe in 1767, by Carl Wilhelm Scheele in 1770 and by Joseph Priestley in 1773 and was termed by him 'alkaline air'. Eleven years later in 1785, Claude Louis Berthollet ascertained its composition.
The production of ammonia from nitrogen in the air (and hydrogen) was invented by Fritz Haber and Robert Le Rossignol. The patent application was filed in 1909 (USPTO No. 1,202,995) and the patent was granted in 1916. Later, Carl Bosch developed the industrial method for ammonia production (Haber–Bosch process). It was first used on an industrial scale in Germany during World War I, following the Allied blockade that cut off the supply of nitrates from Chile. The ammonia was used to produce explosives to sustain the war effort. The Nobel Prize in Chemistry 1918 was awarded to Fritz Haber "for the synthesis of ammonia from its elements".
Before the availability of natural gas, hydrogen as a precursor to ammonia production was produced via the electrolysis of water or using the chloralkali process.
With the advent of the steel industry in the 20th century, ammonia became a byproduct of the production of coking coal.
Applications
Fertiliser
In the US, approximately 88% of ammonia was used as fertiliser, either as its salts, solutions or anhydrously. When applied to soil, it helps provide increased yields of crops such as maize and wheat. 30% of agricultural nitrogen applied in the US is in the form of anhydrous ammonia, and worldwide, 110 million tonnes are applied each year.
Solutions of ammonia ranging from 16% to 25% are used in the fermentation industry as a source of nitrogen for microorganisms and to adjust pH during fermentation.
Refrigeration–R717
Because of ammonia's vapourization properties, it is a useful refrigerant. It was commonly used before the popularisation of chlorofluorocarbons (Freons). Anhydrous ammonia is widely used in industrial refrigeration applications and hockey rinks because of its high energy efficiency and low cost. It suffers from the disadvantages of toxicity and of requiring corrosion-resistant components, which restrict its domestic and small-scale use. Along with its use in modern vapour-compression refrigeration it is used in a mixture along with hydrogen and water in absorption refrigerators. The Kalina cycle, which is of growing importance to geothermal power plants, depends on the wide boiling range of the ammonia–water mixture.
Ammonia coolant is also used in the radiators aboard the International Space Station in loops that are used to regulate the internal temperature and enable temperature-dependent experiments. The ammonia is under sufficient pressure to remain liquid throughout the process. Single-phase ammonia cooling systems also serve the power electronics in each pair of solar arrays.
The potential importance of ammonia as a refrigerant has increased with the discovery that vented CFCs and HFCs are potent and stable greenhouse gases.
Antimicrobial agent for food products
As early as 1895, it was known that ammonia was 'strongly antiseptic ... it requires 1.4 grams per litre to preserve beef tea (broth).' In one study, anhydrous ammonia destroyed 99.999% of zoonotic bacteria in three types of animal feed, but not silage. Anhydrous ammonia is currently used commercially to reduce or eliminate microbial contamination of beef.
Lean finely textured beef (popularly known as 'pink slime') in the beef industry is made from fatty beef trimmings (c. 50–70% fat) by removing the fat using heat and centrifugation, then treating it with ammonia to kill E. coli. The process was deemed effective and safe by the US Department of Agriculture based on a study that found that the treatment reduces E. coli to undetectable levels. There have been safety concerns about the process as well as consumer complaints about the taste and smell of ammonia-treated beef.
Fuel
Ammonia has been used as fuel, and is a proposed alternative to fossil fuels and hydrogen. Being liquid at ambient temperature under its own vapour pressure and having high volumetric and gravimetric energy density, ammonia is considered a suitable carrier for hydrogen, and may be cheaper than direct transport of liquid hydrogen.
Compared to hydrogen, ammonia is easier to store. Compared to hydrogen as a fuel, ammonia is much more energy efficient, and could be produced, stored and delivered at a much lower cost than hydrogen, which must be kept compressed or as a cryogenic liquid. The raw energy density of liquid ammonia is 11.5 MJ/L, which is about a third that of diesel.
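The 'about a third' comparison can be made explicit. The sketch below takes the 11.5 MJ/L figure from the text and an assumed typical value of roughly 36 MJ/L for diesel, which is not given in the text:

ammonia_MJ_per_L = 11.5                # from the text
diesel_MJ_per_L = 36.0                 # assumed typical value, for comparison only
print(f"{ammonia_MJ_per_L / diesel_MJ_per_L:.0%} of diesel's volumetric energy density")   # ~32%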
Ammonia can be converted back to hydrogen to be used to power hydrogen fuel cells, or it may be used directly within high-temperature solid oxide direct ammonia fuel cells to provide efficient power sources that do not emit greenhouse gases. Ammonia to hydrogen conversion can be achieved through the sodium amide process or the catalytic decomposition of ammonia using solid catalysts.
Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and the St. Charles Avenue Streetcar line in New Orleans in the 1870s and 1880s, and during World War II ammonia was used to power buses in Belgium.
Ammonia is sometimes proposed as a practical alternative to fossil fuel for internal combustion engines. However, ammonia cannot be easily used in existing Otto cycle engines because of its very narrow flammability range. Despite this, several tests have been run. Its high octane rating of 120 and low flame temperature allow the use of high compression ratios without a penalty of high NOx production. Since ammonia contains no carbon, its combustion cannot produce carbon dioxide, carbon monoxide, hydrocarbons, or soot.
Ammonia production currently creates 1.8% of global CO2 emissions. 'Green ammonia' is ammonia produced by using green hydrogen (hydrogen produced by electrolysis with electricity from renewable energy), whereas 'blue ammonia' is ammonia produced using blue hydrogen (hydrogen produced by steam methane reforming (SMR) where the carbon dioxide has been captured and stored; cf. carbon capture and storage, CCS).
Rocket engines have also been fueled by ammonia. The Reaction Motors XLR99 rocket engine that powered the X-15 hypersonic research aircraft used liquid ammonia. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches the density of the oxidiser, liquid oxygen, which simplified the aircraft's design.
In 2020, Saudi Arabia shipped 40 metric tons of liquid 'blue ammonia' to Japan for use as a fuel. It was produced as a by-product by petrochemical industries, and can be burned without giving off greenhouse gases. Its energy density by volume is nearly double that of liquid hydrogen. If the process of creating it can be scaled up via purely renewable resources, producing green ammonia, it could make a major difference in avoiding climate change. The company ACWA Power and the city of Neom have announced the construction of a green hydrogen and ammonia plant in 2020.
Green ammonia is considered a potential fuel for future container ships. In 2020, the companies DSME and MAN Energy Solutions announced the construction of an ammonia-based ship; DSME plans to commercialize it by 2025. The use of ammonia as a potential alternative fuel for aircraft jet engines is also being explored.
Japan intends to implement a plan to develop ammonia co-firing technology that can increase the use of ammonia in power generation, as part of efforts to assist domestic and other Asian utilities to accelerate their transition to carbon neutrality.
In October 2021, the first International Conference on Fuel Ammonia (ICFA2021) was held.
In June 2022, IHI Corporation succeeded in reducing greenhouse gases by over 99% during combustion of liquid ammonia in a 2,000-kilowatt-class gas turbine, achieving truly CO2-free power generation.
In July 2022, the Quad nations of Japan, the U.S., Australia and India agreed to promote technological development for clean-burning hydrogen and ammonia as fuels at the security grouping's first energy meeting. When ammonia is burned in air, however, significant amounts of nitrogen oxides (NOx) are produced. Nitrous oxide may also be a problem, as it is a "greenhouse gas that is known to possess up to 300 times the Global Warming Potential (GWP) of carbon dioxide".
The IEA forecasts that ammonia will meet approximately 45% of shipping fuel demands by 2050.
At high temperature and in the presence of a suitable catalyst ammonia decomposes into its constituent elements. Decomposition of ammonia is a slightly endothermic process requiring 23 kJ/mol (5.5 kcal/mol) of ammonia, and yields hydrogen and nitrogen gas.
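Per the stoichiometry 2 NH3 → N2 + 3 H2, each kilogram of ammonia cracked in this way yields roughly 0.18 kg of hydrogen, at a heat cost set by the 23 kJ/mol figure above. A back-of-the-envelope sketch (molar masses are standard values, not from the text):

M_NH3, M_H2 = 17.031, 2.016            # g/mol
dH_decomp = 23.0                       # kJ per mol NH3, from the text

mol_NH3_per_kg = 1000 / M_NH3
print(f"H2 recovered:  {mol_NH3_per_kg * 1.5 * M_H2 / 1000:.3f} kg per kg NH3")   # ~0.178 kg
print(f"heat required: {mol_NH3_per_kg * dH_decomp / 1000:.2f} MJ per kg NH3")    # ~1.35 MJ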
Other
Remediation of gaseous emissions
Ammonia is used to scrub SO2 from the flue gases produced by the burning of fossil fuels, and the resulting product is converted to ammonium sulfate for use as fertiliser. Ammonia neutralises the nitrogen oxide (NOx) pollutants emitted by diesel engines. This technology, called SCR (selective catalytic reduction), relies on a vanadia-based catalyst.
Ammonia may be used to mitigate gaseous spills of phosgene.
Stimulant
Ammonia, as the vapour released by smelling salts, has found significant use as a respiratory stimulant. Ammonia is commonly used in the illegal manufacture of methamphetamine through a Birch reduction. The Birch method of making methamphetamine is dangerous because the alkali metal and liquid ammonia are both extremely reactive, and the temperature of liquid ammonia makes it susceptible to explosive boiling when reactants are added.
Textile
Liquid ammonia is used for the treatment of cotton materials, giving properties like those obtained by mercerisation with alkalis. In particular, it is used for prewashing of wool.
Lifting gas
At standard temperature and pressure, ammonia is less dense than air and has approximately 45–48% of the lifting power of hydrogen or helium. Ammonia has sometimes been used to fill balloons as a lifting gas. Because of its relatively high boiling point (compared to helium and hydrogen), ammonia could potentially be refrigerated and liquefied aboard an airship to reduce lift and add ballast (and returned to a gas to add lift and reduce ballast).
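The 45–48% figure follows from an ideal-gas estimate: buoyancy per unit volume is proportional to the difference between the molar mass of air and that of the lifting gas. A minimal sketch using standard molar masses (assumed values, not from the text):

M_air, M_NH3, M_H2, M_He = 28.97, 17.03, 2.016, 4.003   # g/mol

lift_NH3 = M_air - M_NH3
print(f"vs hydrogen: {lift_NH3 / (M_air - M_H2):.0%}")   # ~44%
print(f"vs helium:   {lift_NH3 / (M_air - M_He):.0%}")   # ~48%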
Fuming
Ammonia has been used to darken quartersawn white oak in Arts & Crafts and Mission-style furniture. Ammonia fumes react with the natural tannins in the wood and cause it to change colour.
Safety
The US Occupational Safety and Health Administration (OSHA) has set a 15-minute exposure limit for gaseous ammonia of 35 ppm by volume in the environmental air and an 8-hour exposure limit of 25 ppm by volume. The National Institute for Occupational Safety and Health (NIOSH) recently reduced the IDLH (Immediately Dangerous to Life or Health, the level to which a healthy worker can be exposed for 30 minutes without suffering irreversible health effects) from 500 ppm to 300 ppm based on recent more conservative interpretations of original research in 1943. The 1-hour IDLH limit is still 500 ppm. Other organisations have varying exposure levels. US Navy standards [U.S. Bureau of Ships 1962] set maximum allowable concentrations (MACs) of 25 ppm for continuous exposure (60 days) and 400 ppm for exposures of up to 1 hour.
Ammonia vapour has a sharp, irritating, pungent odor that acts as a warning of potentially dangerous exposure. The average odor threshold is 5 ppm, well below any danger or damage. Exposure to very high concentrations of gaseous ammonia can result in lung damage and death. Ammonia is regulated in the US as a non-flammable gas, but it meets the definition of a material that is toxic by inhalation and requires a hazardous safety permit when transported in quantities greater than .
Liquid ammonia is dangerous because it is hygroscopic and because it can cause caustic burns.
Toxicity
The toxicity of ammonia solutions does not usually cause problems for humans and other mammals, as a specific mechanism exists to prevent its build-up in the bloodstream. Ammonia is converted to carbamoyl phosphate by the enzyme carbamoyl phosphate synthetase, and then enters the urea cycle to be either incorporated into amino acids or excreted in the urine. Fish and amphibians lack this mechanism, as they can usually eliminate ammonia from their bodies by direct excretion. Ammonia even at dilute concentrations is highly toxic to aquatic animals, and for this reason it is classified as "dangerous for the environment". Atmospheric ammonia plays a key role in the formation of fine particulate matter.
Ammonia is a constituent of tobacco smoke.
Coking wastewater
Ammonia is present in coking wastewater streams, as a liquid by-product of the production of coke from coal. In some cases, the ammonia is discharged to the marine environment where it acts as a pollutant. The Whyalla Steelworks in South Australia is one example of a coke-producing facility that discharges ammonia into marine waters.
Aquaculture
Ammonia toxicity is believed to be a cause of otherwise unexplained losses in fish hatcheries. Excess ammonia may accumulate and cause alteration of metabolism or increases in the body pH of the exposed organism. Tolerance varies among fish species. At lower concentrations, around 0.05 mg/L, un-ionised ammonia is harmful to fish species and can result in poor growth and feed conversion rates, reduced fecundity and fertility, and increased stress and susceptibility to bacterial infections and diseases. Exposed to excess ammonia, fish may suffer loss of equilibrium, hyper-excitability, increased respiratory activity and oxygen uptake, and increased heart rate. At concentrations exceeding 2.0 mg/L, ammonia causes gill and tissue damage, extreme lethargy, convulsions, coma, and death. Experiments have shown that the lethal concentration for a variety of fish species ranges from 0.2 to 2.0 mg/L.
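In practice, the un-ionised (toxic) fraction of total ammonia is usually estimated from pH, since it is the un-ionised NH3, not NH4+, that crosses gill membranes readily. The sketch below assumes a pKa of about 9.25 for NH4+ at 25 °C; the real value shifts with temperature and salinity, so treat it as illustrative only:

def unionised_fraction(pH, pKa=9.25):
    """Fraction of total ammonia present as un-ionised NH3 (assumed pKa at 25 C)."""
    return 1 / (1 + 10 ** (pKa - pH))

total_ammonia_mg_L = 1.0                        # hypothetical measurement
for pH in (7.0, 7.5, 8.5):
    nh3 = total_ammonia_mg_L * unionised_fraction(pH)
    print(f"pH {pH}: ~{nh3:.3f} mg/L un-ionised NH3")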
During winter, when reduced feeds are administered to aquaculture stock, ammonia levels can be higher. Lower ambient temperatures reduce the rate of algal photosynthesis so less ammonia is removed by any algae present. Within an aquaculture environment, especially at large scale, there is no fast-acting remedy to elevated ammonia levels. Prevention rather than correction is recommended to reduce harm to farmed fish and in open water systems, the surrounding environment.
Storage information
Similar to propane, anhydrous ammonia boils below room temperature at atmospheric pressure. A storage vessel able to withstand the liquid's vapour pressure is suitable to contain it. Ammonia is used in numerous different industrial applications requiring carbon or stainless steel storage vessels. Ammonia with at least 0.2% by weight water content is not corrosive to carbon steel. Carbon steel storage tanks holding ammonia with 0.2% by weight or more of water could last more than 50 years in service. Experts warn that ammonium compounds should not be allowed to come in contact with bases (unless in an intended and contained reaction), as dangerous quantities of ammonia gas could be released.
Laboratory
The hazards of ammonia solutions depend on the concentration: 'dilute' ammonia solutions are usually 5–10% by weight (< 5.62 mol/L); 'concentrated' solutions are usually prepared at >25% by weight. A 25% (by weight) solution has a density of 0.907 g/cm3, and a solution that has a lower density will be more concentrated. The European Union classification of ammonia solutions is given in the table.
The ammonia vapour from concentrated ammonia solutions is severely irritating to the eyes and the respiratory tract, and experts warn that these solutions be handled only in a fume hood. Saturated ('0.880') solutions can develop a significant pressure inside a closed bottle in warm weather, and experts also warn that the bottle be opened with care. This is not usually a problem for 25% ('0.900') solutions.
Experts warn that ammonia solutions not be mixed with halogens, as toxic and/or explosive products are formed. Experts also warn that prolonged contact of ammonia solutions with silver, mercury or iodide salts can lead to explosive products; such mixtures are often formed in qualitative inorganic analysis and should be lightly acidified but not concentrated (<6% w/v) before disposal once the test is completed.
Laboratory use of anhydrous ammonia (gas or liquid)
Anhydrous ammonia is classified as toxic (T) and dangerous for the environment (N). The gas is flammable (autoignition temperature: 651 °C) and can form explosive mixtures with air (16–25%). The permissible exposure limit (PEL) in the United States is 50 ppm (35 mg/m3), while the IDLH concentration is estimated at 300 ppm. Repeated exposure to ammonia lowers the sensitivity to the smell of the gas: normally the odour is detectable at concentrations of less than 50 ppm, but desensitised individuals may not detect it even at concentrations of 100 ppm. Anhydrous ammonia corrodes copper- and zinc-containing alloys, which makes brass fittings not appropriate for handling the gas. Liquid ammonia can also attack rubber and certain plastics.
Ammonia reacts violently with the halogens. Nitrogen triiodide, a primary high explosive, is formed when ammonia comes in contact with iodine. Ammonia causes the explosive polymerisation of ethylene oxide. It also forms explosive fulminating compounds with compounds of gold, silver, mercury, germanium or tellurium, and with stibine. Violent reactions have also been reported with acetaldehyde, hypochlorite solutions, potassium ferricyanide and peroxides.
Production
Ammonia has one of the highest rates of production of any inorganic chemical. Production is sometimes expressed in terms of 'fixed nitrogen'. Global production was estimated at 160 million tonnes in 2020 (147 million tonnes of fixed nitrogen). China accounted for 26.5% of that, followed by Russia at 11.0%, the United States at 9.5%, and India at 8.3%.
Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal waste products, including camel dung, where it was distilled by the reduction of nitrous acid and nitrites with hydrogen; in addition, it was produced by the distillation of coal, and also by the decomposition of ammonium salts by alkaline hydroxides such as quicklime:
For small scale laboratory synthesis, one can heat urea and calcium hydroxide or sodium hydroxide:
Haber–Bosch
Electrochemical
The electrochemical synthesis of ammonia involves the reductive formation of lithium nitride, which can be protonated to ammonia, given a proton source. The first use of this chemistry was reported in 1930, when lithium solutions in ethanol were used to produce ammonia at pressures of up to 1000 bar, with ethanol acting as the proton source. Beyond simply mediating proton transfer to the nitrogen reduction reaction, ethanol has been found to play a multifaceted role, influencing electrolyte transformations and contributing to the formation of the solid electrolyte interphase, which enhances overall reaction efficiency.
In 1994, Tsuneto et al. used lithium electrodeposition in tetrahydrofuran to synthesize ammonia at more moderate pressures with reasonable Faradaic efficiency. Subsequent studies have further explored the ethanol–tetrahydrofuran system for electrochemical ammonia synthesis.
In 2020, a solvent-agnostic gas diffusion electrode was shown to improve nitrogen transport to the reactive lithium; improved production rates and Faradaic efficiencies of up to 47.5 ± 4% were achieved at ambient temperature and 1 bar pressure.
In 2021, it was demonstrated that ethanol could be replaced with a tetraalkyl phosphonium salt. The study observed ammonia production at 69 ± 1% Faradaic efficiency in experiments under 0.5 bar hydrogen and 19.5 bar nitrogen partial pressure at ambient temperature. Technology based on this electrochemistry is being developed for commercial fertiliser and fuel production.
In 2022, ammonia was produced via the lithium-mediated process in a continuous-flow electrolyzer, also demonstrating hydrogen gas as the proton source. The study synthesized ammonia at 61 ± 1% Faradaic efficiency at a current density of −6 mA/cm2 at 1 bar and room temperature.
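The production rates and Faradaic efficiencies reported in these studies are related by straightforward electrochemical bookkeeping: each NH3 requires three electrons. The sketch below applies that standard conversion to the 2022 flow-cell figures above; treat the printed number as an estimate rather than a reported result:

F = 96485.0                            # C/mol, Faraday constant

def nh3_rate_nmol_s_cm2(j_mA_cm2, faradaic_eff):
    """NH3 production rate per cm2 of electrode from current density and Faradaic efficiency."""
    j = abs(j_mA_cm2) / 1000.0         # A/cm2
    return j * faradaic_eff / (3 * F) * 1e9

print(f"{nh3_rate_nmol_s_cm2(-6.0, 0.61):.1f} nmol NH3 s^-1 cm^-2")   # ~12.6 for -6 mA/cm2 at 61% FE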
Biochemistry and medicine
Ammonia is essential for life. For example, it is required for the formation of amino acids and nucleic acids, fundamental building blocks of life. Ammonia is however quite toxic. Nature thus uses carriers for ammonia. Within a cell, glutamate serves this role. In the bloodstream, glutamine is a source of ammonia.
Ethanolamine, required for cell membranes, is the substrate for ethanolamine ammonia-lyase, which produces ammonia:
Ammonia is both a metabolic waste and a metabolic input throughout the biosphere. It is an important source of nitrogen for living systems. Although atmospheric nitrogen abounds (more than 75%), few living creatures are capable of using atmospheric nitrogen in its diatomic form, N2 gas. Therefore, nitrogen fixation is required for the synthesis of amino acids, which are the building blocks of protein. Some plants rely on ammonia and other nitrogenous wastes incorporated into the soil by decaying matter. Others, such as nitrogen-fixing legumes, benefit from symbiotic relationships with rhizobia bacteria that create ammonia from atmospheric nitrogen.
In humans, inhaling ammonia in high concentrations can be fatal. Exposure to ammonia can cause headaches, edema, impaired memory, seizures and coma as it is neurotoxic in nature.
Biosynthesis
In certain organisms, ammonia is produced from atmospheric nitrogen by enzymes called nitrogenases. The overall process is called nitrogen fixation. Intense effort has been directed toward understanding the mechanism of biological nitrogen fixation. The scientific interest in this problem is motivated by the unusual structure of the active site of the enzyme, which consists of an iron–molybdenum–sulfur cluster (the FeMo cofactor).
Ammonia is also a metabolic product of amino acid deamination catalyzed by enzymes such as glutamate dehydrogenase 1. Ammonia excretion is common in aquatic animals. In humans, it is quickly converted to urea (by the liver), which is much less toxic, particularly less basic. This urea is a major component of the dry weight of urine. Most reptiles, birds, insects, and snails excrete uric acid as their sole nitrogenous waste.
Physiology
Ammonia plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism and is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurological disease common in people with urea cycle defects and organic acidurias.
Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two bicarbonate ions, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion.
Excretion
Ammonium ions are a toxic waste product of metabolism in animals. In fish and aquatic invertebrates, it is excreted directly into the water. In mammals, sharks, and amphibians, it is converted in the urea cycle to urea, which is less toxic and can be stored more efficiently. In birds, reptiles, and terrestrial snails, metabolic ammonium is converted into uric acid, which is solid and can therefore be excreted with minimal water loss.
Extraterrestrial occurrence
Ammonia has been detected in the atmospheres of the giant planets Jupiter, Saturn, Uranus and Neptune, along with other gases such as methane, hydrogen, and helium. The interior of Saturn may include frozen ammonia crystals. It is found on Deimos and Phobos, the two moons of Mars.
Interstellar space
Ammonia was first detected in interstellar space in 1968, based on microwave emissions from the direction of the galactic core. This was the first polyatomic molecule to be so detected. The sensitivity of the molecule to a broad range of excitations and the ease with which it can be observed in a number of regions has made ammonia one of the most important molecules for studies of molecular clouds. The relative intensity of the ammonia lines can be used to measure the temperature of the emitting medium.
The following isotopic species of ammonia have been detected: NH3, 15NH3, NH2D, NHD2, and ND3. The detection of triply deuterated ammonia was considered a surprise, as deuterium is relatively scarce. It is thought that the low-temperature conditions allow this molecule to survive and accumulate.
Since its interstellar discovery, NH3 has proved to be an invaluable spectroscopic tool in the study of the interstellar medium. With a large number of transitions sensitive to a wide range of excitation conditions, NH3 has been widely detected astronomically; its detection has been reported in hundreds of journal articles. Listed below is a sample of journal articles that highlight the range of detectors that have been used to identify ammonia.
The study of interstellar ammonia has been important to a number of areas of research in the last few decades. Some of these are delineated below and primarily involve using ammonia as an interstellar thermometer.
Interstellar formation mechanisms
The interstellar abundance for ammonia has been measured for a variety of environments. The []/[] ratio has been estimated to range from 10−7 in small dark clouds up to 10−5 in the dense core of the Orion molecular cloud complex. Although a total of 18 production routes have been proposed, the principal formation mechanism for interstellar NH3 is the reaction:
The rate constant, k, of this reaction depends on the temperature of the environment, with a value of at 10 K. The rate constant was calculated from the formula . For the primary formation reaction, and . Assuming an abundance of and an electron abundance of 10−7 typical of molecular clouds, the formation will proceed at a rate of in a molecular cloud of total density .
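Astrochemical databases normally express such rate constants in the modified Arrhenius form k(T) = α(T/300)^β · exp(−γ/T). The parameters below are placeholders chosen only to show the shape of the calculation; they are not the values the original text quoted, which did not survive extraction. The electron abundance and total density do follow the figures mentioned in the text:

import math

def rate_constant(T, alpha, beta, gamma=0.0):
    """Modified Arrhenius form commonly used for interstellar reactions (cm^3 s^-1)."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# Hypothetical parameters for a dissociative-recombination-style reaction at 10 K.
k = rate_constant(10.0, alpha=1.0e-6, beta=-0.5)
n_total = 1e5                                  # cm^-3, typical dense-cloud density from the text
n_ion = 1e-7 * n_total                         # assumed fractional abundance of the molecular ion
n_e = 1e-7 * n_total                           # electron abundance of 1e-7 quoted in the text
print(f"k ~ {k:.1e} cm^3 s^-1, rate ~ {k * n_ion * n_e:.1e} cm^-3 s^-1")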
All other proposed formation reactions have rate constants of between two and 13 orders of magnitude smaller, making their contribution to the abundance of ammonia relatively insignificant. As an example of the minor contribution other formation reactions play, the reaction:
has a rate constant of 2.2. Assuming densities of 105 and []/[] ratio of 10−7, this reaction proceeds at a rate of 2.2, more than three orders of magnitude slower than the primary reaction above.
Some of the other possible formation reactions are:
Interstellar destruction mechanisms
There are 113 total proposed reactions leading to the destruction of . Of these, 39 were tabulated in extensive tables of the chemistry among C, N and O compounds. A review of interstellar ammonia cites the following reactions as the principal dissociation mechanisms:
with rate constants of 4.39×10−9 and 2.2×10−9, respectively. These two reactions run at rates of 8.8×10−9 and 4.4×10−13, respectively. These calculations assumed the given rate constants and abundances of []/[] = 10−5, []/[] = 2×10−5, []/[] = 2×10−9, and total densities of n = 105, typical of cold, dense, molecular clouds. Clearly, between these two primary reactions, the first is the dominant destruction reaction, with a rate ≈10,000 times faster than the second, owing to the relatively high abundance of its reaction partner.
Single antenna detections
Radio observations of NH3 from the Effelsberg 100-m Radio Telescope reveal that the ammonia line is separated into two components: a background ridge and an unresolved core. The background corresponds well with the locations of previously detected CO. The 25 m Chilbolton telescope in England detected radio signatures of ammonia in H II regions, H2O masers, H–H objects, and other objects associated with star formation. A comparison of emission line widths indicates that turbulent or systematic velocities do not increase in the central cores of molecular clouds.
Microwave radiation from ammonia was observed in several galactic objects including W3(OH), Orion A, W43, W51, and five sources in the galactic centre. The high detection rate indicates that this is a common molecule in the interstellar medium and that high-density regions are common in the galaxy.
Interferometric studies
VLA observations of NH3 in seven regions with high-velocity gaseous outflows revealed condensations of less than 0.1 pc in L1551, S140, and Cepheus A. Three individual condensations were detected in Cepheus A, one of them with a highly elongated shape. They may play an important role in creating the bipolar outflow in the region.
Extragalactic ammonia was imaged using the VLA in IC 342. The hot gas has temperatures above 70 K, which was inferred from ammonia line ratios and appears to be closely associated with the innermost portions of the nuclear bar seen in CO. Ammonia was also monitored by the VLA toward a sample of four galactic ultracompact HII regions: G9.62+0.19, G10.47+0.03, G29.96-0.02, and G31.41+0.31. Based upon temperature and density diagnostics, it is concluded that in general such clumps are probably the sites of massive star formation in an early evolutionary phase prior to the development of an ultracompact HII region.
Infrared detections
Absorption at 2.97 micrometres due to solid ammonia was recorded from interstellar grains in the Becklin–Neugebauer Object and probably in NGC 2264-IR as well. This detection helped explain the physical shape of previously poorly understood and related ice absorption lines.
A spectrum of the disk of Jupiter was obtained from the Kuiper Airborne Observatory, covering the 100 to 300 cm−1 spectral range. Analysis of the spectrum provides information on global mean properties of ammonia gas and an ammonia ice haze.
A total of 149 dark cloud positions were surveyed for evidence of 'dense cores' by using the (J,K) = (1,1) rotating inversion line of NH3. In general, the cores are not spherically shaped, with aspect ratios ranging from 1.1 to 4.4. It is also found that cores with stars have broader lines than cores without stars.
Ammonia has been detected in the Draco Nebula and in one or possibly two molecular clouds, which are associated with the high-latitude galactic infrared cirrus. The finding is significant because they may represent the birthplaces for the Population I metallicity B-type stars in the galactic halo that could have been born in the galactic disk.
Observations of nearby dark clouds
By balancing collisional excitation and stimulated emission with spontaneous emission, it is possible to construct a relation between excitation temperature and density. Moreover, since the transitional levels of ammonia can be approximated by a 2-level system at low temperatures, this calculation is fairly simple. This premise can be applied to dark clouds, regions suspected of having extremely low temperatures and possible sites for future star formation. Detections of ammonia in dark clouds show very narrow lines, indicative not only of low temperatures, but also of a low level of inner-cloud turbulence. Line ratio calculations provide a measurement of cloud temperature that is independent of previous CO observations. The ammonia observations were consistent with CO measurements of rotation temperatures of ≈10 K. With this, densities can be determined, and have been calculated to range between 104 and 105 cm−3 in dark clouds. Mapping of NH3 gives typical cloud sizes of 0.1 pc and masses near 1 solar mass. These cold, dense cores are the sites of future star formation.
UC HII regions
Ultra-compact HII regions are among the best tracers of high-mass star formation. The dense material surrounding UCHII regions is likely primarily molecular. Since a complete study of massive star formation necessarily involves the cloud from which the star formed, ammonia is an invaluable tool in understanding this surrounding molecular material. Since this molecular material can be spatially resolved, it is possible to constrain the heating/ionising sources, temperatures, masses, and sizes of the regions. Doppler-shifted velocity components allow for the separation of distinct regions of molecular gas that can trace outflows and hot cores originating from forming stars.
Extragalactic detection
Ammonia has been detected in external galaxies, and by simultaneously measuring several lines, it is possible to directly measure the gas temperature in these galaxies. Line ratios imply that gas temperatures are warm (≈50 K), originating from dense clouds with sizes of tens of parsecs. This picture is consistent with the picture within our Milky Way galaxy: hot, dense molecular cores form around newly forming stars embedded in larger clouds of molecular material on the scale of several hundred parsecs (giant molecular clouds; GMCs).
See also
References
Works cited
Further reading
External links
International Chemical Safety Card 0414 (anhydrous ammonia), ilo.org.
International Chemical Safety Card 0215 (aqueous solutions), ilo.org.
Emergency Response to Ammonia Fertiliser Releases (Spills) for the Minnesota Department of Agriculture, ammoniaspills.org
National Institute for Occupational Safety and Health–Ammonia Page, cdc.gov
NIOSH Pocket Guide to Chemical Hazards–Ammonia, cdc.gov
Ammonia, video
Bases (chemistry)
Foul-smelling chemicals
Gaseous signaling molecules
Household chemicals
Industrial gases
Inorganic solvents
Nitrogen cycle
Nitrogen hydrides
Nitrogen(−III) compounds
Refrigerants
Toxicology
Rocket fuels | Ammonia | [
"Chemistry",
"Environmental_science"
] | 11,276 | [
"Toxicology",
"Signal transduction",
"Gaseous signaling molecules",
"Nitrogen cycle",
"Industrial gases",
"Chemical process engineering",
"Bases (chemistry)",
"Metabolism"
] |
1,368 | https://en.wikipedia.org/wiki/Assembly%20language | In computer programming, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported.
The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time.
Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture.
Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, much more complicated tasks than assembling.
In the first decades of computing, it was commonplace for both systems programming and application programming to take place entirely in assembly language. While still irreplaceable for some purposes, the majority of programming is now conducted in higher-level interpreted and compiled languages. In "No Silver Bullet", Fred Brooks summarised the effects of the switch away from assembly language programming: "Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility."
Today, it is typical to use small amounts of assembly language code within larger systems implemented in a higher-level language, for performance reasons or to interact directly with hardware in ways unsupported by the higher-level language. For instance, just under 2% of version 4.9 of the Linux kernel source code is written in assembly; more than 97% is written in C.
Assembly language syntax
Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built-in and some user-defined. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging.
Some are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column-oriented syntax in the 1960s.
Terminology
A macro assembler is an assembler that includes a macroinstruction facility so that (parameterized) assembly language text can be represented by a name, and that name can be used to insert the expanded text into other code.
Open code refers to any assembler input outside of a macro definition.
A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system). Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, such as an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer (when the read-only memory is integrated in the device, as in microcontrollers), or a data link using either an exact bit-by-bit copy of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record).
A high-level assembler is a program that provides language abstractions more often associated with high-level languages, such as advanced control structures (IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets.
A microassembler is a program that helps prepare a microprogram to control the low level operation of a computer.
A meta-assembler is "a program that accepts the syntactic and semantic description of an assembly language, and generates an assembler for that language", or that accepts an assembler source file along with such a description and assembles the source file in accordance with that description. "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers. Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series.
An inline assembler (or embedded assembler) is assembler code contained within a high-level language program. This is most often used in systems programs which need direct access to the hardware.
Key concepts
Assembler
An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines.
Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing, most of them are able to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible.
Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples.
There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in a x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
Number of passes
There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file.
One-pass assemblers process the source code once. For symbols used before they are defined, the assembler will emit "errata" after the eventual definition, telling the linker or the loader to patch the locations where the as yet undefined symbols had been used.
Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code.
In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more "no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage), had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.
S1      B       FWD
        ...
FWD     EQU     *
        ...
BKWD    EQU     *
        ...
S2      B       BKWD
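A toy illustration of the two-pass idea, written here in Python rather than assembly: the first pass builds the symbol table (assuming, for simplicity, a fixed 4-byte statement size), and the second pass can then resolve both FWD and BKWD. A real assembler's sizing rules are architecture specific, so treat this purely as a sketch:

source = [
    ("S1",   "B",   "FWD"),
    (None,   "...", None),
    ("FWD",  "EQU", "*"),
    (None,   "...", None),
    ("BKWD", "EQU", "*"),
    (None,   "...", None),
    ("S2",   "B",   "BKWD"),
]

# Pass 1: assign addresses and record every label in a symbol table.
symbols, location = {}, 0
for label, op, operand in source:
    if label is not None:
        symbols[label] = location
    if op != "EQU":                    # EQU defines a symbol without emitting code
        location += 4                  # simplistic fixed instruction size

# Pass 2: all operands, including the forward reference FWD, can now be resolved.
for label, op, operand in source:
    if op == "B":
        print(f"{label}: branch to {operand} at address {symbols[operand]}")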
High-level assemblers
More sophisticated high-level assemblers provide language abstractions such as:
High-level procedure/function declarations and invocations
Advanced control structures (IF/THEN/ELSE, SWITCH)
High-level abstract data types, including structures/records, unions, classes, and sets
Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines)
Object-oriented programming features such as classes, objects, abstraction, polymorphism, and inheritance
See Language design below for more details.
Assembly language
A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied", which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.
10110000 01100001
This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
B0 61
Here, B0 means "Move a copy of the following value into AL", and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.
MOV AL, 61h ; Load AL with 97 decimal (61 hex)
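A tiny sketch of what an assembler does with this instruction family: the opcode byte is B0h plus the register number, followed by the immediate byte. This is illustrative Python, not a real assembler:

# 8-bit register numbering used by the B0+reg encoding (AL=000, CL=001, DL=010, ...).
REG8 = {"AL": 0, "CL": 1, "DL": 2, "BL": 3, "AH": 4, "CH": 5, "DH": 6, "BH": 7}

def mov_reg8_imm8(reg, imm):
    """Encode MOV reg8, imm8 as two bytes: (B0h + register number) then the immediate."""
    return bytes([0xB0 + REG8[reg], imm & 0xFF])

print(mov_reg8_imm8("AL", 0x61).hex(" "))   # b0 61, i.e. MOV AL, 61h
print(mov_reg8_imm8("CL", 0x02).hex(" "))   # b1 02, i.e. MOV CL, 2h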
In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a. direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:
88 E0
The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL.
In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable.
Assembly languages are always designed so that this sort of lack of ambiguity is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".)
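For illustration, the following sketch (Intel-style syntax; the values are arbitrary) shows how the rule resolves the two readings: a leading digit marks a numeric constant, while a bare name is taken as a register.
MOV AL, 0Ah  ; 0Ah starts with a digit, so it is the hexadecimal constant ten
MOV AL, AH   ; AH without a leading digit names the register, so AH is copied into AL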
Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.
MOV AL, 1h ; Load AL with immediate value 1
MOV CL, 2h ; Load CL with immediate value 2
MOV DL, 3h ; Load DL with immediate value 3
The syntax of MOV can also be more complex as the following examples show.
MOV EAX, [EBX] ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX
MOV DS, DX ; Move the contents of DX into segment register DS
In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments.
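As a hedged sketch of the idea (NASM-style syntax; the macro name bge and the label used in the call are hypothetical, and real x86 already has a suitable conditional branch), a pseudoinstruction can be imitated with a macro whose single source statement expands into more than one machine instruction:
%macro bge 3           ; branch to %3 when %1 is greater than or equal to %2
        cmp %1, %2     ; compare the two operands
        jge %3         ; conditional jump; with CMP this is a two-instruction expansion
%endmacro

        bge eax, ebx, done  ; one source statement, two generated machine instructions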
Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
"Hello, world!" on x86 Linux
In 32-bit assembly language for Linux on an x86 processor, "Hello, world!" can be printed like this.
section .text
global _start
_start:
mov edx,len ; length of string, third argument to write()
mov ecx,msg ; address of string, second argument to write()
mov ebx,1 ; file descriptor (standard output), first argument to write()
mov eax,4 ; system call number for write()
int 0x80 ; system call trap
mov ebx,0 ; exit code, first argument to exit()
mov eax,1 ; system call number for exit()
int 0x80 ; system call trap
section .data
msg db 'Hello, world!', 0xa
len equ $ - msg
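Assuming the NASM assembler and the GNU linker are available on a Linux system, a program like this could plausibly be built and run with commands along the following lines (the file name hello.asm is hypothetical):
nasm -f elf32 hello.asm -o hello.o   # assemble to a 32-bit ELF object file
ld -m elf_i386 hello.o -o hello      # link into an executable
./hello                              # prints "Hello, world!"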
Language design
Basic elements
There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of 3 types of instruction statements that are used to define program operations:
Opcode mnemonics
Data definitions
Assembly directives
Opcode mnemonics and extended mnemonics
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0.
Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode that encodes the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions.
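The x86 case can be seen directly in the generated encoding; in 16-bit code both of the following assemble to the same single byte:
nop          ; encodes as the single byte 90h
xchg ax, ax  ; exchanging AX with itself also encodes as 90h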
Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes.
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
Data directives
There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
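A minimal sketch of such definitions in NASM-style syntax (the names are hypothetical):
global  counter                ; make the symbol visible to separately assembled programs
counter dd   0                 ; define a 32-bit doubleword initialized to zero
message db   "ready", 0        ; define a NUL-terminated byte string
buffer  resb 64                ; reserve 64 uninitialized bytes (normally placed in a .bss section)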
Assembly directives
Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values.
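For example, conditional assembly controlled by a parameter might look like the following NASM-style sketch (the DEBUG symbol is hypothetical):
%define DEBUG              ; comment this line out to assemble the release variant
%ifdef DEBUG
        mov eax, 1         ; extra code assembled only when DEBUG is defined
%endif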
Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
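In NASM, for instance, a symbol may stand for a literal value or for the result of a simple assembly-time computation (the names below are hypothetical):
BUFSIZE equ 256                ; a symbol bound to a literal constant
table:  times BUFSIZE db 0     ; reserve and zero-fill BUFSIZE bytes
TBL_LEN equ $ - table          ; the assembler computes the length at assembly time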
Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made.
Macros
Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s.
Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
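A small sketch of a parameterized macro in NASM-style syntax (the macro name and the flag label are hypothetical):
%macro store_byte 2        ; two parameters: destination label and value
        mov al, %2
        mov [%1], al
%endmacro

        store_byte flag, 7 ; each use expands into the two MOV instructions above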
Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today.
It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly", the former being in modern terms more word processing, text processing, than generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.
Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:
foo: macro a
load a*b
the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.
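In the generic notation used above, the defensive version of the macro simply parenthesizes the formal parameter, so that a call with the argument a-c expands to load (a-c)*b:
foo: macro a
load (a)*b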
Support for structured programming
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package.
Another design was A-Natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program):
include \masm32\include\masm32rt.inc ; use the Masm32 library
.code
demomain:
REPEAT 20
switch rv(nrandom, 9) ; generate a number between 0 and 8
mov ecx, 7
case 0
print "case 0"
case ecx ; in contrast to most other programming languages,
print "case 7" ; the Masm32 switch allows "variable cases"
case 1 .. 3
.if eax==1
print "case 1"
.elseif eax==2
print "case 2"
.else
print "cases 1 to 3: other"
.endif
case 4, 6, 8
print "cases 4, 6 or 8"
default
mov ebx, 19 ; print 20 stars
.Repeat
print "*"
dec ebx
.Until Sign? ; loop until the sign flag is set
endsw
print chr$(13, 10)
ENDM
exit
end demomain
Use of assembly language
When the stored-program computer was introduced programs were written in machine code, and loaded into the computer from punched paper tape or toggled directly into memory from console switches. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study.
In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminated much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming, but by the 1980s (the 1990s on microcomputers) their use had largely been supplanted by higher-level languages in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems (see below).
Numerous programs were written entirely in assembly language. The Burroughs MCP (1961) was the first operating system for a computer that was not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software developed by large corporations. COBOL, FORTRAN and some PL/I eventually displaced assembly language, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.
Assembly language was the primary development language for 8-bit home computers such as the Apple II, Atari 8-bit computers, ZX Spectrum, and Commodore 64, in part because interpreted BASIC on these systems offered neither maximum execution speed nor full use of the available hardware. Assembly language was also the default choice for programming 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System.
Key software for IBM PC compatibles such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet was written in assembly language. As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. In the 1990s, assembly language was used to maximise performance from systems such as the Sega Saturn, and as the primary language for arcade hardware using the TMS34010 integrated CPU/GPU such as Mortal Kombat and NBA Jam.
Current usage
There has been debate over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization.
In recent rankings, the TIOBE index of programming language popularity has placed assembly language as high as 11th, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or to optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite some counter-examples. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers and assembly programmers alike. Increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging, making raw code execution speed a non-issue for many programmers.
There are still certain computer programming domains in which the use of assembly programming is more common:
Writing code for systems that have limited high-level language options, such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of the 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures.
Code that must interact directly with the hardware, for example in device drivers and interrupt handlers.
In an embedded processor or DSP, high-repetition interrupts require the smallest possible number of cycles per interrupt, such as an interrupt that occurs 1,000 or 10,000 times a second.
Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition (a brief sketch follows this list).
Stand-alone executables that are required to execute without recourse to the run-time components or libraries associated with a high-level language, such as the firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, and security systems.
Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264).
Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor.
Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details.
Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks.
Video encoders and decoders such as rav1e (an encoder for AV1) and dav1d (the reference decoder for AV1) contain assembly to leverage AVX2 and ARM Neon instructions when available.
Modifying and extending legacy code written for IBM mainframe computers.
Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted.
Computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.
Instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum.
Situations where no high-level language exists, on a new or specialized processor for which no cross compiler is available.
Reverse engineering and modifying program files such as:
existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software.
Video games (also termed ROM hacking), which is possible via several methods. The most widely employed method is altering program code at the assembly language level.
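As a sketch of the rotation example mentioned in the list above (x86, Intel-style syntax):
rol eax, 7   ; rotate EAX left by 7 bits; standard C has no direct rotate operator
ror eax, 7   ; rotating right by the same amount restores the original value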
Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behaviour is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn the basic concepts, recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages.
Typical applications
Assembly language is typically used in a system's boot code, the low-level code that initializes and tests the system hardware prior to booting the operating system, and is often stored in ROM. (The BIOS on IBM-compatible PC systems and on CP/M is an example.)
Assembly language is often used for low-level code, for instance for operating system kernels, which cannot rely on the availability of pre-existing system calls and must indeed implement them for the particular processor architecture on which the system will be running.
Some compilers translate high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes.
Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so called inline assembly). Programs using such facilities can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.
Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form which is straightforward to translate into assembly language by a disassembler, but more difficult to translate into a higher-level language through a decompiler. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose. This technique is used by hackers to crack commercial software, and competitors to produce software with similar results from competing companies.
Assembly language is used to enhance speed of execution, especially in early personal computers with limited processing power and RAM.
Assemblers can be used to generate blocks of data, with no high-level language overhead, from formatted and commented source code, to be used by other code.
See also
Compiler
Comparison of assemblers
Disassembler
Hexadecimal
Instruction set architecture
Little man computer – an educational computer model with a base-10 assembly language
Nibble
Typed assembly language
Notes
References
Further reading
("An online book full of helpful ASM info, tutorials and code examples" by the ASM Community, archived at the internet archive.)
External links
Assembly Language and Learning Assembly Language pages on WikiWikiWeb
Assembly Language Programming Examples
Assembly language
Computer-related introductions in 1949
Embedded systems
Low-level programming languages
Programming language implementation
Programming languages created in 1949 | Assembly language | [
"Technology",
"Engineering"
] | 9,799 | [
"Embedded systems",
"Computer science",
"Computer engineering",
"Computer systems"
] |
1,372 | https://en.wikipedia.org/wiki/Amber | Amber is fossilized tree resin. It has been appreciated for its color and natural beauty since Neolithic times, and worked as a gemstone since antiquity. Amber is used in jewelry and as a healing agent in folk medicine.
There are five classes of amber, defined on the basis of their chemical constituents. Because it originates as a soft, sticky tree resin, amber sometimes contains animal and plant material as inclusions. Amber occurring in coal seams is also called resinite, and the term ambrite is applied to that found specifically within New Zealand coal seams.
Etymology
The English word amber derives from Arabic via Middle Latin ambar and Middle French ambre. The word referred to what is now known as ambergris (ambre gris or "gray amber"), a solid waxy substance derived from the sperm whale. The word, in its sense of "ambergris," was adopted in Middle English in the 14th century.
In the Romance languages, the sense of the word was extended to Baltic amber (fossil resin) from as early as the late 13th century. At first called white or yellow amber (ambre jaune), this meaning was adopted in English by the early 15th century. As the use of ambergris waned, this became the main sense of the word.
The two substances ("yellow amber" and "gray amber") conceivably became associated or confused because they both were found washed up on beaches. Ambergris is less dense than water and floats, whereas amber is too dense to float, though less dense than stone.
The classical names for amber, Ancient Greek ἤλεκτρον (ēlektron) and one of its Latin names, electrum*, are connected to a term ἠλέκτωρ (ēlektōr) meaning "beaming Sun". According to myth, when Phaëton, son of Helios (the Sun), was killed, his mourning sisters became poplar trees, and their tears became elektron, amber. The word elektron gave rise to the words electric, electricity, and their relatives because of amber's ability to bear a charge of static electricity. (*In Latin the name succinum was unambiguously used for amber, while electrum was also used for an alloy of gold and silver.)
Varietal names
A number of regional and varietal names have been applied to ambers over the centuries, including Allingite, Beckerite, Gedanite, Kochenite, Krantzite, and Stantienite.
History
Theophrastus discussed amber in the 4th century BCE, as did Pytheas, whose work "On the Ocean" is lost but was referenced by Pliny in his Natural History.
Earlier Pliny says that Pytheas refers to a large island—three days' sail from the Scythian coast and called Balcia by Xenophon of Lampsacus (author of a fanciful travel book in Greek)—as Basilia—a name generally equated with Abalus. Given the presence of amber, the island could have been Heligoland, Zealand, the shores of Gdańsk Bay, the Sambia Peninsula or the Curonian Lagoon, which were historically the richest sources of amber in northern Europe. There were well-established trade routes for amber connecting the Baltic with the Mediterranean (known as the "Amber Road"). Pliny states explicitly that the Germans exported amber to Pannonia, from where the Veneti distributed it onwards.
The ancient Italic peoples of southern Italy used to work amber; the National Archaeological Museum of Siritide (Museo Archeologico Nazionale della Siritide) at Policoro in the province of Matera (Basilicata) displays important surviving examples. It has been suggested that amber used in antiquity, as at Mycenae and in the prehistory of the Mediterranean, came from deposits in Sicily.
Pliny also cites the opinion of Nicias (c. 470–413 BCE) among the fanciful explanations according to which amber is "produced by the Sun"; he also cites opinions that are well aware of its origin in tree resin, citing the native Latin name of succinum (sūcinum, from sucus, "juice"), which he discusses in Book 37, section XI of Natural History.
He also states that amber is also found in Egypt and India, and he even refers to the electrostatic properties of amber, by saying that "in Syria the women make the whorls of their spindles of this substance, and give it the name of harpax [from ἁρπάζω, "to drag"] from the circumstance that it attracts leaves towards it, chaff, and the light fringe of tissues".
The Romans traded for amber from the shores of the southern Baltic at least as far back as the time of Nero.
Amber has a long history of use in China, with the first written record from 200 BCE. Early in the 19th century, the first reports of amber found in North America came from discoveries in New Jersey along Crosswicks Creek near Trenton, at Camden, and near Woodbury.
Composition and formation
Amber is heterogeneous in composition, but consists of several resinous bodies more or less soluble in alcohol, ether and chloroform, associated with an insoluble bituminous substance. Amber is a macromolecule formed by free radical polymerization of several precursors in the labdane family, for example, communic acid, communol, and biformene. These labdanes are diterpenes (C20H32) and trienes, equipping the organic skeleton with three alkene groups for polymerization. As amber matures over the years, more polymerization takes place as well as isomerization reactions, crosslinking and cyclization.
Most amber has a hardness between 2.0 and 2.5 on the Mohs scale, a refractive index of 1.5–1.6, a specific gravity between 1.06 and 1.10, and a melting point of 250–300 °C. When heated sufficiently, amber decomposes, yielding an oil of amber, and leaves a black residue which is known as "amber colophony", or "amber pitch"; when dissolved in oil of turpentine or in linseed oil this forms "amber varnish" or "amber lac".
Molecular polymerization, resulting from high pressures and temperatures produced by overlying sediment, transforms the resin first into copal. Sustained heat and pressure drives off terpenes and results in the formation of amber. For this to happen, the resin must be resistant to decay. Many trees produce resin, but in the majority of cases this deposit is broken down by physical and biological processes. Exposure to sunlight, rain, microorganisms, and extreme temperatures tends to disintegrate the resin. For the resin to survive long enough to become amber, it must be resistant to such forces or be produced under conditions that exclude them. Fossil resins from Europe fall into two categories, the Baltic ambers and another that resembles the Agathis group. Fossil resins from the Americas and Africa are closely related to the modern genus Hymenaea, while Baltic ambers are thought to be fossil resins from plants of the family Sciadopityaceae that once lived in north Europe.
The abnormal development of resin in living trees (succinosis) can result in the formation of amber. Impurities are quite often present, especially when the resin has dropped onto the ground, so the material may be useless except for varnish-making. Such impure amber is called firniss. Such inclusion of other substances can cause the amber to have an unexpected color. Pyrites may give a bluish color. Bony amber owes its cloudy opacity to numerous tiny bubbles inside the resin. However, so-called black amber is really a kind of jet. In darkly clouded and even opaque amber, inclusions can be imaged using high-energy, high-contrast, high-resolution X-rays.
Extraction and processing
Distribution and mining
Amber is globally distributed in or around all continents, mainly in rocks of Cretaceous age or younger. Historically, the coast west of Königsberg in Prussia was the world's leading source of amber. The first mentions of amber deposits there date back to the 12th century. Juodkrantė in Lithuania was established in the mid-19th century as a mining town of amber. About 90% of the world's extractable amber is still located in that area, which was transferred to the Russian Soviet Federative Socialist Republic of the USSR in 1946, becoming the Kaliningrad Oblast.
Pieces of amber torn from the seafloor are cast up by the waves and collected by hand, dredging, or diving. Elsewhere, amber is mined, both in open works and underground galleries. Then nodules of blue earth have to be removed and an opaque crust must be cleaned off, which can be done in revolving barrels containing sand and water. Erosion removes this crust from sea-worn amber. Dominican amber is mined through bell pitting, which is dangerous because of the risk of tunnel collapse.
An important source of amber is Kachin State in northern Myanmar, which has been a major source of amber in China for at least 1,800 years. Contemporary mining of this deposit has attracted attention for unsafe working conditions and its role in funding internal conflict in the country. Amber from the Rivne Oblast of Ukraine, referred to as Rivne amber, is mined illegally by organised crime groups, who deforest the surrounding areas and pump water into the sediments to extract the amber, causing severe environmental deterioration.
Treatment
The Vienna amber factories, which use pale amber to manufacture pipes and other smoking tools, turn it on a lathe and polish it with whitening and water or with rotten stone and oil. The final luster is given by polishing with flannel.
When gradually heated in an oil bath, amber "becomes soft and flexible. Two pieces of amber may be united by smearing the surfaces with linseed oil, heating them, and then pressing them together while hot. Cloudy amber may be clarified in an oil bath, as the oil fills the numerous pores that cause the turbidity. Small fragments, formerly thrown away or used only for varnish are now used on a large scale in the formation of "ambroid" or "pressed amber". The pieces are carefully heated with exclusion of air and then compressed into a uniform mass by intense hydraulic pressure, the softened amber being forced through holes in a metal plate. The product is extensively used for the production of cheap jewelry and articles for smoking. This pressed amber yields brilliant interference colors in polarized light."
Amber has often been imitated by other resins like copal and kauri gum, as well as by celluloid and even glass. Baltic amber is sometimes colored artificially but also called "true amber".
Appearance
Amber occurs in a range of different colors. As well as the usual yellow-orange-brown that is associated with the color "amber", amber can range from a whitish color through a pale lemon yellow, to brown and almost black. Other uncommon colors include red amber (sometimes known as "cherry amber"), green amber, and even blue amber, which is rare and highly sought after.
Yellow amber is a hard fossil resin from evergreen trees, and despite the name it can be translucent, yellow, orange, or brown colored. Known to the Iranians by the Pahlavi compound word kah-ruba (from kah "straw" plus rubay "attract, snatch", referring to its electrical properties), which entered Arabic as kahraba' or kahraba (which later became the Arabic word for electricity, كهرباء kahrabā), it too was called amber in Europe (Old French and Middle English ambre). Found along the southern shore of the Baltic Sea, yellow amber reached the Middle East and western Europe via trade. Its coastal acquisition may have been one reason yellow amber came to be designated by the same term as ambergris. Moreover, like ambergris, the resin could be burned as an incense. The resin's most popular use was, however, for ornamentation—easily cut and polished, it could be transformed into beautiful jewelry. Much of the most highly prized amber is transparent, in contrast to the very common cloudy amber and opaque amber. Opaque amber contains numerous minute bubbles. This kind of amber is known as "bony amber".
Although all Dominican amber is fluorescent, the rarest Dominican amber is blue amber. It turns blue in natural sunlight and any other partially or wholly ultraviolet light source. In long-wave UV light it has a very strong reflection, almost white. Only a small quantity is found per year, which makes it valuable and expensive.
Sometimes amber retains the form of drops and stalactites, just as it exuded from the ducts and receptacles of the injured trees. It is thought that, in addition to exuding onto the surface of the tree, amber resin also originally flowed into hollow cavities or cracks within trees, thereby leading to the development of large lumps of amber of irregular form.
Classification
Amber can be classified into several forms. Most fundamentally, there are two types of plant resin with the potential for fossilization. Terpenoids, produced by conifers and angiosperms, consist of ring structures formed of isoprene (C5H8) units. Phenolic resins are today only produced by angiosperms, and tend to serve functional uses. The extinct medullosans produced a third type of resin, which is often found as amber within their veins. The composition of resins is highly variable; each species produces a unique blend of chemicals which can be identified by the use of pyrolysis–gas chromatography–mass spectrometry. The overall chemical and structural composition is used to divide ambers into five classes. There is also a separate classification of amber gemstones, according to the way of production.
Class I
This class is by far the most abundant. It comprises labdatriene carboxylic acids such as communic or ozic acids. It is further split into three sub-classes. Classes Ia and Ib utilize regular labdanoid diterpenes (e.g. communic acid, communol, biformenes), while Ic uses enantio labdanoids (ozic acid, ozol, enantio biformenes).
Class Ia includes Succinite (= 'normal' Baltic amber) and Glessite. They have a communic acid base, and they also include much succinic acid. Baltic amber yields on dry distillation succinic acid, the proportion varying from about 3% to 8%, and being greatest in the pale opaque or bony varieties. The aromatic and irritating fumes emitted by burning amber are mainly from this acid. Baltic amber is distinguished by its yield of succinic acid, hence the name succinite. Succinite has a hardness between 2 and 3, which is greater than many other fossil resins. Its specific gravity varies from 1.05 to 1.10. It can be distinguished from other ambers via infrared spectroscopy through a specific carbonyl absorption peak. Infrared spectroscopy can detect the relative age of an amber sample. Succinic acid may not be an original component of amber but rather a degradation product of abietic acid.
Class Ib ambers are based on communic acid; however, they lack succinic acid.
Class Ic is mainly based on enantio-labdatrienonic acids, such as ozic and zanzibaric acids. Its most familiar representative is Dominican amber, which is mostly transparent and often contains a higher number of fossil inclusions. This has enabled the detailed reconstruction of the ecosystem of a long-vanished tropical forest. Resin from the extinct species Hymenaea protera is the source of Dominican amber and probably of most amber found in the tropics. It is not "succinite" but "retinite".
Class II
These ambers are formed from resins with a sesquiterpenoid base, such as cadinene.
Class III
These ambers are polystyrenes.
Class IV
Class IV is something of a catch-all: its ambers are not polymerized, but mainly consist of cedrene-based sesquiterpenoids.
Class V
Class V resins are considered to be produced by a pine or pine relative. They comprise a mixture of diterpinoid resins and n-alkyl compounds. Their main variety is Highgate copalite.
Geological record
The oldest amber recovered dates to the late Carboniferous period. Its chemical composition makes it difficult to match the amber to its producers – it is most similar to the resins produced by flowering plants; however, the first flowering plants appeared in the Early Cretaceous, about 200 million years after the oldest amber known to date, and they were not common until the Late Cretaceous. Amber becomes abundant long after the Carboniferous, in the Early Cretaceous, when it is found in association with insects. The oldest amber with arthropod inclusions comes from the Late Triassic (late Carnian 230 Ma) of Italy, where four microscopic (0.2–0.1 mm) mites, Triasacarus, Ampezzoa, Minyacarus and Cheirolepidoptus, and a poorly preserved nematoceran fly were found in millimetre-sized droplets of amber. The oldest amber with significant numbers of arthropod inclusions comes from Lebanon. This amber, referred to as Lebanese amber, is roughly 125–135 million years old and is considered of high scientific value, providing evidence of some of the oldest sampled ecosystems.
In Lebanon, more than 450 outcrops of Lower Cretaceous amber were discovered by Dany Azar, a Lebanese paleontologist and entomologist. Among these outcrops, 20 have yielded biological inclusions comprising the oldest representatives of several recent families of terrestrial arthropods. Even older Jurassic amber has been found recently in Lebanon as well. Many remarkable insects and spiders were recently discovered in the amber of Jordan including the oldest zorapterans, clerid beetles, umenocoleid roaches, and achiliid planthoppers.
Burmese amber from the Hukawng Valley in northern Myanmar is the only commercially exploited Cretaceous amber. Uranium–lead dating of zircon crystals associated with the deposit have given an estimated depositional age of approximately 99 million years ago. Over 1,300 species have been described from the amber, with over 300 in 2019 alone.
Baltic amber is found as irregular nodules in marine glauconitic sand, known as blue earth, occurring in Upper Eocene strata of Sambia in Prussia. It appears to have been partly derived from older Eocene deposits and it occurs also as a derivative phase in later formations, such as glacial drift. Relics of an abundant flora occur as inclusions trapped within the amber while the resin was yet fresh, suggesting relations with the flora of eastern Asia and the southern part of North America. Heinrich Göppert named the common amber-yielding pine of the Baltic forests Pinites succiniter, but as the wood does not seem to differ from that of the existing genus it has been also called Pinus succinifera. It is improbable that the production of amber was limited to a single species; and indeed a large number of conifers belonging to different genera are represented in the amber-flora.
Paleontological significance
Amber is a unique preservational mode, preserving otherwise unfossilizable parts of organisms; as such it is helpful in the reconstruction of ecosystems as well as organisms; the chemical composition of the resin, however, is of limited utility in reconstructing the phylogenetic affinity of the resin producer. Amber sometimes contains animals or plant matter that became caught in the resin as it was secreted. Insects, spiders and even their webs, annelids, frogs, crustaceans, bacteria and amoebae, marine microfossils, wood, flowers and fruit, hair, feathers and other small organisms have been recovered in Cretaceous ambers. Even an ammonite, Puzosia (Bhimaites), and marine gastropods have been found in Burmese amber.
The preservation of prehistoric organisms in amber forms a key plot point in Michael Crichton's 1990 novel Jurassic Park and the 1993 movie adaptation by Steven Spielberg. In the story, scientists are able to extract the preserved blood of dinosaurs from prehistoric mosquitoes trapped in amber, from which they genetically clone living dinosaurs. Scientifically this is as yet impossible, since no amber with fossilized mosquitoes has ever yielded preserved blood. Amber is, however, conducive to preserving DNA, since it dehydrates and thus stabilizes organisms trapped inside. One projection in 1999 estimated that DNA trapped in amber could last up to 100 million years, far beyond most estimates of around 1 million years in the most ideal conditions, although a later 2013 study was unable to extract DNA from insects trapped in much more recent Holocene copal. In 1938, 12-year-old David Attenborough (brother of Richard who played John Hammond in Jurassic Park) was given a piece of amber containing prehistoric creatures from his adoptive sister; it would be the focus of his 2004 BBC documentary The Amber Time Machine.
Use
Amber has been used since prehistory (Solutrean) in the manufacture of jewelry and ornaments, and also in folk medicine.
Jewelry
Amber has been used as jewelry since the Stone Age, from 13,000 years ago. Amber ornaments have been found in Mycenaean tombs and elsewhere across Europe. To this day it is used in the manufacture of smoking and glassblowing mouthpieces. Amber's place in culture and tradition lends it a tourism value; Palanga Amber Museum is dedicated to the fossilized resin.
Historical medicinal uses
Amber has long been used in folk medicine for its purported healing properties. Amber and its extracts were used from the time of Hippocrates in ancient Greece for a wide variety of treatments through the Middle Ages and up until the early twentieth century. Traditional Chinese medicine uses amber to "tranquilize the mind".
Amber necklaces are a traditional European remedy for colic or teething pain with purported analgesic properties of succinic acid, although there is no evidence that this is an effective remedy or delivery method. The American Academy of Pediatrics and the FDA have warned strongly against their use, as they present both a choking and a strangulation hazard.
Scent of amber and amber perfumery
In ancient China, it was customary to burn amber during large festivities. If amber is heated under the right conditions, oil of amber is produced, and in past times this was combined carefully with nitric acid to create "artificial musk" – a resin with a peculiar musky odor. Although when burned, amber does give off a characteristic "pinewood" fragrance, modern products, such as perfume, do not normally use actual amber because fossilized amber produces very little scent. In perfumery, scents referred to as "amber" are often created and patented to emulate the opulent golden warmth of the fossil.
The scent of amber was originally derived from emulating the scent of ambergris and/or the plant resin labdanum, but since sperm whales are endangered, the scent of amber is now largely derived from labdanum. The term "amber" is loosely used to describe a scent that is warm, musky, rich and honey-like, and also somewhat earthy. Benzoin is usually part of the recipe. Vanilla and cloves are sometimes used to enhance the aroma. "Amber" perfumes may be created using combinations of labdanum, benzoin resin, copal (a type of tree resin used in incense manufacture), vanilla, Dammara resin and/or synthetic materials.
In Arab Muslim tradition, popular scents include amber, jasmine, musk and oud (agarwood).
Imitation substances
Young resins used as imitations:
Kauri resin from Agathis australis trees in New Zealand.
The copals (subfossil resins): African and American (Colombian) copals from trees of the family Leguminosae (genus Hymenaea), the same genus that produced amber of the Dominican or Mexican type (Class I of fossil resins), and copals from Manila (Indonesia) and from New Zealand from trees of the genus Agathis (family Araucariaceae).
Other fossil resins: burmite in Burma, rumenite in Romania, and simetite in Sicily.
Other natural resins — cellulose or chitin, etc.
Plastics used as imitations:
Stained glass (inorganic material) and other ceramic materials
Celluloid
Cellulose nitrate (first obtained in 1833) — a product of treatment of cellulose with nitration mixture.
Acetylcellulose (not in use at present)
Galalith or "artificial horn" (condensation product of casein and formaldehyde), other trade names: Alladinite, Erinoid, Lactoid.
Casein — a conjugated protein forming from the casein precursor – caseinogen.
Resolane (phenolic resins or phenoplasts, not in use at present)
Bakelite resin (resol, phenolic resins); products from Africa are known under the misleading name "African amber".
Carbamide resins — melamine, formaldehyde and urea-formaldehyde resins.
Epoxy novolac (phenolic resins), unofficially called "antique amber", not in use at present
Polyesters with styrene (Polish amber imitations). For example, unsaturated polyester resins (polymals) are produced by the Chemical Industrial Works "Organika" in Sarzyna, Poland, and estomal is produced by the Laminopol firm. In "Polybern", or sticked amber, the artificial resin yields curled chips when worked, whereas genuine amber yields small scraps. "African amber" (a polyester; synacryl is probably another name for the same resin) is produced by the Reichhold firm, as is the Styresol trade mark, an alkyd resin (used in Russia; Reichhold, Inc. patent, 1948).
Polyethylene
Epoxy resins
Polystyrene and polystyrene-like polymers (vinyl polymers).
Acrylic-type resins (vinyl polymers), especially poly(methyl methacrylate), PMMA (trade names Plexiglas, Metaplex).
See also
Ammolite
Illyrian amber jewellery
List of types of amber
Petrified wood
Pearl
Poly(methyl methacrylate)
Precious coral
References
Bibliography
External links
Farlang: many full-text historical references on amber (Theophrastus, George Frederick Kunz), with a special section on Baltic amber
IPS Publications on amber inclusions: International Paleoentomological Society scientific articles on amber and its inclusions
Webmineral on Amber: physical properties and mineralogical information
Mindat Amber: image and locality information on amber
NY Times: 40-million-year-old extinct bee in Dominican amber
Fossil resins
Amorphous solids
Traditional medicine | Amber | [
"Physics"
] | 5,680 | [
"Amorphous solids",
"Unsolved problems in physics",
"Amber"
] |
1,394 | https://en.wikipedia.org/wiki/Algol | Algol , designated Beta Persei (β Persei, abbreviated Beta Per, β Per), known colloquially as the Demon Star, is a bright multiple star in the constellation of Perseus and one of the first non-nova variable stars to be discovered.
Algol is a three-star system, consisting of Beta Persei Aa1, Aa2, and Ab – in which the hot luminous primary β Persei Aa1 and the larger, but cooler and fainter, β Persei Aa2 regularly pass in front of each other, causing eclipses. Thus Algol's magnitude is usually near-constant at 2.1, but regularly dips to 3.4 every 2.86 days during the roughly 10-hour-long partial eclipses. The secondary eclipse when the brighter primary star occults the fainter secondary is very shallow and can only be detected photoelectrically.
Algol gives its name to its class of eclipsing variable, known as Algol variables.
Observation history
An ancient Egyptian calendar of lucky and unlucky days composed some 3,200 years ago is said to be the oldest historical documentation of the discovery of Algol.
The association of Algol with a demon-like creature (Gorgon in the Greek tradition, ghoul in the Arabic tradition) suggests that its variability was known long before the 17th century, but there is still no indisputable evidence for this. The Arabic astronomer al-Sufi said nothing about any variability of the star in his Book of Fixed Stars published c.964.
The variability of Algol was noted in 1667 by Italian astronomer Geminiano Montanari, but the periodic nature of its variations in brightness was not recognized until more than a century later, when the British amateur astronomer John Goodricke also proposed a mechanism for the star's variability. In May 1783, he presented his findings to the Royal Society, suggesting that the periodic variability was caused by a dark body passing in front of the star (or else that the star itself has a darker region that is periodically turned toward the Earth). For his report he was awarded the Copley Medal.
In 1881, the Harvard astronomer Edward Charles Pickering presented evidence that Algol was actually an eclipsing binary. This was confirmed a few years later, in 1889, when the Potsdam astronomer Hermann Carl Vogel found periodic doppler shifts in the spectrum of Algol, inferring variations in the radial velocity of this binary system. Thus, Algol became one of the first known spectroscopic binaries. Joel Stebbins at the University of Illinois Observatory used an early selenium cell photometer to produce the first-ever photoelectric study of a variable star. The light curve revealed the second minimum and the reflection effect between the two stars. Some difficulties in explaining the observed spectroscopic features led to the conjecture that a third star may be present in the system; four decades later this conjecture was found to be correct.
System
Algol is a multiple-star system with three confirmed and two suspected stellar components. From the point of view of the Earth, Algol Aa1 and Algol Aa2 form an eclipsing binary because their orbital plane contains the line of sight to the Earth. The eclipsing binary pair is separated by only 0.062 astronomical units (au) from each other, whereas the third star in the system (Algol Ab) is at an average distance of 2.69 au from the pair, and the mutual orbital period of the trio is 681 Earth days. The total mass of the system is about 5.8 solar masses, and the mass ratios of Aa1, Aa2, and Ab are about 4.5 to 1 to 2.
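As a rough check on the figures above, the individual masses follow from the quoted total and mass ratios. The short sketch below simply splits the total in the stated proportions; the total of about 5.8 solar masses and the 4.5 : 1 : 2 ratio are the rounded values quoted here, so the results are approximate.

```python
# Split the quoted total mass of the Algol system in the stated ratio 4.5 : 1 : 2.
total_mass = 5.8                                   # solar masses (approximate)
ratios = {"Aa1": 4.5, "Aa2": 1.0, "Ab": 2.0}
parts = sum(ratios.values())                       # 7.5 parts in total

for star, r in ratios.items():
    print(f"{star}: ~{total_mass * r / parts:.2f} solar masses")
# Aa1: ~3.48, Aa2: ~0.77, Ab: ~1.55 (solar masses, approximate)
```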
The three components of the bright triple star used to be, and still sometimes are, referred to as β Per A, B, and C. The Washington Double Star Catalog lists them as Aa1, Aa2, and Ab, with two very faint stars B and C about one arcmin distant. A further five faint stars are also listed as companions.
The close pair consists of a B8 main sequence star and a much less massive K0 subgiant, which is highly distorted by the more massive star. These two orbit every 2.9 days and undergo the eclipses that cause Algol to vary in brightness. The third star orbits these two every 680 days and is an A or F-type main sequence star. It has been classified as an Am star, but this is now considered doubtful.
Studies of Algol led to the Algol paradox in the theory of stellar evolution: although components of a binary star form at the same time, and massive stars evolve much faster than the less massive stars, the more massive component Algol Aa1 is still in the main sequence, but the less massive Algol Aa2 is a subgiant star at a later evolutionary stage. The paradox can be solved by mass transfer: when the more massive star became a subgiant, it filled its Roche lobe, and most of the mass was transferred to the other star, which is still in the main sequence. In some binaries similar to Algol, a gas flow can be seen. The gas flow between the primary and secondary stars in Algol has been imaged using Doppler Tomography.
This system also exhibits x-ray and radio wave flares. The x-ray flares are thought to be caused by the magnetic fields of the A and B components interacting with the mass transfer. The radio-wave flares might be created by magnetic cycles similar to those of sunspots, but because the magnetic fields of these stars are up to ten times stronger than the field of the Sun, these radio flares are more powerful and more persistent. The secondary component was identified as the radio emitting source in Algol using Very-long-baseline interferometry by Lestrade and co-authors.
Magnetic activity cycles in the chromospherically active secondary component induce changes in its radius of gyration that have been linked to recurrent orbital period variations via the Applegate mechanism. Mass transfer between the components is small in the Algol system but could be a significant source of period change in other Algol-type binaries.
The distance to Algol has been measured using very-long baseline interferometry, giving a value of 94 light-years. About 7.3 million years ago it passed within 9.8 light-years of the Solar System and its apparent magnitude was about −2.5, which is considerably brighter than the star Sirius is today. Because the total mass of the Algol system is about 5.8 solar masses, at the closest approach this might have given enough gravity to perturb the Oort cloud of the Solar System somewhat and hence increase the number of comets entering the inner Solar System. However, the actual increase in net cometary collisions is thought to have been quite small.
Names
Beta Persei is the star's Bayer designation.
The official name Algol
The name Algol derives from Arabic raʾs al-ghūl: head (raʾs) of the ogre (al-ghūl) (see "ghoul"). The English name Demon Star was taken from the Arabic name. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Algol for this star. It is so entered on the IAU Catalog of Star Names.
Ghost and demon star
Algol was called Rōsh ha Sāṭān or "Satan's Head" in Hebrew folklore, as stated by Edmund Chilmead, who called it "Divels head" or Rosch hassatan. A Latin name for Algol from the 16th century was Caput Larvae or "the Spectre's Head". Hipparchus and Pliny made this a separate, though connected, constellation.
First star of Medusa's head
Earlier, the constellation Perseus was known as "Perseus and Medusa's Head", in which an asterism representing the severed head of Medusa was already recognized in ancient Rome. Medusa is a Gorgon, so the star is also called Gorgonea Prima, meaning the first star of the Gorgon.
Chinese names
In Chinese, (), meaning Mausoleum, refers to an asterism consisting of β Persei, 9 Persei, τ Persei, ι Persei, κ Persei, ρ Persei, 16 Persei and 12 Persei. Consequently, the Chinese name for β Persei itself is (, English: The Fifth Star of Mausoleum.). According to R.H. Allen the star bore the grim name of Tseih She (), meaning "Piled up Corpses" but this appears to be a misidentification, and Dié Shī is correctly π Persei, which is inside the Mausoleum.
Observing Algol
The Algol system usually has an apparent magnitude of 2.1, similar to those of Mirfak (α Persei) at 1.9 and Almach (γ Andromedae) at 2.2, with which it forms a right triangle. During eclipses it dims to 3.4, making it as faint as nearby ρ Persei at 3.3.
Listed are the first eclipse dates and times of each month, with all times in UT. β Persei Aa2 eclipses β Persei Aa1 every 2.867321 days (2 days 20 hours 49 min). To determine subsequent eclipses, add this interval to each listed date and time. For example, the Jan 2 eclipse at 8h will result in consecutive eclipse times on Jan 5 at 5h, Jan 8 at 1h, Jan 10 at 22h, and so on (all times approximate).
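The stepping arithmetic described above is easy to automate. The sketch below is only illustrative: it assumes a hypothetical reference minimum (the "Jan 2 at 8h" example from the text, pinned to an arbitrary year) and the 2.867321-day period quoted above; an actual prediction would start from a published epoch of minimum.

```python
from datetime import datetime, timedelta

PERIOD = timedelta(days=2.867321)   # eclipse period quoted above (2 d 20 h 49 min)

def eclipse_minima(epoch: datetime, count: int) -> list[datetime]:
    """Return `count` successive eclipse minima, starting from a reference minimum."""
    return [epoch + n * PERIOD for n in range(count)]

# Hypothetical reference minimum, used only to illustrate the stepping.
epoch = datetime(2024, 1, 2, 8, 0)
for t in eclipse_minima(epoch, 4):
    print(t.strftime("%b %d %H:%M UT"))
# Jan 02 08:00, Jan 05 04:48, Jan 08 01:37, Jan 10 22:26 UT (approximate)
```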
Cultural significance
Historically, the star has received a strong association with bloody violence across a wide variety of cultures. In the Tetrabiblos, the 2nd-century astrological text of the Alexandrian astronomer Ptolemy, Algol is referred to as "the Gorgon of Perseus" and associated with death by decapitation: a theme which mirrors the myth of the hero Perseus's victory over the snake-haired Gorgon Medusa. In the astrology of fixed stars, Algol is considered one of the unluckiest stars in the sky, and was listed as one of the 15 Behenian stars.
See also
Jaana Toivari-Viitala, egyptologist who contributed to understanding Ancient Egypt and the star
References
External links
Algol variables
Persei, Beta
B-type main-sequence stars
Persei, 26
019356
014576
K-type subgiants
Perseus (constellation)
Algol
Triple star systems
Astronomical objects known since antiquity
0936
BD+40 0673
Am stars
F-type main-sequence stars | Algol | [
"Astronomy"
] | 2,268 | [
"Perseus (constellation)",
"Constellations"
] |
1,400 | https://en.wikipedia.org/wiki/Anno%20Domini | The terms (AD) and before Christ (BC) are used when designating years in the Gregorian and Julian calendars. The term is Medieval Latin and means "in the year of the Lord" but is often presented using "our Lord" instead of "the Lord", taken from the full original phrase "anno Domini nostri Jesu Christi", which translates to "in the year of our Lord Jesus Christ". The form "BC" is specific to English, and equivalent abbreviations are used in other languages: the Latin form, rarely used in English, is (ACN) or (AC).
This calendar era takes as its epoch the traditionally reckoned year of the conception or birth of Jesus. Years AD are counted forward since that epoch and years BC are counted backward from the epoch. There is no year zero in this scheme; thus the year AD 1 immediately follows the year 1 BC. This dating system was devised in 525 by Dionysius Exiguus but was not widely used until the 9th century. (Modern scholars believe that the actual date of birth of Jesus was about 5 BC.)
Terminology that is viewed by some as being more neutral and inclusive of non-Christian people is to call this the Common Era (abbreviated as CE), with the preceding years referred to as Before the Common Era (BCE). Astronomical year numbering and ISO 8601 avoid words or abbreviations related to Christianity, but use the same numbers for AD years (but not for BC years in the case of astronomical years; e.g., 1 BC is year 0, 45 BC is year −44).
Usage
Traditionally, English follows Latin usage by placing the "AD" abbreviation before the year number, though it is also found after the year. In contrast, "BC" is always placed after the year number (for example: 70 BC but AD 70), which preserves syntactic order. The abbreviation "AD" is also widely used after the number of a century or millennium, as in "fourth century AD" or "second millennium AD" (although conservative usage formerly rejected such expressions). Since "BC" is the English abbreviation for Before Christ, it is sometimes incorrectly concluded that AD means After Death (i.e., after the death of Jesus), which would mean that the approximately 33 years commonly associated with the life of Jesus would be included in neither the BC nor the AD time scales.
History
The anno Domini dating system was devised in 525 by Dionysius Exiguus to enumerate years in his Easter table. His system was to replace the Diocletian era that had been used in older Easter tables, as he did not wish to continue the memory of a tyrant who persecuted Christians. The last year of the old table, Diocletian Anno Martyrium 247, was immediately followed by the first year of his table, anno Domini 532. When Dionysius devised his table, Julian calendar years were identified by naming the consuls who held office that year— Dionysius himself stated that the "present year" was "the consulship of Probus Junior", which was 525 years "since the incarnation of our Lord Jesus Christ". Thus, Dionysius implied that Jesus' incarnation occurred 525 years earlier, without stating the specific year during which his birth or conception occurred. "However, nowhere in his exposition of his table does Dionysius relate his epoch to any other dating system, whether consulate, Olympiad, year of the world, or regnal year of Augustus; much less does he explain or justify the underlying date."
Bonnie J. Blackburn and Leofranc Holford-Strevens briefly present arguments for 2 BC, 1 BC, or AD 1 as the year Dionysius intended for the Nativity or incarnation. Among the sources of confusion are:
In modern times, incarnation is synonymous with the conception, but some ancient writers, such as Bede, considered incarnation to be synonymous with the Nativity.
The civil or consular year began on 1 January, but the Diocletian year began on 29 August (30 August in the year before a Julian leap year).
There were inaccuracies in the lists of consuls.
There were confused summations of emperors' regnal years.
It is not known how Dionysius established the year of Jesus's birth. One major theory is that Dionysius based his calculation on the Gospel of Luke, which states that Jesus was "about thirty years old" shortly after "the fifteenth year of the reign of Tiberius Caesar", and hence subtracted thirty years from that date, or that Dionysius counted back 532 years from the first year of his new table. This method was probably the one used by ancient historians such as Tertullian, Eusebius or Epiphanius, all of whom agree that Jesus was born in 2 BC, probably following this statement of Jesus' age (i.e. subtracting thirty years from AD 29). Alternatively, Dionysius may have used an earlier unknown source. The Chronograph of 354 states that Jesus was born during the consulship of Caesar and Paullus (AD 1), but the logic behind this is also unknown.
It has also been speculated by Georges Declercq that Dionysius' desire to replace Diocletian years with a calendar based on the incarnation of Christ was intended to prevent people from believing the imminent end of the world. At the time, it was believed by some that the resurrection of the dead and end of the world would occur 500 years after the birth of Jesus. The old Anno Mundi calendar theoretically commenced with the creation of the world based on information in the Old Testament. It was believed that, based on the Anno Mundi calendar, Jesus was born in the year 5500 (5500 years after the world was created) with the year 6000 of the Anno Mundi calendar marking the end of the world. Anno Mundi 6000 (approximately AD 500) was thus equated with the end of the world but this date had already passed in the time of Dionysius.
The "Historia Brittonum" attributed to Nennius written in the 9th century makes extensive use of the Anno Passionis (AP) dating system which was in common use as well as the newer AD dating system. The AP dating system took its start from 'The Year of The Passion'. It is generally accepted by experts there is a 27-year difference between AP and AD reference.
The date of birth of Jesus of Nazareth is not stated in the gospels or in any secular text, but most scholars assume a date of birth between 6 BC and 4 BC. The historical evidence is too fragmentary to allow a definitive dating, but the date is estimated through two different approaches—one by analyzing references to known historical events mentioned in the Nativity accounts in the Gospels of Luke and Matthew and the second by working backwards from the estimation of the start of the ministry of Jesus.
Popularization
The Anglo-Saxon historian Bede, who was familiar with the work of Dionysius Exiguus, used anno Domini dating in his Ecclesiastical History of the English People, which he completed in AD 731. In the History he also used the Latin phrase ante [...] incarnationis dominicae tempus anno sexagesimo ("in the sixtieth year before the time of the Lord's incarnation"), which is equivalent to the English "before Christ", to identify years before the first year of this era. Both Dionysius and Bede regarded anno Domini as beginning at the incarnation of Jesus Christ, but "the distinction between Incarnation and Nativity was not drawn until the late 9th century, when in some places the Incarnation epoch was identified with Christ's conception, i. e., the Annunciation on March 25" ("Annunciation style" dating).
On the continent of Europe, anno Domini was introduced as the era of choice of the Carolingian Renaissance by the English cleric and scholar Alcuin in the late eighth century. Its endorsement by Emperor Charlemagne and his successors, who popularized the use of the epoch and spread it throughout the Carolingian Empire, ultimately lies at the core of the system's prevalence. According to the Catholic Encyclopedia, popes continued to date documents according to regnal years for some time, but usage of AD gradually became more common in Catholic countries from the 11th to the 14th centuries. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius. Eastern Orthodox countries only began to adopt AD instead of the Byzantine calendar in 1700 when Russia did so, with others adopting it in the 19th and 20th centuries.
Although anno Domini was in widespread use by the 9th century, the term "Before Christ" (or its equivalent) did not become common until much later. Bede used the expression "anno [...] ante incarnationem Dominicam" (in the year before the incarnation of the Lord) twice. "Anno ante Christi nativitatem" (in the year before the birth of Christ) is found in 1474 in a work by a German monk. In 1627, the French Jesuit theologian Denis Pétau (Dionysius Petavius in Latin), with his work De doctrina temporum, popularized the usage ante Christum (Latin for "Before Christ") to mark years prior to AD.
New year
When the reckoning from Jesus' incarnation began replacing the previous dating systems in western Europe, various people chose different Christian feast days to begin the year: Christmas, Annunciation, or Easter. Thus, depending on the time and place, the year number changed on different days in the year, which created slightly different styles in chronology:
From 25 March 753 AUC (1 BC), i.e., notionally from the incarnation of Jesus. That first "Annunciation style" appeared in Arles at the end of the 9th century then spread to Burgundy and northern Italy. It was not commonly used and was called calculus pisanus since it was adopted in Pisa and survived there until 1750.
From 25 December 753 AUC (1 BC), i.e., notionally from the birth of Jesus. It was called "Nativity style" and had been spread by Bede together with the anno Domini in the early Middle Ages. That reckoning of the Year of Grace from Christmas was used in France, England and most of western Europe (except Spain) until the 12th century (when it was replaced by Annunciation style) and in Germany until the second quarter of the 13th century.
From 25 March 754 AUC (AD 1). That second "Annunciation style" may have originated in Fleury Abbey in the early 11th century, but it was spread by the Cistercians. Florence adopted that style in opposition to that of Pisa, so it got the name of calculus florentinus. It soon spread in France and also in England where it became common in the late 12th century and lasted until 1752.
From Easter. That mos gallicanus (French custom) bound to a moveable feast was introduced in France by king Philip Augustus (r. 1180–1223), maybe to establish a new style in the provinces reconquered from England. However, it never spread beyond the ruling élite.
With these various styles, the same day could, in some cases, be dated in 1099, 1100 or 1101.
Other Christian and European eras
During the first six centuries of what would come to be known as the Christian era, European countries used various systems to count years. Systems in use included consular dating, imperial regnal year dating, and Creation dating.
Although the last non-imperial consul, Basilius, was appointed in 541 by Emperor Justinian I, later emperors through to Constans II (641–668) were appointed consuls on the first of January after their accession. All of these emperors, except Justinian, used imperial post-consular years for the years of their reign, along with their regnal years. Long unused, this practice was not formally abolished until Novell XCIV of the law code of Leo VI did so in 888.
Another calculation had been developed by the Alexandrian monk Annianus around the year AD 400, placing the Annunciation on 25 March AD 9 (Julian)—eight to ten years after the date that Dionysius was to imply. Although this incarnation was popular during the early centuries of the Byzantine Empire, years numbered from it, an Era of Incarnation, were exclusively used and are still used in Ethiopia. This accounts for the seven- or eight-year discrepancy between the Gregorian and Ethiopian calendars.
Byzantine chroniclers like Maximus the Confessor, George Syncellus, and Theophanes dated their years from Annianus' creation of the world. This era, called Anno Mundi, "year of the world" (abbreviated AM), by modern scholars, began its first year on 25 March 5492 BC. Later Byzantine chroniclers used Anno Mundi years from 1 September 5509 BC, the Byzantine Era. No single Anno Mundi epoch was dominant throughout the Christian world. Eusebius of Caesarea in his Chronicle used an era beginning with the birth of Abraham, dated in 2016 BC (AD 1 = 2017 Anno Abrahami).
Spain and Portugal continued to date by the Spanish Era (also called Era of the Caesars), which began counting from 38 BC, well into the Middle Ages. In 1422, Portugal became the last Catholic country to adopt the anno Domini system.
The Era of Martyrs, which numbered years from the accession of Diocletian in 284, who launched the most severe persecution of Christians, was used by the Church of Alexandria and is still officially used by the Coptic Orthodox and Coptic Catholic churches. It was also used by the Ethiopian and Eritrean churches. Another system was to date from the crucifixion of Jesus, which as early as Hippolytus and Tertullian was believed to have occurred in the consulate of the Gemini (AD 29), which appears in some medieval manuscripts.
CE and BCE
Alternative names for the anno Domini era include vulgaris aerae (found in 1615, in Latin), "Vulgar Era" (in English, as early as 1635), "Christian Era" (in English, in 1652), "Common Era" (in English, 1708), and "Current Era".
Since 1856, the alternative abbreviations CE and BCE (sometimes written C.E. and B.C.E.) are sometimes used in place of AD and BC.
The "Common/Current Era" ("CE") terminology is often preferred by those who desire a term that does not explicitly make religious references but still uses the same epoch as the anno Domini notation.
For example, Cunningham and Starr (1998) write that "B.C.E./C.E. […] do not presuppose faith in Christ and hence are more appropriate for interfaith dialog than the conventional B.C./A.D." Upon its foundation, the Republic of China adopted the Minguo Era but used the Western calendar for international purposes. The translated term was (). Later, in 1949, the People's Republic of China adopted () for all purposes domestic and foreign.
No year zero: start and end of a century
In the AD year numbering system, whether applied to the Julian or Gregorian calendars, AD 1 is immediately preceded by 1 BC, with nothing in between them (there was no year zero). There are debates as to whether a new decade, century, or millennium begins on a year ending in zero or one.
For computational reasons, astronomical year numbering and the ISO 8601 standard designate years so that AD 1 = year 1, 1 BC = year 0, 2 BC = year −1, etc. In common usage, ancient dates are expressed in the Julian calendar, but ISO 8601 uses the Gregorian calendar and astronomers may use a variety of time scales depending on the application. Thus dates using the year 0 or negative years may require further investigation before being converted to BC or AD.
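The offset between BC/AD labels and astronomical year numbers is simple to express in code. The sketch below handles only the numbering convention described above (it says nothing about Julian versus Gregorian dates); the function names are illustrative.

```python
def to_astronomical(year: int, era: str) -> int:
    """AD n -> n; n BC -> -(n - 1), so 1 BC -> 0 and 45 BC -> -44."""
    era = era.upper()
    if era == "AD":
        return year
    if era == "BC":
        return -(year - 1)
    raise ValueError("era must be 'AD' or 'BC'")

def from_astronomical(year: int) -> str:
    """Inverse conversion back to BC/AD notation (which has no year zero)."""
    return f"AD {year}" if year >= 1 else f"{1 - year} BC"

assert to_astronomical(1, "BC") == 0
assert to_astronomical(45, "BC") == -44
assert from_astronomical(0) == "1 BC"
```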
See also
Before Present
Holocene calendar
Notes
References
Citations
Sources
Bede. (731). Historiam ecclesiasticam gentis Anglorum . Retrieved 2007-12-07.
Declercq, G. "Dionysius Exiguus and the Introduction of the Christian Era". Sacris Erudiri 41 (2002): 165–246. An annotated version of part of Anno Domini.
Doggett. (1992). "Calendars" (Ch. 12), in P. Kenneth Seidelmann (Ed.) Explanatory supplement to the astronomical almanac. Sausalito, CA: University Science Books. .
Patrick, J. (1908). "General Chronology" . In The Catholic Encyclopedia. New York: Robert Appleton Company. Retrieved 2008-07-16 from New Advent: Catholic Encyclopedia: General Chronology
External links
Calendar Converter
6th-century Christianity
Calendar eras
Christian terminology
Chronology
Latin religious words and phrases
Timelines of Christianity | Anno Domini | [
"Physics"
] | 3,579 | [
"Spacetime",
"Chronology",
"Physical quantities",
"Time"
] |
1,412 | https://en.wikipedia.org/wiki/Amine | In chemistry, amines (, ) are compounds and functional groups that contain a basic nitrogen atom with a lone pair. Formally, amines are derivatives of ammonia ((in which the bond angle between the nitrogen and hydrogen is 170°), wherein one or more hydrogen atoms have been replaced by a substituent such as an alkyl or aryl group (these may respectively be called alkylamines and arylamines; amines in which both types of substituent are attached to one nitrogen atom may be called alkylarylamines). Important amines include amino acids, biogenic amines, trimethylamine, and aniline. Inorganic derivatives of ammonia are also called amines, such as monochloramine ().
The substituent is called an amino group.
The chemical notation for amines contains the letter "R", where "R" is not an element, but an "R-group", which in amines could be a single hydrogen or carbon atom, or could be a hydrocarbon chain.
Compounds with a nitrogen atom attached to a carbonyl group, thus having the structure R−C(=O)−NR′2, are called amides and have different chemical properties from amines.
Classification of amines
Amines can be classified according to the nature and number of substituents on nitrogen. Aliphatic amines contain only H and alkyl substituents. Aromatic amines have the nitrogen atom connected to an aromatic ring.
Amines, alkyl and aryl alike, are organized into three subcategories (see table) based on the number of carbon atoms adjacent to the nitrogen (how many hydrogen atoms of the ammonia molecule are replaced by hydrocarbon groups):
Primary (1°) amines—Primary amines arise when one of three hydrogen atoms in ammonia is replaced by an alkyl or aromatic group. Important primary alkyl amines include methylamine, most amino acids, and the buffering agent tris, while primary aromatic amines include aniline.
Secondary (2°) amines—Secondary amines have two organic substituents (alkyl, aryl or both) bound to the nitrogen together with one hydrogen. Important representatives include dimethylamine, while an example of an aromatic amine would be diphenylamine.
Tertiary (3°) amines—In tertiary amines, nitrogen has three organic substituents. Examples include trimethylamine, which has a distinctively fishy smell, and EDTA.
A fourth subcategory is determined by the connectivity of the substituents attached to the nitrogen:
Cyclic amines—Cyclic amines are either secondary or tertiary amines. Examples of cyclic amines include the 3-membered ring aziridine and the six-membered ring piperidine. N-methylpiperidine and N-phenylpiperidine are examples of cyclic tertiary amines.
It is also possible to have four organic substituents on the nitrogen. These species are not amines but are quaternary ammonium cations and have a charged nitrogen center. Quaternary ammonium salts exist with many kinds of anions.
Naming conventions
Amines are named in several ways. Typically, the compound is given the prefix "amino-" or the suffix "-amine". The prefix "N-" shows substitution on the nitrogen atom. An organic compound with multiple amino groups is called a diamine, triamine, tetraamine and so forth.
Lower amines are named with the suffix -amine.
Higher amines have the prefix amino as a functional group. IUPAC however does not recommend this convention, but prefers the alkanamine form, e.g. butan-2-amine.
Physical properties
Hydrogen bonding significantly influences the properties of primary and secondary amines. For example, methyl and ethyl amines are gases under standard conditions, whereas the corresponding methyl and ethyl alcohols are liquids. Amines possess a characteristic ammonia smell, liquid amines have a distinctive "fishy" and foul smell.
The nitrogen atom features a lone electron pair that can bind H+ to form an ammonium ion R3NH+. The lone electron pair is represented in this article by two dots above or next to the N. The water solubility of simple amines is enhanced by hydrogen bonding involving these lone electron pairs. Typically salts of ammonium compounds exhibit the following order of solubility in water: primary ammonium (RNH3+) > secondary ammonium (R2NH2+) > tertiary ammonium (R3NH+). Small aliphatic amines display significant solubility in many solvents, whereas those with large substituents are lipophilic. Aromatic amines, such as aniline, have their lone pair electrons conjugated into the benzene ring, thus their tendency to engage in hydrogen bonding is diminished. Their boiling points are high and their solubility in water is low.
Spectroscopic identification
Typically the presence of an amine functional group is deduced by a combination of techniques, including mass spectrometry as well as NMR and IR spectroscopies. 1H NMR signals for amines disappear upon treatment of the sample with D2O. In their infrared spectrum primary amines exhibit two N-H bands, whereas secondary amines exhibit only one. In their IR spectra, primary and secondary amines exhibit distinctive N-H stretching bands near 3300 cm−1. Somewhat less distinctive are the bands appearing below 1600 cm−1, which are weaker and overlap with C-C and C-H modes. For the case of propyl amine, the H-N-H scissor mode appears near 1600 cm−1, the C-N stretch near 1000 cm−1, and the R2N-H bend near 810 cm−1.
Structure
Alkyl amines
Alkyl amines characteristically feature tetrahedral nitrogen centers. C-N-C and C-N-H angles approach the idealized angle of 109°. C-N distances are slightly shorter than C-C distances. The energy barrier for the nitrogen inversion of the stereocenter is about 7 kcal/mol for a trialkylamine. The interconversion has been compared to the inversion of an open umbrella into a strong wind.
Amines of the type NHRR' and NRR′R″ are chiral: the nitrogen center bears four substituents counting the lone pair. Because of the low barrier to inversion, amines of the type NHRR' cannot be obtained in optical purity. For chiral tertiary amines, NRR′R″ can only be resolved when the R, R', and R″ groups are constrained in cyclic structures such as N-substituted aziridines (quaternary ammonium salts are resolvable).
Aromatic amines
In aromatic amines ("anilines"), nitrogen is often nearly planar owing to conjugation of the lone pair with the aryl substituent. The C-N distance is correspondingly shorter. In aniline, the C-N distance is the same as the C-C distances.
Basicity
Like ammonia, amines are bases. Compared to alkali metal hydroxides, amines are weaker.
The basicity of amines depends on:
The electronic properties of the substituents (alkyl groups enhance the basicity, aryl groups diminish it).
The degree of solvation of the protonated amine, which includes steric hindrance by the groups on nitrogen.
Electronic effects
Owing to inductive effects, the basicity of an amine might be expected to increase with the number of alkyl groups on the amine. Correlations are complicated owing to the effects of solvation which are opposite the trends for inductive effects. Solvation effects also dominate the basicity of aromatic amines (anilines). For anilines, the lone pair of electrons on nitrogen delocalizes into the ring, resulting in decreased basicity. Substituents on the aromatic ring, and their positions relative to the amino group, also affect basicity as seen in the table.
Solvation effects
Solvation significantly affects the basicity of amines. N-H groups strongly interact with water, especially in ammonium ions. Consequently, the basicity of ammonia is enhanced by a factor of about 10¹¹ by solvation. The intrinsic basicity of amines, i.e. the situation where solvation is unimportant, has been evaluated in the gas phase. In the gas phase, amines exhibit the basicities predicted from the electron-releasing effects of the organic substituents. Thus tertiary amines are more basic than secondary amines, which are more basic than primary amines, and finally ammonia is least basic. The order of pKb's (basicities in water) does not follow this order. Similarly aniline is more basic than ammonia in the gas phase, but ten thousand times less so in aqueous solution.
In aprotic polar solvents such as DMSO, DMF, and acetonitrile the energy of solvation is not as high as in protic polar solvents like water and methanol. For this reason, the basicity of amines in these aprotic solvents is almost solely governed by the electronic effects.
Synthesis
From alcohols
Industrially significant alkyl amines are prepared from ammonia by alkylation with alcohols:
ROH + NH3 -> RNH2 + H2O
From alkyl and aryl halides
Unlike the reaction of amines with alcohols the reaction of amines and ammonia with alkyl halides is used for synthesis in the laboratory:
RX + 2 R'NH2 -> RR'NH + [RR'NH2]X
In such reactions, which are more useful for alkyl iodides and bromides, the degree of alkylation is difficult to control such that one obtains mixtures of primary, secondary, and tertiary amines, as well as quaternary ammonium salts.
Selectivity can be improved via the Delépine reaction, although this is rarely employed on an industrial scale. Selectivity is also assured in the Gabriel synthesis, which involves organohalide reacting with potassium phthalimide.
Aryl halides are much less reactive toward amines and for that reason are more controllable. A popular way to prepare aryl amines is the Buchwald-Hartwig reaction.
From alkenes
Disubstituted alkenes react with HCN in the presence of strong acids to give formamides, which can be decarbonylated. This method, the Ritter reaction, is used industrially to produce tertiary amines such as tert-octylamine.
Hydroamination of alkenes is also widely practiced. The reaction is catalyzed by zeolite-based solid acids.
Reductive routes
Via the process of hydrogenation, unsaturated N-containing functional groups are reduced to amines using hydrogen in the presence of a nickel catalyst. Suitable groups include nitriles, azides, imines including oximes, amides, and nitro. In the case of nitriles, reactions are sensitive to acidic or alkaline conditions, which can cause hydrolysis of the group. Lithium aluminium hydride (LiAlH4) is more commonly employed for the reduction of these same groups on the laboratory scale.
Many amines are produced from aldehydes and ketones via reductive amination, which can either proceed catalytically or stoichiometrically.
Aniline () and its derivatives are prepared by reduction of the nitroaromatics. In industry, hydrogen is the preferred reductant, whereas, in the laboratory, tin and iron are often employed.
Specialized methods
Many methods exist for the preparation of amines, many of these methods being rather specialized.
Reactions
Alkylation, acylation, and sulfonation, etc.
Aside from their basicity, the dominant reactivity of amines is their nucleophilicity. Most primary amines are good ligands for metal ions to give coordination complexes. Amines are alkylated by alkyl halides. Acyl chlorides and acid anhydrides react with primary and secondary amines to form amides (the "Schotten–Baumann reaction").
Similarly, with sulfonyl chlorides, one obtains sulfonamides. This transformation, known as the Hinsberg reaction, is a chemical test for the presence of amines.
Because amines are basic, they neutralize acids to form the corresponding ammonium salts . When formed from carboxylic acids and primary and secondary amines, these salts thermally dehydrate to form the corresponding amides.
Amines undergo sulfamation upon treatment with sulfur trioxide or sources thereof:
R2NH + SO3 -> R2NSO3H
Diazotization
Amines react with nitrous acid to give diazonium salts. The alkyl diazonium salts are of little importance because they are too unstable. The most important members are derivatives of aromatic amines such as aniline ("phenylamine") (A = aryl or naphthyl):
ANH2 + HNO2 + HX -> AN2+ + X- + 2 H2O
Anilines and naphthylamines form more stable diazonium salts, which can be isolated in the crystalline form. Diazonium salts undergo a variety of useful transformations involving replacement of the group with anions. For example, cuprous cyanide gives the corresponding nitriles:
AN2+ + Y- -> AY + N2
Aryldiazoniums couple with electron-rich aromatic compounds such as a phenol to form azo compounds. Such reactions are widely applied to the production of dyes.
Conversion to imines
Imine formation is an important reaction. Primary amines react with ketones and aldehydes to form imines. In the case of formaldehyde (R′ = H), these products typically exist as cyclic trimers: RNH2 + R′2C=O → R′2C=NR + H2O. Reduction of these imines gives secondary amines: R′2C=NR + H2 → R′2CH−NHR
Similarly, secondary amines react with ketones and aldehydes to form enamines: R2NH + R′(R″CH2)C=O → R″CH=C(NR2)R′ + H2O
Mercuric ions reversibly oxidize tertiary amines with an α hydrogen to iminium ions: Hg2+ + R2NCH2R′ ⇌ Hg + [R2N=CHR′]+ + H+
Overview
An overview of the reactions of amines is given below:
Biological activity
Amines are ubiquitous in biology. The breakdown of amino acids releases amines, famously in the case of decaying fish which smell of trimethylamine. Many neurotransmitters are amines, including epinephrine, norepinephrine, dopamine, serotonin, and histamine. Protonated amino groups () are the most common positively charged moieties in proteins, specifically in the amino acid lysine. The anionic polymer DNA is typically bound to various amine-rich proteins. Additionally, the terminal charged primary ammonium on lysine forms salt bridges with carboxylate groups of other amino acids in polypeptides, which is one of the primary influences on the three-dimensional structures of proteins.
Amine hormones
Hormones derived from the modification of amino acids are referred to as amine hormones. Typically, the original structure of the amino acid is modified such that a –COOH, or carboxyl, group is removed, whereas the –NH2, or amine, group remains. Amine hormones are synthesized from the amino acids tryptophan or tyrosine.
Application of amines
Dyes
Primary aromatic amines are used as a starting material for the manufacture of azo dyes. They react with nitrous acid to form diazonium salts, which can undergo coupling reactions to form azo compounds. As azo compounds are highly coloured, they are widely used in the dyeing industry, for example:
Methyl orange
Direct brown 138
Sunset yellow FCF
Ponceau
Drugs
Most drugs and drug candidates contain amine functional groups:
Chlorpheniramine is an antihistamine that helps to relieve allergic disorders due to cold, hay fever, itchy skin, insect bites and stings.
Chlorpromazine is a tranquilizer that sedates without inducing sleep. It is used to relieve anxiety, excitement, restlessness or even mental disorder.
Ephedrine and phenylephrine, as amine hydrochlorides, are used as decongestants.
Amphetamine, methamphetamine, and methcathinone are psychostimulant amines that are listed as controlled substances by the US DEA.
Thioridazine, an antipsychotic drug, is an amine which is believed to exhibit its antipsychotic effects, in part, due to its effects on other amines.
Amitriptyline, imipramine, lofepramine and clomipramine are tricyclic antidepressants and tertiary amines.
Nortriptyline, desipramine, and amoxapine are tricyclic antidepressants and secondary amines. (The tricyclics are grouped by the nature of the final amino group on the side chain.)
Substituted tryptamines and phenethylamines are key basic structures for a large variety of psychedelic drugs.
Opiate analgesics such as morphine, codeine, and heroin are tertiary amines.
Gas treatment
Aqueous monoethanolamine (MEA), diglycolamine (DGA), diethanolamine (DEA), diisopropanolamine (DIPA) and methyldiethanolamine (MDEA) are widely used industrially for removing carbon dioxide (CO2) and hydrogen sulfide (H2S) from natural gas and refinery process streams. They may also be used to remove CO2 from combustion gases and flue gases and may have potential for abatement of greenhouse gases. Related processes are known as sweetening.
Epoxy resin curing agents
Amines are often used as epoxy resin curing agents. These include dimethylethylamine, cyclohexylamine, and a variety of diamines such as 4,4-diaminodicyclohexylmethane. Multifunctional amines such as tetraethylenepentamine and triethylenetetramine are also widely used in this capacity. The reaction proceeds by the lone pair of electrons on the amine nitrogen attacking the outermost carbon on the oxirane ring of the epoxy resin. This relieves ring strain on the epoxide and is the driving force of the reaction. Molecules with tertiary amine functionality are often used to accelerate the epoxy-amine curing reaction and include substances such as 2,4,6-Tris(dimethylaminomethyl)phenol. It has been stated that this is the most widely used room temperature accelerator for two-component epoxy resin systems.
Safety
Low molecular weight simple amines, such as ethylamine, are only weakly toxic, with LD50 values between 100 and 1000 mg/kg. They are skin irritants, especially as some are easily absorbed through the skin. Amines are a broad class of compounds, and more complex members of the class can be extremely bioactive, for example strychnine.
See also
Acid-base extraction
Amine value
Amine gas treating
Ammine
Biogenic amine
Ligand isomerism
Official naming rules for amines as determined by the International Union of Pure and Applied Chemistry (IUPAC)
References
Further reading
External links
Synthesis of amines
Factsheet, amines in food
Functional groups | Amine | [
"Chemistry"
] | 4,216 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
1,418 | https://en.wikipedia.org/wiki/Absolute%20zero | Absolute zero is the lowest limit of the thermodynamic temperature scale; a state at which the enthalpy and entropy of a cooled ideal gas reach their minimum value. The fundamental particles of nature have minimum vibrational motion, retaining only quantum mechanical, zero-point energy-induced particle motion. The theoretical temperature is determined by extrapolating the ideal gas law; by international agreement, absolute zero is taken as 0 kelvin (International System of Units), which is −273.15 degrees on the Celsius scale, and equals −459.67 degrees on the Fahrenheit scale (United States customary units or imperial units). The Kelvin and Rankine temperature scales set their zero points at absolute zero by definition.
It is commonly thought of as the lowest temperature possible, but it is not the lowest enthalpy state possible, because all real substances begin to depart from the ideal gas when cooled as they approach the change of state to liquid, and then to solid; and the sum of the enthalpy of vaporization (gas to liquid) and enthalpy of fusion (liquid to solid) exceeds the ideal gas's change in enthalpy to absolute zero. In the quantum-mechanical description, matter at absolute zero is in its ground state, the point of lowest internal energy.
The laws of thermodynamics show that absolute zero cannot be reached using only thermodynamic means, because the temperature of the substance being cooled approaches the temperature of the cooling agent asymptotically. Even a system at absolute zero, if it could somehow be achieved, would still possess quantum mechanical zero-point energy, the energy of its ground state at absolute zero; the kinetic energy of the ground state cannot be removed.
Scientists and technologists routinely achieve temperatures close to absolute zero, where matter exhibits quantum effects such as superconductivity, superfluidity, and Bose–Einstein condensation.
Thermodynamics near absolute zero
At temperatures near absolute zero, nearly all molecular motion ceases and ΔS = 0 for any adiabatic process, where S is the entropy. In such a circumstance, pure substances can (ideally) form perfect crystals with no structural imperfections as T → 0. Max Planck's strong form of the third law of thermodynamics states the entropy of a perfect crystal vanishes at absolute zero. The original Nernst heat theorem makes the weaker and less controversial claim that the entropy change for any isothermal process approaches zero as T → 0: lim(T→0) ΔS = 0.
The implication is that the entropy of a perfect crystal approaches a constant value. An adiabat is a state with constant entropy, typically represented on a graph as a curve in a manner similar to isotherms and isobars.
The Nernst postulate identifies the isotherm T = 0 as coincident with the adiabat S = 0, although other isotherms and adiabats are distinct. As no two adiabats intersect, no other adiabat can intersect the T = 0 isotherm. Consequently no adiabatic process initiated at nonzero temperature can lead to zero temperature (cf. Callen, pp. 189–190).
A perfect crystal is one in which the internal lattice structure extends uninterrupted in all directions. The perfect order can be represented by translational symmetry along three (not usually orthogonal) axes. Every lattice element of the structure is in its proper place, whether it is a single atom or a molecular grouping. For substances that exist in two (or more) stable crystalline forms, such as diamond and graphite for carbon, there is a kind of chemical degeneracy. The question remains whether both can have zero entropy at T = 0 even though each is perfectly ordered.
Perfect crystals never occur in practice; imperfections, and even entire amorphous material inclusions, can and do get "frozen in" at low temperatures, so transitions to more stable states do not occur.
Using the Debye model, the specific heat and entropy of a pure crystal are proportional to T³, while the enthalpy and chemical potential are proportional to T⁴ (Guggenheim, p. 111). These quantities drop toward their T = 0 limiting values and approach them with zero slope. For the specific heats at least, the limiting value itself is definitely zero, as borne out by experiments to below 10 K. Even the less detailed Einstein model shows this curious drop in specific heats. In fact, all specific heats vanish at absolute zero, not just those of crystals. Likewise for the coefficient of thermal expansion. Maxwell's relations show that various other quantities also vanish. These phenomena were unanticipated.
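For reference, the low-temperature Debye limit mentioned above can be written out explicitly (standard textbook form, with Θ_D the Debye temperature and N the number of atoms); both expressions vanish with zero slope as T → 0, as stated in the text.

```latex
C_V \approx \frac{12\pi^4}{5}\, N k_B \left(\frac{T}{\Theta_D}\right)^{3},
\qquad
S \approx \frac{4\pi^4}{5}\, N k_B \left(\frac{T}{\Theta_D}\right)^{3}
\qquad (T \ll \Theta_D).
```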
Since the relation between changes in Gibbs free energy (G), the enthalpy (H) and the entropy is ΔG = ΔH − TΔS,
thus, as T decreases, ΔG and ΔH approach each other (so long as ΔS is bounded). Experimentally, it is found that all spontaneous processes (including chemical reactions) result in a decrease in G as they proceed toward equilibrium. If ΔS and/or T are small, the condition ΔG < 0 may imply that ΔH < 0, which would indicate an exothermic reaction. However, this is not required; endothermic reactions can proceed spontaneously if the TΔS term is large enough.
Moreover, the slopes of the derivatives of ΔG and ΔH converge and are equal to zero at T = 0. This ensures that ΔG and ΔH are nearly the same over a considerable range of temperatures and justifies the approximate empirical Principle of Thomsen and Berthelot, which states that the equilibrium state to which a system proceeds is the one that evolves the greatest amount of heat, i.e., an actual process is the most exothermic one (Callen, pp. 186–187).
One model that estimates the properties of an electron gas at absolute zero in metals is the Fermi gas. The electrons, being fermions, must be in different quantum states, which leads the electrons to get very high typical velocities, even at absolute zero. The maximum energy that electrons can have at absolute zero is called the Fermi energy. The Fermi temperature is defined as this maximum energy divided by the Boltzmann constant, and is on the order of 80,000 K for typical electron densities found in metals. For temperatures significantly below the Fermi temperature, the electrons behave in almost the same way as at absolute zero. This explains the failure of the classical equipartition theorem for metals that eluded classical physicists in the late 19th century.
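As a numerical illustration of the order of magnitude quoted above, the free-electron Fermi energy and Fermi temperature can be computed directly from an electron density; the copper-like density used below is an assumed illustrative input, not a value taken from this article.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E  = 9.1093837015e-31  # electron mass, kg
K_B  = 1.380649e-23      # Boltzmann constant, J/K

def fermi_temperature(n: float) -> float:
    """Fermi temperature (K) of a free-electron gas with number density n (m^-3)."""
    fermi_energy = (HBAR**2 / (2 * M_E)) * (3 * math.pi**2 * n) ** (2 / 3)
    return fermi_energy / K_B

# Conduction-electron density of a copper-like metal (illustrative value).
print(f"T_F = {fermi_temperature(8.5e28):.2e} K")   # ~8e4 K, i.e. roughly 80,000 K
```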
Relation with Bose–Einstein condensate
A Bose–Einstein condensate (BEC) is a state of matter of a dilute gas of weakly interacting bosons confined in an external potential and cooled to temperatures very near absolute zero. Under such conditions, a large fraction of the bosons occupy the lowest quantum state of the external potential, at which point quantum effects become apparent on a macroscopic scale.
This state of matter was first predicted by Satyendra Nath Bose and Albert Einstein in 1924–1925. Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons). Einstein was impressed, translated the paper from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it. Einstein then extended Bose's ideas to material particles (or matter) in two other papers.
Seventy years later, in 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST-JILA lab, using a gas of rubidium atoms cooled to ().
In 2003, researchers at the Massachusetts Institute of Technology (MIT) achieved a temperature of () in a BEC of sodium atoms. The associated black-body (peak emittance) wavelength of 6.4 megameters is roughly the radius of Earth.
In 2021, University of Bremen physicists achieved a BEC with a temperature of only , the current coldest temperature record.
Absolute temperature scales
Absolute, or thermodynamic, temperature is conventionally measured in kelvin (Celsius-scaled increments) and in the Rankine scale (Fahrenheit-scaled increments) with increasing rarity. Absolute temperature measurement is uniquely determined by a multiplicative constant which specifies the size of the degree, so the ratios of two absolute temperatures, T2/T1, are the same in all scales. The most transparent definition of this standard comes from the Maxwell–Boltzmann distribution. It can also be found in Fermi–Dirac statistics (for particles of half-integer spin) and Bose–Einstein statistics (for particles of integer spin). All of these define the relative numbers of particles in a system as decreasing exponential functions of energy (at the particle level) over kT, with k representing the Boltzmann constant and T representing the temperature observed at the macroscopic level.
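A small numerical check of the ratio statement above; the factor 1 K = 1.8 °R follows from the degree sizes of the two scales, and the two temperatures chosen are arbitrary.

```python
def kelvin_to_rankine(t_kelvin: float) -> float:
    # Both scales place their zero at absolute zero; only the degree size differs.
    return t_kelvin * 1.8

t1_k, t2_k = 150.0, 300.0
t1_r, t2_r = kelvin_to_rankine(t1_k), kelvin_to_rankine(t2_k)

print(t2_k / t1_k)   # 2.0
print(t2_r / t1_r)   # 2.0, the ratio T2/T1 is the same in both absolute scales
```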
Negative temperatures
Temperatures that are expressed as negative numbers on the familiar Celsius or Fahrenheit scales are simply colder than the zero points of those scales. Certain systems can achieve truly negative temperatures; that is, their thermodynamic temperature (expressed in kelvins) can be of a negative quantity. A system with a truly negative temperature is not colder than absolute zero. Rather, a system with a negative temperature is hotter than any system with a positive temperature, in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat flows from the negative to the positive-temperature system.
Most familiar systems cannot achieve negative temperatures because adding energy always increases their entropy. However, some systems have a maximum amount of energy that they can hold, and as they approach that maximum energy their entropy actually begins to decrease. Because temperature is defined by the relationship between energy and entropy, such a system's temperature becomes negative, even though energy is being added. As a result, the Boltzmann factor for states of systems at negative temperature increases rather than decreases with increasing state energy. Therefore, no complete system, i.e. including the electromagnetic modes, can have negative temperatures, since there is no highest energy state, so that the sum of the probabilities of the states would diverge for negative temperatures. However, for quasi-equilibrium systems (e.g. spins out of equilibrium with the electromagnetic field) this argument does not apply, and negative effective temperatures are attainable.
On 3 January 2013, physicists announced that for the first time they had created a quantum gas made up of potassium atoms with a negative temperature in motional degrees of freedom.
History
One of the first to discuss the possibility of an absolute minimal temperature was Robert Boyle. His 1665 New Experiments and Observations touching Cold, articulated the dispute known as the primum frigidum. The concept was well known among naturalists of the time. Some contended an absolute minimum temperature occurred within earth (as one of the four classical elements), others within water, others air, and some more recently within nitre. But all of them seemed to agree that, "There is some body or other that is of its own nature supremely cold and by participation of which all other bodies obtain that quality."
Limit to the "degree of cold"
The question of whether there is a limit to the degree of coldness possible, and, if so, where the zero must be placed, was first addressed by the French physicist Guillaume Amontons in 1703, in connection with his improvements in the air thermometer. His instrument indicated temperatures by the height at which a certain mass of air sustained a column of mercury—the pressure, or "spring" of the air varying with temperature. Amontons therefore argued that the zero of his thermometer would be that temperature at which the spring of the air was reduced to nothing. He used a scale that marked the boiling point of water at +73 and the melting point of ice at +, so that the zero was equivalent to about −240 on the Celsius scale. Amontons held that the absolute zero cannot be reached, so never attempted to compute it explicitly. The value of −240 °C, or "431 divisions [in Fahrenheit's thermometer] below the cold of freezing water" was published by George Martine in 1740.
This close approximation to the modern value of −273.15 °C for the zero of the air thermometer was further improved upon in 1779 by Johann Heinrich Lambert, who observed that might be regarded as absolute cold.
Values of this order for the absolute zero were not, however, universally accepted about this period. Pierre-Simon Laplace and Antoine Lavoisier, in their 1780 treatise on heat, arrived at values ranging from 1,500 to 3,000 below the freezing point of water, and thought that in any case it must be at least 600 below. John Dalton in his Chemical Philosophy gave ten calculations of this value, and finally adopted −3,000 °C as the natural zero of temperature.
Charles's law
From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 of their volume at 0 °C for each degree Celsius of temperature change up or down, between 0 °C and 100 °C. This suggested that the volume of a gas cooled to about −273 °C would reach zero.
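Written out in modern notation (a restatement of the observed law, not the original authors' own formulation):

V(t) ≈ V0 (1 + t/273) for 0 °C ≤ t ≤ 100 °C,

where V0 is the volume at 0 °C; formally setting V = 0 and solving for t gives t ≈ −273 °C, the extrapolated zero.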
Lord Kelvin's work
After James Prescott Joule had determined the mechanical equivalent of heat, Lord Kelvin approached the question from an entirely different point of view, and in 1848 devised a scale of absolute temperature that was independent of the properties of any particular substance and was based on Carnot's theory of the Motive Power of Heat and data published by Henri Victor Regnault. It followed from the principles on which this scale was constructed that its zero was placed at −273 °C, at almost precisely the same point as the zero of the air thermometer, where the air volume would reach "nothing". This value was not immediately accepted; values ranging from to , derived from laboratory measurements and observations of astronomical refraction, remained in use in the early 20th century.
The race to absolute zero
With a better theoretical understanding of absolute zero, scientists were eager to reach this temperature in the lab. By 1845, Michael Faraday had managed to liquefy most gases then known to exist, and reached a new record for lowest temperatures by reaching . Faraday believed that certain gases, such as oxygen, nitrogen, and hydrogen, were permanent gases and could not be liquefied. Decades later, in 1873 Dutch theoretical scientist Johannes Diderik van der Waals demonstrated that these gases could be liquefied, but only under conditions of very high pressure and very low temperatures. In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air at . This was followed in 1883 by the production of liquid oxygen by the Polish professors Zygmunt Wróblewski and Karol Olszewski.
Scottish chemist and physicist James Dewar and Dutch physicist Heike Kamerlingh Onnes took on the challenge to liquefy the remaining gases, hydrogen and helium. In 1898, after 20 years of effort, Dewar was the first to liquefy hydrogen, reaching a new low-temperature record of . However, Kamerlingh Onnes, his rival, was the first to liquefy helium, in 1908, using several precooling stages and the Hampson–Linde cycle. He lowered the temperature to the boiling point of helium, about 4.2 K (−269 °C). By reducing the pressure of the liquid helium, he achieved an even lower temperature, near 1.5 K. These were the coldest temperatures achieved on Earth at the time and his achievement earned him the Nobel Prize in 1913. Kamerlingh Onnes would continue to study the properties of materials at temperatures near absolute zero, describing superconductivity and superfluidity for the first time.
Very low temperatures
The average temperature of the universe today is approximately 2.7 kelvins, based on measurements of cosmic microwave background radiation. Standard models of the future expansion of the universe predict that the average temperature of the universe is decreasing over time. This temperature is calculated as the mean density of energy in space; it should not be confused with the mean electron temperature (total energy divided by particle count), which has increased over time.
Absolute zero cannot be achieved, although it is possible to reach temperatures close to it through the use of evaporative cooling, cryocoolers, dilution refrigerators, and nuclear adiabatic demagnetization. The use of laser cooling has produced temperatures of less than a billionth of a kelvin. At very low temperatures in the vicinity of absolute zero, matter exhibits many unusual properties, including superconductivity, superfluidity, and Bose–Einstein condensation. To study such phenomena, scientists have worked to obtain even lower temperatures.
In November 2000, nuclear spin temperatures below were reported for an experiment at the Helsinki University of Technology's Low Temperature Lab in Espoo, Finland. However, this was the temperature of one particular degree of freedom—a quantum property called nuclear spin—not the overall average thermodynamic temperature for all possible degrees of freedom.
In February 2003, the Boomerang Nebula was observed to have been releasing gases at a speed of for the last 1,500 years. This has cooled it down to approximately 1 K, as deduced by astronomical observation, which is the lowest natural temperature ever recorded.
In November 2003, 90377 Sedna was discovered and is one of the coldest known objects in the Solar System, with an average surface temperature of , due to its extremely far orbit of 903 astronomical units.
In May 2005, the European Space Agency proposed research in space to achieve femtokelvin temperatures.
In May 2006, the Institute of Quantum Optics at the University of Hannover gave details of technologies and benefits of femtokelvin research in space.
In January 2013, physicist Ulrich Schneider of the University of Munich in Germany reported having achieved temperatures formally below absolute zero ("negative temperature") in gases. The gas is artificially forced out of equilibrium into a high-potential-energy state, which is, however, cold. When it then emits radiation it approaches equilibrium, and can continue emitting despite reaching formal absolute zero; thus, the temperature is formally negative.
In September 2014, scientists in the CUORE collaboration at the Laboratori Nazionali del Gran Sasso in Italy cooled a copper vessel with a volume of one cubic meter to for 15 days, setting a record for the lowest temperature in the known universe over such a large contiguous volume.
In June 2015, experimental physicists at MIT cooled molecules in a gas of sodium potassium to a temperature of 500 nanokelvin; the gas is expected to exhibit an exotic state of matter if the molecules are cooled somewhat further.
In 2017, the Cold Atom Laboratory (CAL), an experimental instrument, was developed for launch to the International Space Station (ISS) in 2018. The instrument has created extremely cold conditions in the microgravity environment of the ISS, leading to the formation of Bose–Einstein condensates. In this space-based laboratory, temperatures as low as are projected to be achievable, and it could further the exploration of unknown quantum mechanical phenomena and test some of the most fundamental laws of physics.
The current world record for the lowest effective temperature was set in 2021 at through matter-wave lensing of rubidium Bose–Einstein condensates.
See also
Kelvin (unit of temperature)
Charles's law
Heat
International Temperature Scale of 1990
Orders of magnitude (temperature)
Thermodynamic temperature
Triple point
Ultracold atom
Kinetic energy
Entropy
Planck temperature and Hagedorn temperature, hypothetical upper limits to the thermodynamic temperature scale
References
Further reading
BIPM Mise en pratique - Kelvin - Appendix 2 - SI Brochure.
External links
"Absolute zero": a two part NOVA episode originally aired January 2008
"What is absolute zero?" Lansing State Journal
Cold
Cryogenics
Temperature | Absolute zero | [
"Physics",
"Chemistry"
] | 4,146 | [
"Scalar physical quantities",
"Thermodynamic properties",
"Temperature",
"Applied and interdisciplinary physics",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Cryogenics",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
1,419 | https://en.wikipedia.org/wiki/Adiabatic%20process | An adiabatic process is a type of thermodynamic process that occurs without transferring heat between the thermodynamic system and its environment. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work and/or mass flow. As a key concept in thermodynamics, the adiabatic process supports the theory that explains the first law of thermodynamics. The opposite term to "adiabatic" is diabatic.
Some chemical and physical processes occur too rapidly for energy to enter or leave the system as heat, allowing a convenient "adiabatic approximation". For example, the adiabatic flame temperature uses this approximation to calculate the upper limit of flame temperature by assuming combustion loses no heat to its surroundings.
In meteorology, adiabatic expansion and cooling of moist air, which can be triggered by winds flowing up and over a mountain for example, can cause the water vapor pressure to exceed the saturation vapor pressure. Expansion and cooling beyond the saturation vapor pressure is often idealized as a pseudo-adiabatic process whereby excess vapor instantly precipitates into water droplets. The change in temperature of air undergoing pseudo-adiabatic expansion differs from that of air undergoing adiabatic expansion because latent heat is released by precipitation.
Description
A process without transfer of heat to or from a system, so that Q = 0, is called adiabatic, and such a system is said to be adiabatically isolated. The simplifying assumption frequently made is that a process is adiabatic. For example, the compression of a gas within a cylinder of an engine is assumed to occur so rapidly that, on the time scale of the compression process, little of the system's energy can be transferred out as heat to the surroundings. Even though the cylinders are not insulated and are quite conductive, the process is idealized as adiabatic. The same can be said of the expansion process of such a system.
The assumption of adiabatic isolation is useful and often combined with other such idealizations to calculate a good first approximation of a system's behaviour. For example, according to Laplace, when sound travels in a gas, there is no time for heat conduction in the medium, and so the propagation of sound is adiabatic. For such an adiabatic process, the modulus of elasticity (Young's modulus) can be expressed as E = γP, where γ is the ratio of specific heats at constant pressure and at constant volume (γ = C_P/C_V) and P is the pressure of the gas.
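As a rough numerical check of the adiabatic assumption for sound (round values for dry air at 20 °C; the figures are illustrative, not drawn from this article):

c = sqrt(γP/ρ) ≈ sqrt(1.4 × 101 325 Pa / 1.204 kg/m3) ≈ 343 m/s,

which matches the measured speed of sound, whereas the isothermal value sqrt(P/ρ) ≈ 290 m/s falls noticeably short—the discrepancy that Laplace's adiabatic treatment resolved.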
Various applications of the adiabatic assumption
For a closed system, one may write the first law of thermodynamics as ΔU = Q − W, where ΔU denotes the change of the system's internal energy, Q the quantity of energy added to it as heat, and W the work done by the system on its surroundings.
If the system has such rigid walls that work cannot be transferred in or out (W = 0), and the walls are not adiabatic and energy is added in the form of heat (Q > 0), and there is no phase change, then the temperature of the system will rise.
If the system has such rigid walls that pressure–volume work cannot be done, but the walls are adiabatic (Q = 0), and energy is added as isochoric (constant volume) work in the form of friction or the stirring of a viscous fluid within the system (W < 0), and there is no phase change, then the temperature of the system will rise.
If the system walls are adiabatic (Q = 0) but not rigid (W ≠ 0), and, in a fictive idealized process, energy is added to the system in the form of frictionless, non-viscous pressure–volume work (W < 0), and there is no phase change, then the temperature of the system will rise. Such a process is called an isentropic process and is said to be "reversible". Ideally, if the process were reversed the energy could be recovered entirely as work done by the system. If the system contains a compressible gas and is reduced in volume, the uncertainty of the position of the gas is reduced, and seemingly would reduce the entropy of the system, but the temperature of the system will rise as the process is isentropic (ΔS = 0). Should the work be added in such a way that friction or viscous forces are operating within the system, then the process is not isentropic, and if there is no phase change, then the temperature of the system will rise, the process is said to be "irreversible", and the work added to the system is not entirely recoverable in the form of work.
If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat. Such a process is neither adiabatic nor isentropic, having Q > 0 and ΔS > 0 according to the second law of thermodynamics.
Naturally occurring adiabatic processes are irreversible (entropy is produced).
The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, no entropy is produced within the system (no friction, viscous dissipation, etc.), and the work is only pressure-volume work (denoted by P dV). In nature, this ideal kind occurs only approximately because it demands an infinitely slow process and no sources of dissipation.
The other extreme kind of work is isochoric work (dV = 0), for which energy is added as work solely through friction or viscous dissipation within the system. A stirrer that transfers energy to a viscous fluid of an adiabatically isolated system with rigid walls, without phase change, will cause a rise in temperature of the fluid, but that work is not recoverable. Isochoric work is irreversible. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work and often both of these extreme kinds of work. Every natural process, adiabatic or not, is irreversible, with ΔS > 0, as friction or viscosity are always present to some extent.
Adiabatic compression and expansion
The adiabatic compression of a gas causes a rise in temperature of the gas. Adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas.
Adiabatic compression occurs when the pressure of a gas is increased by work done on it by its surroundings, e.g., a piston compressing a gas contained within a cylinder and raising the temperature where in many practical situations heat conduction through walls can be slow compared with the compression time. This finds practical application in diesel engines which rely on the lack of heat dissipation during the compression stroke to elevate the fuel vapor temperature sufficiently to ignite it.
Adiabatic compression occurs in the Earth's atmosphere when an air mass descends, for example, in a Katabatic wind, Foehn wind, or Chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases. Because of this increase in pressure, the parcel's volume decreases and its temperature increases as work is done on the parcel of air, thus increasing its internal energy, which manifests itself by a rise in the temperature of that mass of air. The parcel of air can only slowly dissipate the energy by conduction or radiation (heat), and to a first approximation it can be considered adiabatically isolated and the process an adiabatic process.
Adiabatic expansion occurs when the pressure on an adiabatically isolated system is decreased, allowing it to expand in size, thus causing it to do work on its surroundings. When the pressure applied on a parcel of gas is reduced, the gas in the parcel is allowed to expand; as the volume increases, the temperature falls as its internal energy decreases. Adiabatic expansion occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pilei or lenticular clouds.
Due in part to adiabatic expansion in mountainous areas, snowfall infrequently occurs in some parts of the Sahara desert.
Adiabatic expansion does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is via adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic expansion. Also, the contents of an expanding universe can be described (to first order) as an adiabatically expanding fluid. (See heat death of the universe.)
Rising magma also undergoes adiabatic expansion before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites.
In the Earth's convecting mantle (the asthenosphere) beneath the lithosphere, the mantle temperature is approximately an adiabat. The slight decrease in temperature with shallowing depth is due to the decrease in pressure the shallower the material is in the Earth.
Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes.
In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist.
Ideal gas (reversible process)
The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process can be represented by the polytropic process equation
P V^γ = constant,
where P is pressure, V is volume, and γ is the adiabatic index or heat capacity ratio defined as
γ = C_P / C_V = (f + 2) / f.
Here C_P is the specific heat for constant pressure, C_V is the specific heat for constant volume, and f is the number of degrees of freedom (3 for a monatomic gas, 5 for a diatomic gas or a gas of linear molecules such as carbon dioxide).
For a monatomic ideal gas, γ = 5/3 ≈ 1.67, and for a diatomic gas (such as nitrogen and oxygen, the main components of air), γ = 7/5 = 1.4. Note that the above formula is only applicable to classical ideal gases (that is, gases far above absolute zero temperature) and not Bose–Einstein or Fermi gases.
One can also use the ideal gas law to rewrite the above relationship between P and V in terms of temperature:
T V^(γ − 1) = constant (equivalently, P^(1 − γ) T^γ = constant),
where T is the absolute or thermodynamic temperature.
Example of adiabatic compression
The compression stroke in a gasoline engine can be used as an example of adiabatic compression. The model assumptions are: the uncompressed volume of the cylinder is one litre (1 L = 1000 cm3 = 0.001 m3); the gas within is the air consisting of molecular nitrogen and oxygen only (thus a diatomic gas with 5 degrees of freedom, and so ); the compression ratio of the engine is 10:1 (that is, the 1 L volume of uncompressed gas is reduced to 0.1 L by the piston); and the uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C, or 300 K, and a pressure of 1 bar = 100 kPa, i.e. typical sea-level atmospheric pressure).
P1 V1^γ = (100 000 Pa) × (0.001 m3)^1.4 ≈ 6.31 Pa·m^4.2,
so the adiabatic constant for this example is about 6.31 Pa·m^4.2.
The gas is now compressed to a 0.1 L (0.0001 m3) volume, which we assume happens quickly enough that no heat enters or leaves the gas through the walls. The adiabatic constant remains the same, but with the resulting pressure P2 unknown:
P2 × (0.0001 m3)^1.4 = 6.31 Pa·m^4.2.
We can now solve for the final pressure:
P2 = 6.31 Pa·m^4.2 / (0.0001 m3)^1.4 ≈ 2.51 × 10^6 Pa,
or 25.1 bar. This pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas also increases its internal energy, which manifests itself by a rise in the gas temperature and an additional rise in pressure above what would result from a simplistic calculation of 10 times the original pressure.
We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law, PV = nRT (n is amount of gas in moles and R the gas constant for that gas). Our initial conditions being 100 kPa of pressure, 1 L volume, and 300 K of temperature, our experimental constant (nR) is:
nR = P1 V1 / T1 = (100 000 Pa × 0.001 m3) / (300 K) ≈ 0.333 Pa·m3/K.
We know the compressed gas has V2 = 0.1 L and P2 ≈ 2.51 × 10^6 Pa, so we can solve for temperature:
T2 = P2 V2 / (nR) ≈ (2.51 × 10^6 Pa × 0.0001 m3) / (0.333 Pa·m3/K) ≈ 753 K.
That is a final temperature of 753 K, or 479 °C, or 896 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or that a supercharger with an intercooler to provide a pressure boost but with a lower temperature rise would be advantageous. A diesel engine operates under even more extreme conditions, with compression ratios of 16:1 or more being typical, in order to provide a very high gas pressure, which ensures immediate ignition of the injected fuel.
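For readers who want to reproduce the arithmetic, a minimal sketch in AWK follows (the variable names are purely illustrative, and any language with floating-point arithmetic would serve equally well):

BEGIN {
    gamma = 7 / 5                       # heat capacity ratio for a diatomic ideal gas
    P1 = 100000; V1 = 0.001; T1 = 300   # initial state: Pa, m3, K
    V2 = 0.0001                         # compressed volume in m3
    P2 = P1 * (V1 / V2) ^ gamma         # adiabatic relation P V^gamma = constant
    T2 = T1 * P2 * V2 / (P1 * V1)       # ideal gas law with a constant amount of gas
    printf "P2 = %.1f bar, T2 = %.1f K\n", P2 / 100000, T2   # prints P2 = 25.1 bar, T2 = 753.6 K
}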
Adiabatic free expansion of a gas
For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand in a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the first law of thermodynamics then implies that the net internal energy change of the system is zero. For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since at constant temperature, the entropy is proportional to the volume, the entropy increases in this case, therefore this process is irreversible.
Derivation of P–V relation for adiabatic compression and expansion
The definition of an adiabatic process is that heat transfer to the system is zero, . Then, according to the first law of thermodynamics,
where is the change in the internal energy of the system and is work done by the system. Any work () done must be done at the expense of internal energy , since no heat is being supplied from the surroundings. Pressure–volume work done by the system is defined as
However, does not remain constant during an adiabatic process but instead changes along with .
It is desired to know how the values of and relate to each other as the adiabatic process proceeds. For an ideal gas (recall ideal gas law ) the internal energy is given by
where is the number of degrees of freedom divided by 2, is the universal gas constant and is the number of moles in the system (a constant).
Differentiating equation (a3) yields
Equation (a4) is often expressed as because .
Now substitute equations (a2) and (a4) into equation (a1) to obtain
factorize :
and divide both sides by :
After integrating the left and right sides from to and from to and changing the sides respectively,
Exponentiate both sides, substitute with , the heat capacity ratio
and eliminate the negative sign to obtain
Therefore,
and
At the same time, the work done by the pressure–volume changes as a result from this process, is equal to
Since we require the process to be adiabatic, the following equation needs to be true
By the previous derivation,
Rearranging (b4) gives
Substituting this into (b2) gives
Integrating, we obtain the expression for work,
Substituting in the second term,
Rearranging,
Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases),
By the continuous formula,
or
Substituting into the previous expression for ,
Substituting this expression and (b1) in (b3) gives
Simplifying,
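The argument can be condensed as follows—a sketch of the standard ideal-gas derivation in the notation defined above (α = f/2, so that U = αnRT), summarizing rather than reproducing the labelled equations:

dU + δW = 0 with δW = P dV (adiabatic, reversible)
dU = αnR dT = α d(PV) = α(P dV + V dP)
⇒ (α + 1) P dV + α V dP = 0
⇒ (α + 1) dV/V + α dP/P = 0
⇒ P V^((α + 1)/α) = constant, i.e. P V^γ = constant with γ = (α + 1)/α = C_P/C_V.

The work done by the gas between states 1 and 2 then follows from W = ∫ P dV with P = P1 V1^γ / V^γ:

W = (P2 V2 − P1 V1)/(1 − γ) = −ΔU.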
Derivation of discrete formula and work expression
The change in internal energy of a system, measured from state 1 to state 2, is equal to
At the same time, the work done by the pressure–volume changes as a result from this process, is equal to
Since we require the process to be adiabatic, the following equation needs to be true
By the previous derivation,
Rearranging (c4) gives
Substituting this into (c2) gives
Integrating we obtain the expression for work,
Substituting in second term,
Rearranging,
Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases),
By the continuous formula,
or
Substituting into the previous expression for ,
Substituting this expression and (c1) in (c3) gives
Simplifying,
Graphing adiabats
An adiabat is a curve of constant entropy in a diagram. Some properties of adiabats on a P–V diagram are indicated. These properties may be read from the classical behaviour of ideal gases, except in the region where PV becomes small (low temperature), where quantum effects become important.
Every adiabat asymptotically approaches both the V axis and the P axis (just like isotherms).
Each adiabat intersects each isotherm exactly once.
An adiabat looks similar to an isotherm, except that during an expansion, an adiabat loses more pressure than an isotherm, so it has a steeper inclination (more vertical).
If isotherms are concave towards the north-east direction (45° from V-axis), then adiabats are concave towards the east north-east (31° from V-axis).
If adiabats and isotherms are graphed at regular intervals of entropy and temperature, respectively (like altitude on a contour map), then as the eye moves towards the axes (towards the south-west), it sees the density of isotherms stay constant, but it sees the density of adiabats grow. The exception is very near absolute zero, where the density of adiabats drops sharply and they become rare (see Nernst's theorem).
Etymology
The term adiabatic is an anglicization of the Greek term ἀδιάβατος "impassable" (used by Xenophon of rivers). It was used in the thermodynamic sense by Rankine (1866), and adopted by Maxwell in 1871 (explicitly attributing the term to Rankine).
The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall.
The Greek word ἀδιάβατος is formed from privative ἀ- ("not") and διαβατός, "passable", in turn deriving from διά ("through"), and βαῖνειν ("to walk, go, come").
Furthermore, in atmospheric thermodynamics, a diabatic process is one in which heat is exchanged. An adiabatic process is the opposite – a process in which no heat is exchanged.
Conceptual significance in thermodynamic theory
The adiabatic process has been important for thermodynamics since its early days. It was important in the work of Joule because it provided a way of nearly directly relating quantities of heat and work.
Energy can enter or leave a thermodynamic system enclosed by walls that prevent mass transfer only as heat or work. Therefore, a quantity of work in such a system can be related almost directly to an equivalent quantity of heat in a cycle of two limbs. The first limb is an isochoric adiabatic work process increasing the system's internal energy; the second, an isochoric and workless heat transfer returning the system to its original state. Accordingly, Rankine measured quantity of heat in units of work, rather than as a calorimetric quantity. In 1854, Rankine used a quantity that he called "the thermodynamic function" that later was called entropy, and at that time he wrote also of the "curve of no transmission of heat", which he later called an adiabatic curve. Besides its two isothermal limbs, Carnot's cycle has two adiabatic limbs.
For the foundations of thermodynamics, the conceptual importance of this was emphasized by Bryan, by Carathéodory, and by Born. The reason is that calorimetry presupposes a type of temperature as already defined before the statement of the first law of thermodynamics, such as one based on empirical scales. Such a presupposition involves making the distinction between empirical temperature and absolute temperature. Rather, the definition of absolute thermodynamic temperature is best left till the second law is available as a conceptual basis.
In the eighteenth century, the law of conservation of energy was not yet fully formulated or established, and the nature of heat was debated. One approach to these problems was to regard heat, measured by calorimetry, as a primary substance that is conserved in quantity. By the middle of the nineteenth century, it was recognized as a form of energy, and the law of conservation of energy was thereby also recognized. The view that eventually established itself, and is currently regarded as right, is that the law of conservation of energy is a primary axiom, and that heat is to be analyzed as consequential. In this light, heat cannot be a component of the total energy of a single body because it is not a state variable but, rather, a variable that describes a transfer between two bodies. The adiabatic process is important because it is a logical ingredient of this current view.
Divergent usages of the word adiabatic
This present article is written from the viewpoint of macroscopic thermodynamics, and the word adiabatic is used in this article in the traditional way of thermodynamics, introduced by Rankine. It is pointed out in the present article that, for example, if a compression of a gas is rapid, then there is little time for heat transfer to occur, even when the gas is not adiabatically isolated by a definite wall. In this sense, a rapid compression of a gas is sometimes approximately or loosely said to be adiabatic, though often far from isentropic, even when the gas is not adiabatically isolated by a definite wall.
Some authors, like Pippard, recommend using "adiathermal" to refer to processes where no heat-exchange occurs (such as Joule expansion), and "adiabatic" to reversible quasi-static adiathermal processes (so that rapid compression of a gas is not "adiabatic"). And Laidler has summarized the complicated etymology of "adiabatic".
Quantum mechanics and quantum statistical mechanics, however, use the word adiabatic in a very different sense, one that can at times seem almost opposite to the classical thermodynamic sense. In quantum theory, the word adiabatic can mean something perhaps near isentropic, or perhaps near quasi-static, but the usage of the word is very different between the two disciplines.
On the one hand, in quantum theory, if a perturbative element of compressive work is done almost infinitely slowly (that is to say quasi-statically), it is said to have been done adiabatically. The idea is that the shapes of the eigenfunctions change slowly and continuously, so that no quantum jump is triggered, and the change is virtually reversible. While the occupation numbers are unchanged, nevertheless there is change in the energy levels of one-to-one corresponding, pre- and post-compression, eigenstates. Thus a perturbative element of work has been done without heat transfer and without introduction of random change within the system. For example, Max Born writes
On the other hand, in quantum theory, if a perturbative element of compressive work is done rapidly, it changes the occupation numbers and energies of the eigenstates in proportion to the transition moment integral and in accordance with time-dependent perturbation theory, as well as perturbing the functional form of the eigenstates themselves. In that theory, such a rapid change is said not to be adiabatic, and the contrary word diabatic is applied to it.
Recent research suggests that the power absorbed from the perturbation corresponds to the rate of these non-adiabatic transitions. This corresponds to the classical process of energy transfer in the form of heat, but with the relative time scales reversed in the quantum case. Quantum adiabatic processes occur over relatively long time scales, while classical adiabatic processes occur over relatively short time scales. It should also be noted that the concept of 'heat' (in reference to the quantity of thermal energy transferred) breaks down at the quantum level, and the specific form of energy (typically electromagnetic) must be considered instead. The small or negligible absorption of energy from the perturbation in a quantum adiabatic process provides a good justification for identifying it as the quantum analogue of adiabatic processes in classical thermodynamics, and for the reuse of the term.
In classical thermodynamics, such a rapid change would still be called adiabatic because the system is adiabatically isolated, and there is no transfer of energy as heat. The strong irreversibility of the change, due to viscosity or other entropy production, does not impinge on this classical usage.
Thus for a mass of gas, in macroscopic thermodynamics, words are so used that a compression is sometimes loosely or approximately said to be adiabatic if it is rapid enough to avoid significant heat transfer, even if the system is not adiabatically isolated. But in quantum statistical theory, a compression is not called adiabatic if it is rapid, even if the system is adiabatically isolated in the classical thermodynamic sense of the term. The words are used differently in the two disciplines, as stated just above.
See also
Fire piston
Heat burst
Related physics topics
First law of thermodynamics
Entropy (classical thermodynamics)
Adiabatic conductivity
Adiabatic lapse rate
Total air temperature
Magnetic refrigeration
Berry phase
Related thermodynamic processes
Cyclic process
Isobaric process
Isenthalpic process
Isentropic process
Isochoric process
Isothermal process
Polytropic process
Quasistatic process
References
General
Nave, Carl Rod. "Adiabatic Processes". HyperPhysics.
Thorngren, Dr. Jane R. "Adiabatic Processes". Daphne – A Palomar College Web Server, 21 July 1995. .
External links
Article in HyperPhysics Encyclopaedia
Thermodynamic processes
Atmospheric thermodynamics
Entropy | Adiabatic process | [
"Physics",
"Chemistry",
"Mathematics"
] | 5,595 | [
"Thermodynamic properties",
"Physical quantities",
"Thermodynamic processes",
"Quantity",
"Entropy",
"Thermodynamics",
"Asymmetry",
"Wikipedia categories named after physical quantities",
"Symmetry",
"Dynamical systems"
] |
1,422 | https://en.wikipedia.org/wiki/Amide | In organic chemistry, an amide, also known as an organic amide or a carboxamide, is a compound with the general formula R−C(=O)−NR′R″, where R, R′, and R″ represent any group, typically organyl groups or hydrogen atoms. The amide group is called a peptide bond when it is part of the main chain of a protein, and an isopeptide bond when it occurs in a side chain, as in asparagine and glutamine. It can be viewed as a derivative of a carboxylic acid (R−C(=O)−OH) with the hydroxyl group (−OH) replaced by an amine group (−NR′R″); or, equivalently, an acyl (alkanoyl) group (R−C(=O)−) joined to an amine group.
Common examples of amides are formamide (HCONH2), acetamide (CH3CONH2), benzamide (C6H5CONH2), and dimethylformamide (HCON(CH3)2). Some uncommon examples of amides are N-chloroacetamide and chloroformamide.
Amides are qualified as primary, secondary, and tertiary according to the number of carbon atoms bonded to the nitrogen atom.
Nomenclature
The core of amides is called the amide group (specifically, carboxamide group).
In the usual nomenclature, one adds the term "amide" to the stem of the parent acid's name. For instance, the amide derived from acetic acid is named acetamide (CH3CONH2). IUPAC recommends ethanamide, but this and related formal names are rarely encountered. When the amide is derived from a primary or secondary amine, the substituents on nitrogen are indicated first in the name. Thus, the amide formed from dimethylamine and acetic acid is N,N-dimethylacetamide (CH3CONMe2, where Me = CH3). Usually even this name is simplified to dimethylacetamide. Cyclic amides are called lactams; they are necessarily secondary or tertiary amides.
Applications
Amides are pervasive in nature and technology. Proteins and important plastics like nylons, aramids, Twaron, and Kevlar are polymers whose units are connected by amide groups (polyamides); these linkages are easily formed, confer structural rigidity, and resist hydrolysis. Amides include many other important biological compounds, as well as many drugs like paracetamol, penicillin and LSD. Low-molecular-weight amides, such as dimethylformamide, are common solvents.
Structure and bonding
The lone pair of electrons on the nitrogen atom is delocalized into the carbonyl group, thus forming a partial double bond between nitrogen and carbon. In fact the O, C and N atoms have molecular orbitals occupied by delocalized electrons, forming a conjugated system. Consequently, the arrangement of the three bonds around the nitrogen in amides is not pyramidal (as in amines) but planar. This planar restriction prevents rotations about the C–N linkage and thus has important consequences for the mechanical properties of bulk material of such molecules, and also for the configurational properties of macromolecules built by such bonds. The inability to rotate distinguishes amide groups from ester groups, which allow rotation and thus create more flexible bulk material.
The C-C(O)NR2 core of amides is planar. The C=O distance is shorter than the C-N distance by almost 10%. The structure of an amide can be described also as a resonance between two alternative structures: neutral (A) and zwitterionic (B).
It is estimated that for acetamide, structure A makes a 62% contribution to the structure, while structure B makes a 28% contribution (these figures do not sum to 100% because there are additional less-important resonance forms that are not depicted above). There is also a hydrogen bond present between the hydrogen and nitrogen atoms in the active groups. Resonance is largely prevented in the very strained quinuclidone.
In their IR spectra, amides exhibit a moderately intense νCO band near 1650 cm−1. The energy of this band is about 60 cm−1 lower than for the νCO of esters and ketones. This difference reflects the contribution of the zwitterionic resonance structure.
Basicity
Compared to amines, amides are very weak bases. While the conjugate acid of an amine has a pKa of about 9.5, the conjugate acid of an amide has a pKa around −0.5. Therefore, compared to amines, amides do not have acid–base properties that are as noticeable in water. This relative lack of basicity is explained by the withdrawing of electrons from the amine by the carbonyl. On the other hand, amides are much stronger bases than carboxylic acids, esters, aldehydes, and ketones (their conjugate acids' pKas are between −6 and −10).
The proton of a primary or secondary amide does not dissociate readily; its pKa is usually well above 15. Conversely, under extremely acidic conditions, the carbonyl oxygen can become protonated with a pKa of roughly −1. It is not only because of the positive charge on the nitrogen but also because of the negative charge on the oxygen gained through resonance.
Hydrogen bonding and solubility
Because of the greater electronegativity of oxygen than nitrogen, the carbonyl (C=O) is a stronger dipole than the N–C dipole. The presence of a C=O dipole and, to a lesser extent a N–C dipole, allows amides to act as H-bond acceptors. In primary and secondary amides, the presence of N–H dipoles allows amides to function as H-bond donors as well. Thus amides can participate in hydrogen bonding with water and other protic solvents; the oxygen atom can accept hydrogen bonds from water and the N–H hydrogen atoms can donate H-bonds. As a result of interactions such as these, the water solubility of amides is greater than that of corresponding hydrocarbons. These hydrogen bonds also have an important role in the secondary structure of proteins.
The solubilities of amides and esters are roughly comparable. Typically amides are less soluble than comparable amines and carboxylic acids since these compounds can both donate and accept hydrogen bonds. Tertiary amides, with the important exception of N,N-dimethylformamide, exhibit low solubility in water.
Reactions
Amides do not readily participate in nucleophilic substitution reactions. Amides are stable to water, and are roughly 100 times more stable towards hydrolysis than esters. Amides can, however, be hydrolyzed to carboxylic acids in the presence of acid or base. The stability of amide bonds has biological implications, since the amino acids that make up proteins are linked with amide bonds. Amide bonds are resistant enough to hydrolysis to maintain protein structure in aqueous environments but are susceptible to catalyzed hydrolysis.
Primary and secondary amides do not react usefully with carbon nucleophiles. Instead, Grignard reagents and organolithiums deprotonate an amide N-H bond. Tertiary amides do not experience this problem, and react with carbon nucleophiles to give ketones; the amide anion (NR2−) is a very strong base and thus a very poor leaving group, so nucleophilic attack only occurs once. When reacted with carbon nucleophiles, N,N-dimethylformamide (DMF) can be used to introduce a formyl group.
Here, phenyllithium 1 attacks the carbonyl group of DMF 2, giving tetrahedral intermediate 3. Because the dimethylamide anion is a poor leaving group, the intermediate does not collapse and another nucleophilic addition does not occur. Upon acidic workup, the alkoxide is protonated to give 4, then the amine is protonated to give 5. Elimination of a neutral molecule of dimethylamine and loss of a proton give benzaldehyde, 6.
Hydrolysis
Amides hydrolyse in hot alkali as well as in strong acidic conditions. Acidic conditions yield the carboxylic acid and the ammonium ion while basic hydrolysis yields the carboxylate ion and ammonia. The protonation of the initially generated amine under acidic conditions and the deprotonation of the initially generated carboxylic acid under basic conditions render these processes non-catalytic and irreversible. Electrophiles other than protons react with the carbonyl oxygen. This step often precedes hydrolysis, which is catalyzed by both Brønsted acids and Lewis acids. Peptidase enzymes and some synthetic catalysts often operate by attachment of electrophiles to the carbonyl oxygen.
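Schematically, with R and R′ denoting generic organyl groups or hydrogen (these are the usual textbook net equations, included here for illustration):

acidic hydrolysis: R−C(=O)NR′2 + H2O + H+ → R−CO2H + R′2NH2+
basic hydrolysis: R−C(=O)NR′2 + OH− → R−CO2− + R′2NH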
Synthesis
From carboxylic acids and related compounds
Amides are usually prepared by coupling a carboxylic acid with an amine. The direct reaction generally requires high temperatures to drive off the water:
RCO2H + R′2NH → R−C(=O)NR′2 + H2O
Esters are far superior substrates relative to carboxylic acids.
Further "activating" both acid chlorides (Schotten-Baumann reaction) and anhydrides (Lumière–Barbier method) react with amines to give amides:
Peptide synthesis uses coupling agents such as HATU, HOBt, or PyBOP.
From nitriles
The hydrolysis of nitriles is conducted on an industrial scale to produce fatty amides. Laboratory procedures are also available.
Specialty routes
Many specialized methods also yield amides. A variety of reagents, e.g. tris(2,2,2-trifluoroethyl) borate have been developed for specialized applications.
See also
Amidogen
Amino radical
Amidicity
Imidic acid
Metal amides
References
External links
IUPAC Compendium of Chemical Terminology
Functional groups | Amide | [
"Chemistry"
] | 2,111 | [
"Functional groups"
] |
1,456 | https://en.wikipedia.org/wiki/AWK | AWK is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. Like sed and grep, it is a filter, and it is a standard feature of most Unix-like operating systems.
The AWK language is a data-driven scripting language consisting of a set of actions to be taken against streams of textual data – either run directly on files or used as part of a pipeline – for purposes of extracting or transforming text, such as producing formatted reports. The language extensively uses the string datatype, associative arrays (that is, arrays indexed by key strings), and regular expressions. While AWK has a limited intended application domain and was especially designed to support one-liner programs, the language is Turing-complete, and even the early Bell Labs users of AWK often wrote well-structured large AWK programs.
AWK was created at Bell Labs in the 1970s, and its name is derived from the surnames of its authors: Alfred Aho (author of egrep), Peter Weinberger (who worked on tiny relational databases), and Brian Kernighan. The acronym is pronounced the same as the name of the bird species auk, which is illustrated on the cover of The AWK Programming Language. When written in all lowercase letters, as awk, it refers to the Unix or Plan 9 program that runs scripts written in the AWK programming language.
History
According to Brian Kernighan, one of the goals of AWK was to have a tool that would easily manipulate both numbers and strings. AWK was also inspired by Marc Rochkind's programming language that was used to search for patterns in input data, and was implemented using yacc.
As one of the early tools to appear in Version 7 Unix, AWK added computational features to a Unix pipeline besides the Bourne shell, the only scripting language available in a standard Unix environment. It is one of the mandatory utilities of the Single UNIX Specification, and is required by the Linux Standard Base specification.
In 1983, AWK was one of several UNIX tools available for Charles River Data Systems' UNOS operating system under Bell Laboratories license.
AWK was significantly revised and expanded in 1985–88, resulting in the GNU AWK implementation written by Paul Rubin, Jay Fenlason, and Richard Stallman, released in 1988. GNU AWK may be the most widely deployed version because it is included with GNU-based Linux packages. GNU AWK has been maintained solely by Arnold Robbins since 1994. Brian Kernighan's nawk (New AWK) source was first released in 1993 without publicity, and has been publicly available since the late 1990s; many BSD systems use it to avoid the GPL license.
AWK was preceded by sed (1974). Both were designed for text processing. They share the line-oriented, data-driven paradigm, and are particularly suited to writing one-liner programs, due to the implicit main loop and current line variables. The power and terseness of early AWK programs – notably the powerful regular expression handling and conciseness due to implicit variables, which facilitate one-liners – together with the limitations of AWK at the time, were important inspirations for the Perl language (1987). In the 1990s, Perl became very popular, competing with AWK in the niche of Unix text-processing languages.
Structure of AWK programs
An AWK program is a series of pattern action pairs, written as:
condition { action }
condition { action }
...
where condition is typically an expression and action is a series of commands. The input is split into records, where by default records are separated by newline characters so that the input is split into lines. The program tests each record against each of the conditions in turn, and executes the action for each expression that is true. Either the condition or the action may be omitted. The condition defaults to matching every record. The default action is to print the record. This is the same pattern-action structure as sed.
In addition to a simple AWK expression, such as foo == 1 or /^foo/, the condition can be BEGIN or END causing the action to be executed before or after all records have been read, or pattern1, pattern2 which matches the range of records starting with a record that matches pattern1 up to and including the record that matches pattern2 before again trying to match against pattern1 on subsequent lines.
In addition to normal arithmetic and logical operators, AWK expressions include the tilde operator, ~, which matches a regular expression against a string. As handy syntactic sugar, /regexp/ without using the tilde operator matches against the current record; this syntax derives from sed, which in turn inherited it from the ed editor, where / is used for searching. This syntax of using slashes as delimiters for regular expressions was subsequently adopted by Perl and ECMAScript, and is now common. The tilde operator was also adopted by Perl.
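For instance, a short sketch of both forms (the field positions and patterns here are purely illustrative):

$1 ~ /^foo/       { print "first field begins with foo: " $0 }   # tilde tests one field
/error/           { print FILENAME ": " $0 }                     # bare regex matches the whole record
$2 !~ /^[0-9]+$/  { print "second field is not a whole number" }  # !~ negates the match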
Commands
AWK commands are the statements that are substituted for action in the examples above. AWK commands can include function calls, variable assignments, calculations, or any combination thereof. AWK contains built-in support for many functions; many more are provided by the various flavors of AWK. Also, some flavors support the inclusion of dynamically linked libraries, which can also provide more functions.
The print command
The print command is used to output text. The output text is always terminated with a predefined string called the output record separator (ORS) whose default value is a newline. The simplest form of this command is:
print
This displays the contents of the current record. In AWK, records are broken down into fields, and these can be displayed separately:
print $1
Displays the first field of the current record
print $1, $3
Displays the first and third fields of the current record, separated by a predefined string called the output field separator (OFS) whose default value is a single space character
Although these fields ($X) may bear resemblance to variables (the $ symbol indicates variables in the usual Unix shells and in Perl), they actually refer to the fields of the current record. A special case, $0, refers to the entire record. In fact, the commands "print" and "print $0" are identical in functionality.
The print command can also display the results of calculations and/or function calls:
/regex_pattern/ {
# Actions to perform in the event the record (line) matches the above regex_pattern
print 3+2
print foobar(3)
print foobar(variable)
print sin(3-2)
}
Output may be sent to a file:
/regex_pattern/ {
# Actions to perform in the event the record (line) matches the above regex_pattern
print "expression" > "file name"
}
or through a pipe:
/regex_pattern/ {
# Actions to perform in the event the record (line) matches the above regex_pattern
print "expression" | "command"
}
Built-in variables
AWK's built-in variables include the field variables: $1, $2, $3, and so on ($0 represents the entire record). They hold the text or values in the individual text-fields in a record.
Other variables include:
NR: Number of Records. Keeps a current count of the number of input records read so far from all data files. It starts at zero, but is never automatically reset to zero.
FNR: File Number of Records. Keeps a current count of the number of input records read so far in the current file. This variable is automatically reset to zero each time a new file is started.
NF: Number of Fields. Contains the number of fields in the current input record. The last field in the input record can be designated by $NF, the 2nd-to-last field by $(NF-1), the 3rd-to-last field by $(NF-2), etc.
FILENAME: Contains the name of the current input-file.
FS: Field Separator. Contains the "field separator" used to divide fields in the input record. The default, "white space", allows any sequence of space and tab characters. FS can be reassigned with another character or character sequence to change the field separator.
RS: Record Separator. Stores the current "record separator" character. Since, by default, an input line is the input record, the default record separator character is a "newline".
OFS: Output Field Separator. Stores the "output field separator", which separates the fields when awk prints them. The default is a "space" character.
ORS: Output Record Separator. Stores the "output record separator", which separates the output records when awk prints them. The default is a "newline" character.
OFMT: Output Format. Stores the format for numeric output. The default format is "%.6g".
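As a brief sketch of several of these variables working together (the colon-separated input format, such as a Unix passwd file, is only an example):

BEGIN { FS = ":"; OFS = "\t" }      # read colon-separated fields, write tab-separated output
      { print NR, NF, $1 }          # record number, number of fields, and first field
END   { print "records read:", NR } # NR keeps its final value in the END block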
Variables and syntax
Variable names can use any of the characters [A-Za-z0-9_], with the exception of language keywords, and cannot begin with a numeric digit. The operators + - * / represent addition, subtraction, multiplication, and division, respectively. For string concatenation, simply place two variables (or string constants) next to each other. It is optional to use a space in between if string constants are involved, but two variable names placed adjacent to each other require a space in between. Double quotes delimit string constants. Statements need not end with semicolons. Finally, comments can be added to programs by using # as the first character on a line, or behind a command or sequence of commands.
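A short sketch illustrating these rules (the field layout is illustrative):

{
    first = $1; last = $2     # two statements on one line, separated by a semicolon
    full = first " " last     # string concatenation: operands are simply placed side by side
    n = 2 + 3 * 4             # ordinary arithmetic; n is 14
    print full ": " n         # numbers are converted to strings when concatenated
}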
User-defined functions
In a format similar to C, function definitions consist of the keyword function, the function name, argument names and the function body. Here is an example of a function.
function add_three(number) {
return number + 3
}
This statement can be invoked as follows:
(pattern) {
print add_three(36) # Outputs '''39'''
}
Functions can have variables that are in the local scope. The names of these are added to the end of the argument list, though values for these should be omitted when calling the function. It is convention to add some whitespace in the argument list before the local variables, to indicate where the parameters end and the local variables begin.
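For example, a sketch of a function that joins a range of fields, where i and out (listed after the extra whitespace) are local variables; the name join_fields is made up for illustration:

function join_fields(start, end, sep,    i, out) {
    out = $start
    for (i = start + 1; i <= end; i++)
        out = out sep $i
    return out
}

{ print join_fields(1, NF, "-") }    # e.g. turns "a b c" into "a-b-c"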
Examples
Hello, World!
Here is the customary "Hello, World!" program written in AWK:
BEGIN {
print "Hello, world!"
exit
}
Print lines longer than 80 characters
Print all lines longer than 80 characters. The default action is to print the current line.
length($0) > 80
Count words
Count words in the input and print the number of lines, words, and characters (like wc):
{
words += NF
chars += length + 1 # add one to account for the newline character at the end of each record (line)
}
END { print NR, words, chars }
As there is no pattern for the first line of the program, every line of input matches by default, so the increment actions are executed for every line. words += NF is shorthand for words = words + NF.
Sum last word
{ s += $NF }
END { print s + 0 }
s is incremented by the numeric value of $NF, which is the last word on the line as defined by AWK's field separator (by default, white-space). NF is the number of fields in the current line, e.g. 4. Since $4 is the value of the fourth field, $NF is the value of the last field in the line regardless of how many fields this line has, or whether it has more or fewer fields than surrounding lines. $ is actually a unary operator with the highest operator precedence. (If the line has no fields, then NF is 0, $0 is the whole line, which in this case is empty apart from possible white-space, and so has the numeric value 0.)
At the end of the input the END pattern matches, so s is printed. However, since there may have been no lines of input at all, in which case no value has ever been assigned to s, it will by default be an empty string. Adding zero to a variable is an AWK idiom for coercing it from a string to a numeric value. (Concatenating an empty string coerces from a number to a string, e.g. s "". Note that there is no operator to concatenate strings; they are simply placed adjacently.) With the coercion, the program prints "0" on empty input; without it, an empty line is printed.
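A minimal sketch of both coercion idioms:

BEGIN {
    s = ""             # unset or empty variable
    print s + 0        # prints 0: adding zero forces numeric interpretation
    n = 3.0
    print n ""         # prints 3: concatenating the empty string forces string conversion
}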
Match a range of input lines
NR % 4 == 1, NR % 4 == 3 { printf "%6d %s\n", NR, $0 }
The action statement prints each line numbered. The printf function emulates the standard C printf and works similarly to the print command described above. The pattern to match, however, works as follows: NR is the number of records, typically lines of input, AWK has so far read, i.e. the current line number, starting at 1 for the first line of input. % is the modulo operator. NR % 4 == 1 is true for the 1st, 5th, 9th, etc., lines of input. Likewise, NR % 4 == 3 is true for the 3rd, 7th, 11th, etc., lines of input. The range pattern is false until the first part matches, on line 1, and then remains true up to and including when the second part matches, on line 3. It then stays false until the first part matches again on line 5.
Thus, the program prints lines 1,2,3, skips line 4, and then 5,6,7, and so on. For each line, it prints the line number (on a 6 character-wide field) and then the line contents. For example, when executed on this input:
Rome
Florence
Milan
Naples
Turin
Venice
The previous program prints:
1 Rome
2 Florence
3 Milan
5 Turin
6 Venice
Printing the initial or the final part of a file
As a special case, when the first part of a range pattern is constantly true, e.g. 1, the range will start at the beginning of the input. Similarly, if the second part is constantly false, e.g. 0, the range will continue until the end of input. For example,
/^--cut here--$/, 0
prints lines of input from the first line matching the regular expression ^--cut here--$, that is, a line containing only the phrase "--cut here--", to the end.
Calculate word frequencies
Word frequency using associative arrays:
BEGIN {
FS="[^a-zA-Z]+"
}
{
for (i=1; i<=NF; i++)
words[tolower($i)]++
}
END {
for (i in words)
print i, words[i]
}
The BEGIN block sets the field separator to any sequence of non-alphabetic characters. Separators can be regular expressions. After that, we get to a bare action, which performs the action on every input line. In this case, for every field on the line, we add one to the number of times that word, first converted to lowercase, appears. Finally, in the END block, we print the words with their frequencies. The line
for (i in words)
creates a loop that goes through the array words, setting i to each subscript of the array. This is different from most languages, where such a loop goes through each value in the array. The loop thus prints out each word followed by its frequency count. tolower was an addition to the One True awk (see below) made after the book was published.
Match pattern from command line
This program can be represented in several ways. The first one uses the Bourne shell to make a shell script that does everything. It is the shortest of these methods:
#!/bin/sh
pattern="$1"
shift
awk '/'"$pattern"'/ { print FILENAME ":" $0 }' "$@"
The $pattern in the awk command is not protected by single quotes, so that the shell does expand the variable, but it needs to be put in double quotes to handle patterns containing spaces correctly. A pattern by itself, used in the usual way, checks whether the whole line ($0) matches. FILENAME contains the current filename. awk has no explicit concatenation operator; two adjacent strings are concatenated. $0 expands to the original, unchanged input line.
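If this wrapper were saved as, say, match.sh and made executable, it might be invoked as follows (the script and file names are illustrative):
./match.sh 'Rome|Florence' cities1.txt cities2.txt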
There are alternate ways of writing this. This shell script accesses the environment directly from within awk:
#!/bin/sh
export pattern="$1"
shift
awk '$0 ~ ENVIRON["pattern"] { print FILENAME ":" $0 }' "$@"
This is a shell script that uses ENVIRON, an array introduced in a newer version of the One True awk after the book was published. The subscript of ENVIRON is the name of an environment variable; its result is the variable's value. This is like the getenv function in various standard libraries and POSIX. The shell script makes an environment variable pattern containing the first argument, then drops that argument and has awk look for the pattern in each file.
~ checks to see if its left operand matches its right operand; !~ is its inverse. A regular expression is just a string and can be stored in variables.
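As a small illustration of these operators (the pattern here is arbitrary):
$0 ~ /error/  { print "match:    " $0 }    # lines containing "error"
$0 !~ /error/ { print "no match: " $0 }    # all other lines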
The next way uses command-line variable assignment, in which an argument to awk can be seen as an assignment to a variable:
#!/bin/sh
pattern="$1"
shift
awk '$0 ~ pattern { print FILENAME ":" $0 }' pattern="$pattern" "$@"
Or you can use the -v var=value command-line option (e.g. awk -v pattern="$pattern" ...), as sketched below.
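A sketch of the same wrapper using -v instead of a trailing assignment (the behavior is equivalent for this example, except that -v assignments take effect before the BEGIN block):
#!/bin/sh
pattern="$1"
shift
awk -v pattern="$pattern" '$0 ~ pattern { print FILENAME ":" $0 }' "$@"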
Finally, this version is written in pure AWK, without help from a shell and without the need to know too much about how the script is invoked (as the command-line variable-assignment version does require), but it is a bit lengthy:
BEGIN {
pattern = ARGV[1]
for (i = 1; i < ARGC; i++) # remove first argument
ARGV[i] = ARGV[i + 1]
ARGC--
if (ARGC == 1) { # the pattern was the only thing, so force read from standard input (used by book)
ARGC = 2
ARGV[1] = "-"
}
}
$0 ~ pattern { print FILENAME ":" $0 }
The BEGIN is necessary not only to extract the first argument, but also to prevent it from being interpreted as a filename after the BEGIN block ends. ARGC, the number of arguments, is always guaranteed to be ≥1, as ARGV[0] is the name of the command that executed the script, most often the string "awk". ARGV[ARGC] is the empty string, "". # initiates a comment that extends to the end of the line.
Note the if block. awk only checks to see if it should read from standard input before it runs the command. This means that
awk 'prog'
only works because awk checks whether there are any filenames just once, before prog is run. If you explicitly set ARGC to 1 so that there are no arguments, awk will simply quit, since it concludes there are no more input files. Therefore, you need to state explicitly that it should read from standard input, using the special filename -.
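Assuming the pure-AWK program is saved as, say, match.awk (the names here are illustrative), it is invoked with the pattern as the first argument, followed by any file names; with no file names it reads standard input:
awk -f match.awk 'Rome' cities.txt
awk -f match.awk 'Rome' < cities.txt    # pattern only, so the script falls back to standard input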
Self-contained AWK scripts
On Unix-like operating systems self-contained AWK scripts can be constructed using the shebang syntax.
For example, a script that sends the content of a given file to standard output may be built by creating a file named print.awk with the following content:
#!/usr/bin/awk -f
{ print $0 }
It can be invoked with: ./print.awk <filename>
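This assumes the script has first been made executable, for example with:
chmod +x print.awk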
The -f tells awk that the argument that follows is the file to read the AWK program from, which is the same flag that is used in sed. Since they are often used for one-liners, both these programs default to executing a program given as a command-line argument, rather than a separate file.
Versions and implementations
AWK was originally written in 1977 and distributed with Version 7 Unix.
In 1985 its authors started expanding the language, most significantly by adding user-defined functions. The language is described in the book The AWK Programming Language, published 1988, and its implementation was made available in releases of UNIX System V. To avoid confusion with the incompatible older version, this version was sometimes called "new awk" or nawk. This implementation was released under a free software license in 1996 and is still maintained by Brian Kernighan (see external links below).
Old versions of Unix, such as UNIX/32V, included awkcc, which converted AWK to C. Kernighan wrote a program to turn awk into C++; its state is not known.
BWK awk, also known as nawk, refers to the version by Brian Kernighan. It has been dubbed the "One True AWK" because of the use of the term in association with the book that originally described the language and the fact that Kernighan was one of the original authors of AWK. FreeBSD refers to this version as one-true-awk. This version also has features not in the book, such as tolower and ENVIRON that are explained above; see the FIXES file in the source archive for details. This version is used by, for example, Android, FreeBSD, NetBSD, OpenBSD, macOS, and illumos. Brian Kernighan and Arnold Robbins are the main contributors to a source repository for nawk: .
gawk (GNU awk) is another free-software implementation and the only implementation that makes serious progress implementing internationalization and localization and TCP/IP networking. It was written before the original implementation became freely available. It includes its own debugger, and its profiler enables the user to make measured performance enhancements to a script. It also enables the user to extend functionality with shared libraries. Some Linux distributions include gawk as their default AWK implementation. As of version 5.2 (September 2022) gawk includes a persistent memory feature that can remember script-defined variables and functions from one invocation of a script to the next and pass data between unrelated scripts, as described in the Persistent-Memory gawk User Manual: .
gawk-csv. The CSV extension of gawk provides facilities for inputting and outputting CSV formatted data.
mawk is a very fast AWK implementation by Mike Brennan based on a bytecode interpreter.
libmawk is a fork of mawk, allowing applications to embed multiple parallel instances of awk interpreters.
awka (whose front end is written atop the mawk program) is another translator of AWK scripts into C code. When compiled, statically including the author's libawka.a, the resulting executables are considerably sped up and, according to the author's tests, compare very well with other versions of AWK, Perl, or Tcl. Small scripts will turn into programs of 160–170 kB.
tawk (Thompson AWK) is an AWK compiler for Solaris, DOS, OS/2, and Windows, previously sold by Thompson Automation Software (which has ceased its activities).
Jawk is a project to implement AWK in Java, hosted on SourceForge. Extensions to the language are added to provide access to Java features within AWK scripts (i.e., Java threads, sockets, collections, etc.).
xgawk is a fork of gawk that extends gawk with dynamically loadable libraries. The XMLgawk extension was integrated into the official GNU Awk release 4.1.0.
QSEAWK is an embedded AWK interpreter implementation included in the QSE library that provides embedding application programming interface (API) for C and C++.
libfawk is a very small, function-only, reentrant, embeddable interpreter written in C.
BusyBox includes an AWK implementation written by Dmitry Zakharov. This is a very small implementation suitable for embedded systems.
CLAWK by Michael Parker provides an AWK implementation in Common Lisp, based upon the regular expression library of the same author.
goawk is an AWK implementation in Go with a few convenience extensions by Ben Hoyt, hosted on GitHub.
The gawk manual has a list of more AWK implementations.
Books
See also
Data transformation
Event-driven programming
List of Unix commands
sed
References
Further reading
Interview with Alfred V. Aho on AWK
AWK – Become an expert in 60 minutes
External links
The Amazing Awk Assembler by Henry Spencer.
awklang.org The site for things related to the awk language
1977 software
Cross-platform software
Domain-specific programming languages
Free and open source interpreters
Pattern matching programming languages
Plan 9 commands
Programming languages created in 1977
Scripting languages
Standard Unix programs
Text-oriented programming languages
Unix SUS2008 utilities
Unix text processing utilities | AWK | [
"Technology"
] | 5,414 | [
"Computing commands",
"Plan 9 commands",
"Standard Unix programs"
] |
1,461 | https://en.wikipedia.org/wiki/Apollo%20program | The Apollo program, also known as Project Apollo, was the United States human spaceflight program led by NASA, which succeeded in landing the first men on the Moon in 1969, following Project Mercury, which put the first Americans in space. It was conceived in 1960 as a three-person spacecraft during President Dwight D. Eisenhower's administration. Apollo was later dedicated to President John F. Kennedy's national goal for the 1960s of "landing a man on the Moon and returning him safely to the Earth" in an address to Congress on May 25, 1961. It was the third US human spaceflight program to fly, preceded by Project Gemini conceived in 1961 to extend spaceflight capability in support of Apollo.
Kennedy's goal was accomplished on the Apollo 11 mission when astronauts Neil Armstrong and Buzz Aldrin landed their Apollo Lunar Module (LM) on July 20, 1969, and walked on the lunar surface, while Michael Collins remained in lunar orbit in the command and service module (CSM), and all three landed safely on Earth in the Pacific Ocean on July 24. Five subsequent Apollo missions also landed astronauts on the Moon, the last, Apollo 17, in December 1972. In these six spaceflights, twelve people walked on the Moon.
Apollo ran from 1961 to 1972, with the first crewed flight in 1968. It encountered a major setback in 1967 when an Apollo 1 cabin fire killed the entire crew during a prelaunch test. After the first successful landing, sufficient flight hardware remained for nine follow-on landings with a plan for extended lunar geological and astrophysical exploration. Budget cuts forced the cancellation of three of these. Five of the remaining six missions achieved successful landings, but the Apollo 13 landing had to be aborted after an oxygen tank exploded en route to the Moon, crippling the CSM. The crew barely managed a safe return to Earth by using the lunar module as a "lifeboat" on the return journey. Apollo used the Saturn family of rockets as launch vehicles, which were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three crewed missions in 1973–1974, and the Apollo–Soyuz Test Project, a joint United States-Soviet Union low Earth orbit mission in 1975.
Apollo set several major human spaceflight milestones. It stands alone in sending crewed missions beyond low Earth orbit. Apollo 8 was the first crewed spacecraft to orbit another celestial body, and Apollo 11 was the first crewed spacecraft to land humans on one.
Overall, the Apollo program returned of lunar rocks and soil to Earth, greatly contributing to the understanding of the Moon's composition and geological history. The program laid the foundation for NASA's subsequent human spaceflight capability and funded construction of its Johnson Space Center and Kennedy Space Center. Apollo also spurred advances in many areas of technology incidental to rocketry and human spaceflight, including avionics, telecommunications, and computers.
Name
The program was named after Apollo, the Greek god of light, music, and the Sun, by NASA manager Abe Silverstein, who later said, "I was naming the spacecraft like I'd name my baby." Silverstein chose the name at home one evening, early in 1960, because he felt "Apollo riding his chariot across the Sun was appropriate to the grand scale of the proposed program".
The context of this was that the program focused at its beginning mainly on developing an advanced crewed spacecraft, the Apollo command and service module, succeeding the Mercury program. A lunar landing became the focus of the program only in 1961. Thereafter Project Gemini instead followed the Mercury program to test and study advanced crewed spaceflight technology.
Background
Origin and spacecraft feasibility studies
The Apollo program was conceived during the Eisenhower administration in early 1960, as a follow-up to Project Mercury. While the Mercury capsule could support only one astronaut on a limited Earth orbital mission, Apollo would carry three. Possible missions included ferrying crews to a space station, circumlunar flights, and eventual crewed lunar landings.
In July 1960, NASA Deputy Administrator Hugh L. Dryden announced the Apollo program to industry representatives at a series of Space Task Group conferences. Preliminary specifications were laid out for a spacecraft with a mission module cabin separate from the command module (piloting and reentry cabin), and a propulsion and equipment module. On August 30, a feasibility study competition was announced, and on October 25, three study contracts were awarded to General Dynamics/Convair, General Electric, and the Glenn L. Martin Company. Meanwhile, NASA performed its own in-house spacecraft design studies led by Maxime Faget, to serve as a gauge to judge and monitor the three industry designs.
Political pressure builds
In November 1960, John F. Kennedy was elected president after a campaign that promised American superiority over the Soviet Union in the fields of space exploration and missile defense. Up to the election of 1960, Kennedy had been speaking out against the "missile gap" that he and many other senators said had developed between the Soviet Union and the United States due to the inaction of President Eisenhower. Beyond military power, Kennedy used aerospace technology as a symbol of national prestige, pledging to make the US not "first but, first and, first if, but first period". Despite Kennedy's rhetoric, he did not immediately come to a decision on the status of the Apollo program once he became president. He knew little about the technical details of the space program, and was put off by the massive financial commitment required by a crewed Moon landing. When Kennedy's newly appointed NASA Administrator James E. Webb requested a 30 percent budget increase for his agency, Kennedy supported an acceleration of NASA's large booster program but deferred a decision on the broader issue.
On April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person to fly in space, reinforcing American fears about being left behind in a technological competition with the Soviet Union. At a meeting of the US House Committee on Science and Astronautics one day after Gagarin's flight, many congressmen pledged their support for a crash program aimed at ensuring that America would catch up. Kennedy was circumspect in his response to the news, refusing to make a commitment on America's response to the Soviets.
On April 20, Kennedy sent a memo to Vice President Lyndon B. Johnson, asking Johnson to look into the status of America's space program, and into programs that could offer NASA the opportunity to catch up. Johnson responded approximately one week later, concluding that "we are neither making maximum effort nor achieving results necessary if this country is to reach a position of leadership." His memo concluded that a crewed Moon landing was far enough in the future that it was likely the United States would achieve it first.
On May 25, 1961, twenty days after the first US crewed spaceflight Freedom 7, Kennedy proposed the crewed Moon landing in a Special Message to the Congress on Urgent National Needs:
NASA expansion
At the time of Kennedy's proposal, only one American had flown in space—less than a month earlier—and NASA had not yet sent an astronaut into orbit. Even some NASA employees doubted whether Kennedy's ambitious goal could be met. By 1963, Kennedy even came close to agreeing to a joint US-USSR Moon mission, to eliminate duplication of effort.
With the clear goal of a crewed landing replacing the more nebulous goals of space stations and circumlunar flights, NASA decided that, in order to make progress quickly, it would discard the feasibility study designs of Convair, GE, and Martin, and proceed with Faget's command and service module design. The mission module was determined to be useful only as an extra room, and therefore unnecessary. They used Faget's design as the specification for another competition for spacecraft procurement bids in October 1961. On November 28, 1961, it was announced that North American Aviation had won the contract, although its bid was not rated as good as the Martin proposal. Webb, Dryden and Robert Seamans chose it in preference due to North American's longer association with NASA and its predecessor.
Landing humans on the Moon by the end of 1969 required the most sudden burst of technological creativity, and the largest commitment of resources ($25 billion; $ in US dollars) ever made by any nation in peacetime. At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities.
On July 1, 1960, NASA established the Marshall Space Flight Center (MSFC) in Huntsville, Alabama. MSFC designed the heavy lift-class Saturn launch vehicles, which would be required for Apollo.
Manned Spacecraft Center
It became clear that managing the Apollo program would exceed the capabilities of Robert R. Gilruth's Space Task Group, which had been directing the nation's crewed space program from NASA's Langley Research Center. So Gilruth was given authority to grow his organization into a new NASA center, the Manned Spacecraft Center (MSC). A site was chosen in Houston, Texas, on land donated by Rice University, and Administrator Webb announced the conversion on September 19, 1961. It was also clear NASA would soon outgrow its practice of controlling missions from its Cape Canaveral Air Force Station launch facilities in Florida, so a new Mission Control Center would be included in the MSC.
In September 1962, by which time two Project Mercury astronauts had orbited the Earth, Gilruth had moved his organization to rented space in Houston, and construction of the MSC facility was under way, Kennedy visited Rice to reiterate his challenge in a famous speech:
The MSC was completed in September 1963. It was renamed by the US Congress in honor of Lyndon B. Johnson soon after his death in 1973.
Launch Operations Center
It also became clear that Apollo would outgrow the Canaveral launch facilities in Florida. The two newest launch complexes were already being built for the Saturn I and IB rockets at the northernmost end: LC-34 and LC-37. But an even bigger facility would be needed for the mammoth rocket required for the crewed lunar mission, so land acquisition was started in July 1961 for a Launch Operations Center (LOC) immediately north of Canaveral at Merritt Island. The design, development and construction of the center was conducted by Kurt H. Debus, a member of Wernher von Braun's original V-2 rocket engineering team. Debus was named the LOC's first Director. Construction began in November 1962. Following Kennedy's death, President Johnson issued an executive order on November 29, 1963, to rename the LOC and Cape Canaveral in honor of Kennedy.
The LOC included Launch Complex 39, a Launch Control Center, and a Vertical Assembly Building (VAB), in which the space vehicle (launch vehicle and spacecraft) would be assembled on a mobile launcher platform and then moved by a crawler-transporter to one of several launch pads. Although at least three pads were planned, only two, designated A and B, were completed in October 1965. The LOC also included an Operations and Checkout Building (OCB), at which Gemini and Apollo spacecraft were initially received prior to being mated to their launch vehicles. The Apollo spacecraft could be tested in two vacuum chambers capable of simulating atmospheric pressure at altitudes up to , which is nearly a vacuum.
Organization
Administrator Webb realized that in order to keep Apollo costs under control, he had to develop greater project management skills in his organization, so he recruited George E. Mueller for a high management job. Mueller accepted, on the condition that he have a say in NASA reorganization necessary to effectively administer Apollo. Webb then worked with Associate Administrator (later Deputy Administrator) Seamans to reorganize the Office of Manned Space Flight (OMSF). On July 23, 1963, Webb announced Mueller's appointment as Deputy Associate Administrator for Manned Space Flight, to replace then Associate Administrator D. Brainerd Holmes on his retirement effective September 1. Under Webb's reorganization, the directors of the Manned Spacecraft Center (Gilruth), Marshall Space Flight Center (von Braun), and the Launch Operations Center (Debus) reported to Mueller.
Based on his industry experience on Air Force missile projects, Mueller realized some skilled managers could be found among high-ranking officers in the U.S. Air Force, so he got Webb's permission to recruit General Samuel C. Phillips, who gained a reputation for his effective management of the Minuteman program, as OMSF program controller. Phillips's superior officer Bernard A. Schriever agreed to loan Phillips to NASA, along with a staff of officers under him, on the condition that Phillips be made Apollo Program Director. Mueller agreed, and Phillips managed Apollo from January 1964, until it achieved the first human landing in July 1969, after which he returned to Air Force duty.
Charles Fishman, in One Giant Leap, estimated the number of people and organizations involved into the Apollo program as "410,000 men and women at some 20,000 different companies contributed to the effort".
Choosing a mission mode
Once Kennedy had defined a goal, the Apollo mission planners were faced with the challenge of designing a spacecraft that could meet it while minimizing risk to human life, limiting cost, and not exceeding limits in possible technology and astronaut skill. Four possible mission modes were considered:
Direct Ascent: The spacecraft would be launched as a unit and travel directly to the lunar surface, without first going into lunar orbit. An Earth return ship would land all three astronauts atop a descent propulsion stage, which would be left on the Moon. This design would have required development of the extremely powerful Saturn C-8 or Nova launch vehicle to carry a payload to the Moon.
Earth Orbit Rendezvous (EOR): Multiple rocket launches (up to 15 in some plans) would carry parts of the Direct Ascent spacecraft and propulsion units for translunar injection (TLI). These would be assembled into a single spacecraft in Earth orbit.
Lunar Surface Rendezvous: Two spacecraft would be launched in succession. The first, an automated vehicle carrying propellant for the return to Earth, would land on the Moon, to be followed some time later by the crewed vehicle. Propellant would have to be transferred from the automated vehicle to the crewed vehicle.
Lunar Orbit Rendezvous (LOR): This turned out to be the winning configuration, which achieved the goal with Apollo 11 on July 20, 1969: a single Saturn V launched a spacecraft composed of an Apollo command and service module, which remained in orbit around the Moon, and a two-stage Apollo Lunar Module, which two astronauts flew down to the surface and back up to dock with the command module, after which it was discarded. Landing the smaller spacecraft on the Moon, and returning an even smaller part of it to lunar orbit, minimized the total mass to be launched from Earth, but this was the last method initially considered because of the perceived risk of rendezvous and docking.
In early 1961, direct ascent was generally the mission mode in favor at NASA. Many engineers feared that rendezvous and docking, maneuvers that had not been attempted in Earth orbit, would be nearly impossible in lunar orbit. LOR advocates including John Houbolt at Langley Research Center emphasized the important weight reductions that were offered by the LOR approach. Throughout 1960 and 1961, Houbolt campaigned for the recognition of LOR as a viable and practical option. Bypassing the NASA hierarchy, he sent a series of memos and reports on the issue to Associate Administrator Robert Seamans; while acknowledging that he spoke "somewhat as a voice in the wilderness", Houbolt pleaded that LOR should not be discounted in studies of the question.
Seamans's establishment of an ad hoc committee headed by his special technical assistant Nicholas E. Golovin in July 1961, to recommend a launch vehicle to be used in the Apollo program, represented a turning point in NASA's mission mode decision. This committee recognized that the chosen mode was an important part of the launch vehicle choice, and recommended in favor of a hybrid EOR-LOR mode. Its consideration of LOR—as well as Houbolt's ceaseless work—played an important role in publicizing the workability of the approach. In late 1961 and early 1962, members of the Manned Spacecraft Center began to come around to support LOR, including the newly hired deputy director of the Office of Manned Space Flight, Joseph Shea, who became a champion of LOR. The engineers at Marshall Space Flight Center (MSFC), who were heavily invested in direct ascent, took longer to become convinced of its merits, but their conversion was announced by Wernher von Braun at a briefing on June 7, 1962.
But even after NASA reached internal agreement, it was far from smooth sailing. Kennedy's science advisor Jerome Wiesner, who had expressed his opposition to human spaceflight to Kennedy before the President took office and had opposed the decision to land people on the Moon, hired Golovin, who had left NASA, to chair his own "Space Vehicle Panel", ostensibly to monitor, but actually to second-guess, NASA's decisions on the Saturn V launch vehicle and LOR. The panel forced Shea, Seamans, and even Webb to defend themselves, delayed the formal announcement to the press until July 11, 1962, and forced Webb to still hedge the decision as "tentative".
Wiesner kept up the pressure, even making the disagreement public during a two-day September visit by the President to Marshall Space Flight Center. Wiesner blurted out "No, that's no good" in front of the press, during a presentation by von Braun. Webb jumped in and defended von Braun, until Kennedy ended the squabble by stating that the matter was "still subject to final review". Webb held firm and issued a request for proposal to candidate Lunar Excursion Module (LEM) contractors. Wiesner finally relented, unwilling to settle the dispute once and for all in Kennedy's office, because of the President's involvement with the October Cuban Missile Crisis, and fear of Kennedy's support for Webb. NASA announced the selection of Grumman as the LEM contractor in November 1962.
Space historian James Hansen concludes that:
The LOR method had the advantage of allowing the lander spacecraft to be used as a "lifeboat" in the event of a failure of the command ship. Some documents prove this theory was discussed before and after the method was chosen. In 1964 an MSC study concluded, "The LM [as lifeboat]... was finally dropped, because no single reasonable CSM failure could be identified that would prohibit use of the SPS." Ironically, just such a failure happened on Apollo 13 when an oxygen tank explosion left the CSM without electrical power. The lunar module provided propulsion, electrical power and life support to get the crew home safely.
Spacecraft
Faget's preliminary Apollo design employed a cone-shaped command module, supported by one of several service modules providing propulsion and electrical power, sized appropriately for the space station, cislunar, and lunar landing missions. Once Kennedy's Moon landing goal became official, detailed design began of a command and service module (CSM) in which the crew would spend the entire direct-ascent mission and lift off from the lunar surface for the return trip, after being soft-landed by a larger landing propulsion module. The final choice of lunar orbit rendezvous changed the CSM's role to the translunar ferry used to transport the crew, along with a new spacecraft, the Lunar Excursion Module (LEM, later shortened to LM (Lunar Module) but still pronounced ) which would take two individuals to the lunar surface and return them to the CSM.
Command and service module
The command module (CM) was the conical crew cabin, designed to carry three astronauts from launch to lunar orbit and back to an Earth ocean landing. It was the only component of the Apollo spacecraft to survive without major configuration changes as the program evolved from the early Apollo study designs. Its exterior was covered with an ablative heat shield, and had its own reaction control system (RCS) engines to control its attitude and steer its atmospheric entry path. Parachutes were carried to slow its descent to splashdown. The module was tall, in diameter, and weighed approximately .
A cylindrical service module (SM) supported the command module, with a service propulsion engine and an RCS with propellants, and a fuel cell power generation system with liquid hydrogen and liquid oxygen reactants. A high-gain S-band antenna was used for long-distance communications on the lunar flights. On the extended lunar missions, an orbital scientific instrument package was carried. The service module was discarded just before reentry. The module was long and in diameter. The initial lunar flight version weighed approximately fully fueled, while a later version designed to carry a lunar orbit scientific instrument package weighed just over .
North American Aviation won the contract to build the CSM, and also the second stage of the Saturn V launch vehicle for NASA. Because the CSM design was started early before the selection of lunar orbit rendezvous, the service propulsion engine was sized to lift the CSM off the Moon, and thus was oversized to about twice the thrust required for translunar flight. Also, there was no provision for docking with the lunar module. A 1964 program definition study concluded that the initial design should be continued as Block I which would be used for early testing, while Block II, the actual lunar spacecraft, would incorporate the docking equipment and take advantage of the lessons learned in Block I development.
Apollo Lunar Module
The Apollo Lunar Module (LM) was designed to descend from lunar orbit to land two astronauts on the Moon and take them back to orbit to rendezvous with the command module. Not designed to fly through the Earth's atmosphere or return to Earth, its fuselage was designed totally without aerodynamic considerations and was of an extremely lightweight construction. It consisted of separate descent and ascent stages, each with its own engine. The descent stage contained storage for the descent propellant, surface stay consumables, and surface exploration equipment. The ascent stage contained the crew cabin, ascent propellant, and a reaction control system. The initial LM model weighed approximately , and allowed surface stays up to around 34 hours. An extended lunar module (ELM) weighed over , and allowed surface stays of more than three days. The contract for design and construction of the lunar module was awarded to Grumman Aircraft Engineering Corporation, and the project was overseen by Thomas J. Kelly.
Launch vehicles
Before the Apollo program began, Wernher von Braun and his team of rocket engineers had started work on plans for very large launch vehicles, the Saturn series, and the even larger Nova series. In the midst of these plans, von Braun was transferred from the Army to NASA and was made Director of the Marshall Space Flight Center. The initial direct ascent plan to send the three-person Apollo command and service module directly to the lunar surface, on top of a large descent rocket stage, would require a Nova-class launcher, with a lunar payload capability of over . The June 11, 1962, decision to use lunar orbit rendezvous enabled the Saturn V to replace the Nova, and the MSFC proceeded to develop the Saturn rocket family for Apollo.
Since Apollo, like Mercury, used more than one launch vehicle for space missions, NASA used spacecraft-launch vehicle combination series numbers: AS-10x for Saturn I, AS-20x for Saturn IB, and AS-50x for Saturn V (compare Mercury-Redstone 3, Mercury-Atlas 6) to designate and plan all missions, rather than numbering them sequentially as in Project Gemini. This was changed by the time human flights began.
Little Joe II
Since Apollo, like Mercury, would require a launch escape system (LES) in case of a launch failure, a relatively small rocket was required for qualification flight testing of this system. A rocket bigger than the Little Joe used by Mercury would be required, so the Little Joe II was built by General Dynamics/Convair. After an August 1963 qualification test flight, four LES test flights (A-001 through 004) were made at the White Sands Missile Range between May 1964 and January 1966.
Saturn I
Saturn I, the first US heavy lift launch vehicle, was initially planned to launch partially equipped CSMs in low Earth orbit tests. The S-I first stage burned RP-1 with liquid oxygen (LOX) oxidizer in eight clustered Rocketdyne H-1 engines, to produce of thrust. The S-IV second stage used six liquid hydrogen-fueled Pratt & Whitney RL-10 engines with of thrust. The S-V third stage flew inactively on Saturn I four times.
The first four Saturn I test flights were launched from LC-34, with only the first stage live, carrying dummy upper stages filled with water. The first flight with a live S-IV was launched from LC-37. This was followed by five launches of boilerplate CSMs (designated AS-101 through AS-105) into orbit in 1964 and 1965. The last three of these further supported the Apollo program by also carrying Pegasus satellites, which verified the safety of the translunar environment by measuring the frequency and severity of micrometeorite impacts.
In September 1962, NASA planned to launch four crewed CSM flights on the Saturn I from late 1965 through 1966, concurrent with Project Gemini. The payload capacity would have severely limited the systems which could be included, so the decision was made in October 1963 to use the uprated Saturn IB for all crewed Earth orbital flights.
Saturn IB
The Saturn IB was an upgraded version of the Saturn I. The S-IB first stage increased the thrust to by uprating the H-1 engine. The second stage replaced the S-IV with the S-IVB-200, powered by a single J-2 engine burning liquid hydrogen fuel with LOX, to produce of thrust. A restartable version of the S-IVB was used as the third stage of the Saturn V. The Saturn IB could send over into low Earth orbit, sufficient for a partially fueled CSM or the LM. Saturn IB launch vehicles and flights were designated with an AS-200 series number, "AS" indicating "Apollo Saturn" and the "2" indicating the second member of the Saturn rocket family.
Saturn V
Saturn V launch vehicles and flights were designated with an AS-500 series number, "AS" indicating "Apollo Saturn" and the "5" indicating Saturn V. The three-stage Saturn V was designed to send a fully fueled CSM and LM to the Moon. It was in diameter and stood tall with its lunar payload. Its capability grew to for the later advanced lunar landings. The S-IC first stage burned RP-1/LOX for a rated thrust of , which was upgraded to . The second and third stages burned liquid hydrogen; the third stage was a modified version of the S-IVB, with thrust increased to and capability to restart the engine for translunar injection after reaching a parking orbit.
Astronauts
NASA's director of flight crew operations during the Apollo program was Donald K. "Deke" Slayton, one of the original Mercury Seven astronauts who was medically grounded in September 1962 due to a heart murmur. Slayton was responsible for making all Gemini and Apollo crew assignments.
Thirty-two astronauts were assigned to fly missions in the Apollo program. Twenty-four of these left Earth's orbit and flew around the Moon between December 1968 and December 1972 (three of them twice). Half of the 24 walked on the Moon's surface, though none of them returned to it after landing once. One of the moonwalkers was a trained geologist. Of the 32, Gus Grissom, Ed White, and Roger Chaffee were killed during a ground test in preparation for the Apollo 1 mission.
The Apollo astronauts were chosen from the Project Mercury and Gemini veterans, plus from two later astronaut groups. All missions were commanded by Gemini or Mercury veterans. Crews on all development flights (except the Earth orbit CSM development flights) through the first two landings on Apollo 11 and Apollo 12, included at least two (sometimes three) Gemini veterans. Harrison Schmitt, a geologist, was the first NASA scientist astronaut to fly in space, and landed on the Moon on the last mission, Apollo 17. Schmitt participated in the lunar geology training of all of the Apollo landing crews.
NASA awarded all 32 of these astronauts its highest honor, the Distinguished Service Medal, given for "distinguished service, ability, or courage", and personal "contribution representing substantial progress to the NASA mission". The medals were awarded posthumously to Grissom, White, and Chaffee in 1969, then to the crews of all missions from Apollo 8 onward. The crew that flew the first Earth orbital test mission Apollo 7, Walter M. Schirra, Donn Eisele, and Walter Cunningham, were awarded the lesser NASA Exceptional Service Medal, because of discipline problems with the flight director's orders during their flight. In October 2008, the NASA Administrator decided to award them the Distinguished Service Medals. For Schirra and Eisele, this was posthumously.
Lunar mission profile
The first lunar landing mission was planned to proceed:
Profile variations
The first three lunar missions (Apollo 8, Apollo 10, and Apollo 11) used a free return trajectory, keeping a flight path coplanar with the lunar orbit, which would allow a return to Earth in case the SM engine failed to make lunar orbit insertion. Landing site lighting conditions on later missions dictated a lunar orbital plane change, which required a course change maneuver soon after TLI, and eliminated the free-return option.
After Apollo 12 placed the second of several seismometers on the Moon, the jettisoned LM ascent stages on Apollo 12 and later missions were deliberately crashed on the Moon at known locations to induce vibrations in the Moon's structure. The only exceptions to this were the Apollo 13 LM which burned up in the Earth's atmosphere, and Apollo 16, where a loss of attitude control after jettison prevented making a targeted impact.
As another active seismic experiment, the S-IVBs on Apollo 13 and subsequent missions were deliberately crashed on the Moon instead of being sent to solar orbit.
Starting with Apollo 13, descent orbit insertion was to be performed using the service module engine instead of the LM engine, in order to allow a greater fuel reserve for landing. This was actually done for the first time on Apollo 14, since the Apollo 13 mission was aborted before landing.
Development history
Uncrewed flight tests
Two Block I CSMs were launched from LC-34 on suborbital flights in 1966 with the Saturn IB. The first, AS-201 launched on February 26, reached an altitude of and splashed down downrange in the Atlantic Ocean. The second, AS-202 on August 25, reached altitude and was recovered downrange in the Pacific Ocean. These flights validated the service module engine and the command module heat shield.
A third Saturn IB test, AS-203 launched from pad 37, went into orbit to support design of the S-IVB upper stage restart capability needed for the Saturn V. It carried a nose cone instead of the Apollo spacecraft, and its payload was the unburned liquid hydrogen fuel, the behavior of which engineers measured with temperature and pressure sensors, and a TV camera. This flight occurred on July 5, before AS-202, which was delayed because of problems getting the Apollo spacecraft ready for flight.
Preparation for crewed flight
Two crewed orbital Block I CSM missions were planned: AS-204 and AS-205. The Block I crew positions were titled Command Pilot, Senior Pilot, and Pilot. The Senior Pilot would assume navigation duties, while the Pilot would function as a systems engineer. The astronauts would wear a modified version of the Gemini spacesuit.
After an uncrewed LM test flight AS-206, a crew would fly the first Block II CSM and LM in a dual mission known as AS-207/208, or AS-278 (each spacecraft would be launched on a separate Saturn IB). The Block II crew positions were titled Commander, Command Module Pilot, and Lunar Module Pilot. The astronauts would begin wearing a new Apollo A6L spacesuit, designed to accommodate lunar extravehicular activity (EVA). The traditional visor helmet was replaced with a clear "fishbowl" type for greater visibility, and the lunar surface EVA suit would include a water-cooled undergarment.
Deke Slayton, the grounded Mercury astronaut who became director of flight crew operations for the Gemini and Apollo programs, selected the first Apollo crew in January 1966, with Grissom as Command Pilot, White as Senior Pilot, and rookie Donn F. Eisele as Pilot. But Eisele dislocated his shoulder twice aboard the KC-135 weightlessness training aircraft, and had to undergo surgery on January 27. Slayton replaced him with Chaffee. NASA announced the final crew selection for AS-204 on March 21, 1966, with the backup crew consisting of Gemini veterans James McDivitt and David Scott, with rookie Russell L. "Rusty" Schweickart. Mercury/Gemini veteran Wally Schirra, Eisele, and rookie Walter Cunningham were announced on September 29 as the prime crew for AS-205.
In December 1966, the AS-205 mission was canceled, since the validation of the CSM would be accomplished on the 14-day first flight, and AS-205 would have been devoted to space experiments and contribute no new engineering knowledge about the spacecraft. Its Saturn IB was allocated to the dual mission, now redesignated AS-205/208 or AS-258, planned for August 1967. McDivitt, Scott and Schweickart were promoted to the prime AS-258 crew, and Schirra, Eisele and Cunningham were reassigned as the Apollo 1 backup crew.
Program delays
The spacecraft for the AS-202 and AS-204 missions were delivered by North American Aviation to the Kennedy Space Center with long lists of equipment problems which had to be corrected before flight; these delays caused the launch of AS-202 to slip behind AS-203, and eliminated hopes the first crewed mission might be ready to launch as soon as November 1966, concurrently with the last Gemini mission. Eventually, the planned AS-204 flight date was pushed to February 21, 1967.
North American Aviation was prime contractor not only for the Apollo CSM, but for the Saturn V S-II second stage as well, and delays in this stage pushed the first uncrewed Saturn V flight AS-501 from late 1966 to November 1967. (The initial assembly of AS-501 had to use a dummy spacer spool in place of the stage.)
The problems with North American were severe enough in late 1965 to cause Manned Space Flight Administrator George Mueller to appoint program director Samuel Phillips to head a "tiger team" to investigate North American's problems and identify corrections. Phillips documented his findings in a December 19 letter to NAA president Lee Atwood, with a strongly worded letter by Mueller, and also gave a presentation of the results to Mueller and Deputy Administrator Robert Seamans. Meanwhile, Grumman was also encountering problems with the Lunar Module, eliminating hopes it would be ready for crewed flight in 1967, not long after the first crewed CSM flights.
Apollo 1 fire
Grissom, White, and Chaffee decided to name their flight Apollo 1 as a motivational focus on the first crewed flight. They trained and conducted tests of their spacecraft at North American, and in the altitude chamber at the Kennedy Space Center. A "plugs-out" test was planned for January, which would simulate a launch countdown on LC-34 with the spacecraft transferring from pad-supplied to internal power. If successful, this would be followed by a more rigorous countdown simulation test closer to the February 21 launch, with both spacecraft and launch vehicle fueled.
The plugs-out test began on the morning of January 27, 1967, and immediately was plagued with problems. First, the crew noticed a strange odor in their spacesuits which delayed the sealing of the hatch. Then, communications problems frustrated the astronauts and forced a hold in the simulated countdown. During this hold, an electrical fire began in the cabin and spread quickly in the high pressure, 100% oxygen atmosphere. Pressure rose high enough from the fire that the cabin inner wall burst, allowing the fire to erupt onto the pad area and frustrating attempts to rescue the crew. The astronauts were asphyxiated before the hatch could be opened.
NASA immediately convened an accident review board, overseen by both houses of Congress. While the determination of responsibility for the accident was complex, the review board concluded that "deficiencies existed in command module design, workmanship and quality control". At the insistence of NASA Administrator Webb, North American removed Harrison Storms as command module program manager. Webb also reassigned Apollo Spacecraft Program Office (ASPO) Manager Joseph Francis Shea, replacing him with George Low.
To remedy the causes of the fire, changes were made in the Block II spacecraft and operational procedures, the most important of which were use of a nitrogen/oxygen mixture instead of pure oxygen before and during launch, and removal of flammable cabin and space suit materials. The Block II design already called for replacement of the Block I plug-type hatch cover with a quick-release, outward opening door. NASA discontinued the crewed Block I program, using the Block I spacecraft only for uncrewed Saturn V flights. Crew members would also exclusively wear modified, fire-resistant A7L Block II space suits, and would be designated by the Block II titles, regardless of whether a LM was present on the flight or not.
Uncrewed Saturn V and LM tests
On April 24, 1967, Mueller published an official Apollo mission numbering scheme, using sequential numbers for all flights, crewed or uncrewed. The sequence would start with Apollo 4 to cover the first three uncrewed flights while retiring the Apollo 1 designation to honor the crew, per their widows' wishes.
In September 1967, Mueller approved a sequence of mission types which had to be successfully accomplished in order to achieve the crewed lunar landing. Each step had to be successfully accomplished before the next ones could be performed, and it was unknown how many tries of each mission would be necessary; therefore letters were used instead of numbers. The A missions were uncrewed Saturn V validation; B was uncrewed LM validation using the Saturn IB; C was crewed CSM Earth orbit validation using the Saturn IB; D was the first crewed CSM/LM flight (this replaced AS-258, using a single Saturn V launch); E would be a higher Earth orbit CSM/LM flight; F would be the first lunar mission, testing the LM in lunar orbit but without landing (a "dress rehearsal"); and G would be the first crewed landing. The list of types covered follow-on lunar exploration to include H lunar landings, I for lunar orbital survey missions, and J for extended-stay lunar landings.
The delay in the CSM caused by the fire enabled NASA to catch up on human-rating the LM and Saturn V. Apollo 4 (AS-501) was the first uncrewed flight of the Saturn V, carrying a Block I CSM on November 9, 1967. The capability of the command module's heat shield to survive a trans-lunar reentry was demonstrated by using the service module engine to ram it into the atmosphere at higher than the usual Earth-orbital reentry speed.
Apollo 5 (AS-204) was the first uncrewed test flight of the LM in Earth orbit, launched from pad 37 on January 22, 1968, by the Saturn IB that would have been used for Apollo 1. The LM engines were successfully test-fired and restarted, despite a computer programming error which cut short the first descent stage firing. The ascent engine was fired in abort mode, known as a "fire-in-the-hole" test, where it was lit simultaneously with jettison of the descent stage. Although Grumman wanted a second uncrewed test, George Low decided the next LM flight would be crewed.
This was followed on April 4, 1968, by Apollo 6 (AS-502) which carried a CSM and a LM Test Article as ballast. The intent of this mission was to achieve trans-lunar injection, followed closely by a simulated direct-return abort, using the service module engine to achieve another high-speed reentry. The Saturn V experienced pogo oscillation, a problem caused by non-steady engine combustion, which damaged fuel lines in the second and third stages. Two S-II engines shut down prematurely, but the remaining engines were able to compensate. The damage to the third stage engine was more severe, preventing it from restarting for trans-lunar injection. Mission controllers were able to use the service module engine to essentially repeat the flight profile of Apollo 4. Based on the good performance of Apollo 6 and identification of satisfactory fixes to the Apollo 6 problems, NASA declared the Saturn V ready to fly crew, canceling a third uncrewed test.
Crewed development missions
Apollo 7, launched from LC-34 on October 11, 1968, was the C mission, crewed by Schirra, Eisele, and Cunningham. It was an 11-day Earth-orbital flight which tested the CSM systems.
Apollo 8 was planned to be the D mission in December 1968, crewed by McDivitt, Scott and Schweickart, launched on a Saturn V instead of two Saturn IBs. In the summer it had become clear that the LM would not be ready in time. Rather than waste the Saturn V on another simple Earth-orbiting mission, ASPO Manager George Low suggested the bold step of sending Apollo 8 to orbit the Moon instead, deferring the D mission to the next mission in March 1969, and eliminating the E mission. This would keep the program on track. The Soviet Union had sent two tortoises, mealworms, wine flies, and other lifeforms around the Moon on September 15, 1968, aboard Zond 5, and it was believed they might soon repeat the feat with human cosmonauts. The decision was not announced publicly until successful completion of Apollo 7. Gemini veterans Frank Borman and Jim Lovell, and rookie William Anders captured the world's attention by making ten lunar orbits in 20 hours, transmitting television pictures of the lunar surface on Christmas Eve, and returning safely to Earth.
The following March, LM flight, rendezvous and docking were successfully demonstrated in Earth orbit on Apollo 9, and Schweickart tested the full lunar EVA suit with its portable life support system (PLSS) outside the LM. The F mission was successfully carried out on Apollo 10 in May 1969 by Gemini veterans Thomas P. Stafford, John Young and Eugene Cernan. Stafford and Cernan took the LM to within of the lunar surface.
The G mission was achieved on Apollo 11 in July 1969 by an all-Gemini veteran crew consisting of Neil Armstrong, Michael Collins and Buzz Aldrin. Armstrong and Aldrin performed the first landing at the Sea of Tranquility at 20:17:40 UTC on July 20, 1969. They spent a total of 21 hours, 36 minutes on the surface, and spent 2 hours, 31 minutes outside the spacecraft, walking on the surface, taking photographs, collecting material samples, and deploying automated scientific instruments, while continuously sending black-and-white television back to Earth. The astronauts returned safely on July 24.
Production lunar landings
In November 1969, Charles "Pete" Conrad became the third person to step onto the Moon, which he did while speaking more informally than had Armstrong:
Conrad and rookie Alan L. Bean made a precision landing of Apollo 12 within walking distance of the Surveyor 3 uncrewed lunar probe, which had landed in April 1967 on the Ocean of Storms. The command module pilot was Gemini veteran Richard F. Gordon Jr. Conrad and Bean carried the first lunar surface color television camera, but it was damaged when accidentally pointed into the Sun. They made two EVAs totaling 7 hours and 45 minutes. On one, they walked to the Surveyor, photographed it, and removed some parts which they returned to Earth.
The contracted batch of 15 Saturn Vs was enough for lunar landing missions through Apollo 20. Shortly after Apollo 11, NASA publicized a preliminary list of eight more planned landing sites after Apollo 12, with plans to increase the mass of the CSM and LM for the last five missions, along with the payload capacity of the Saturn V. These final missions would combine the I and J types in the 1967 list, allowing the CMP to operate a package of lunar orbital sensors and cameras while his companions were on the surface, and allowing them to stay on the Moon for over three days. These missions would also carry the Lunar Roving Vehicle (LRV) increasing the exploration area and allowing televised liftoff of the LM. Also, the Block II spacesuit was revised for the extended missions to allow greater flexibility and visibility for driving the LRV.
The success of the first two landings allowed the remaining missions to be crewed with a single veteran as commander, with two rookies. Apollo 13 launched Lovell, Jack Swigert, and Fred Haise in April 1970, headed for the Fra Mauro formation. But two days out, a liquid oxygen tank exploded, disabling the service module and forcing the crew to use the LM as a "lifeboat" to return to Earth. Another NASA review board was convened to determine the cause, which turned out to be a combination of damage of the tank in the factory, and a subcontractor not making a tank component according to updated design specifications. Apollo was grounded again, for the remainder of 1970 while the oxygen tank was redesigned and an extra one was added.
Mission cutbacks
About the time of the first landing in 1969, it was decided to use an existing Saturn V to launch the Skylab orbital laboratory pre-built on the ground, replacing the original plan to construct it in orbit from several Saturn IB launches; this eliminated Apollo 20. NASA's yearly budget also began to shrink in light of the successful landing, and NASA also had to make funds available for the development of the upcoming Space Shuttle. By 1971, the decision was made to also cancel missions 18 and 19. The two unused Saturn Vs became museum exhibits at the John F. Kennedy Space Center on Merritt Island, Florida, George C. Marshall Space Flight Center in Huntsville, Alabama, Michoud Assembly Facility in New Orleans, Louisiana, and Lyndon B. Johnson Space Center in Houston, Texas.
The cutbacks forced mission planners to reassess the original planned landing sites in order to achieve the most effective geological sample and data collection from the remaining four missions. Apollo 15 had been planned to be the last of the H series missions, but since there would be only two subsequent missions left, it was changed to the first of three J missions.
Apollo 13's Fra Mauro mission was reassigned to Apollo 14, commanded in February 1971 by Mercury veteran Alan Shepard, with Stuart Roosa and Edgar Mitchell. This time the mission was successful. Shepard and Mitchell spent 33 hours and 31 minutes on the surface, and completed two EVAs totalling 9 hours 24 minutes, which was a record for the longest EVA by a lunar crew at the time.
In August 1971, just after conclusion of the Apollo 15 mission, President Richard Nixon proposed canceling the two remaining lunar landing missions, Apollo 16 and 17. Office of Management and Budget Deputy Director Caspar Weinberger was opposed to this, and persuaded Nixon to keep the remaining missions.
Extended missions
Apollo 15 was launched on July 26, 1971, with David Scott, Alfred Worden and James Irwin. Scott and Irwin landed on July 30 near Hadley Rille, and spent just under two days, 19 hours on the surface. In over 18 hours of EVA, they collected about of lunar material.
Apollo 16 landed in the Descartes Highlands on April 20, 1972. The crew was commanded by John Young, with Ken Mattingly and Charles Duke. Young and Duke spent just under three days on the surface, with a total of over 20 hours EVA.
Apollo 17 was the last of the Apollo program, landing in the Taurus–Littrow region in December 1972. Eugene Cernan commanded Ronald E. Evans and NASA's first scientist-astronaut, geologist Harrison H. Schmitt. Schmitt was originally scheduled for Apollo 18, but the lunar geological community lobbied for his inclusion on the final lunar landing. Cernan and Schmitt stayed on the surface for just over three days and spent just over 23 hours of total EVA.
Canceled missions
Several missions were planned for but were canceled before details were finalized.
Mission summary
Source: Apollo by the Numbers: A Statistical Reference (Orloff 2004).
Samples returned
The Apollo program returned over 380 kilograms (840 lb) of lunar rocks and soil to the Lunar Receiving Laboratory in Houston. Today, 75% of the samples are stored at the Lunar Sample Laboratory Facility built in 1979.
The rocks collected from the Moon are extremely old compared to rocks found on Earth, as measured by radiometric dating techniques. They range in age from about 3.2 billion years for the basaltic samples derived from the lunar maria, to about 4.6 billion years for samples derived from the highlands crust. As such, they represent samples from a very early period in the development of the Solar System, that are largely absent on Earth. One important rock found during the Apollo Program is dubbed the Genesis Rock, retrieved by astronauts David Scott and James Irwin during the Apollo 15 mission. This anorthosite rock is composed almost exclusively of the calcium-rich feldspar mineral anorthite, and is believed to be representative of the highland crust. A geochemical component called KREEP was discovered by Apollo 12, which has no known terrestrial counterpart. KREEP and the anorthositic samples have been used to infer that the outer portion of the Moon was once completely molten (see lunar magma ocean).
Almost all the rocks show evidence of impact process effects. Many samples appear to be pitted with micrometeoroid impact craters, which is never seen on Earth rocks because of Earth's thick atmosphere. Many show signs of being subjected to high-pressure shock waves that are generated during impact events. Some of the returned samples are of impact melt (materials melted near an impact crater). All samples returned from the Moon are highly brecciated as a result of being subjected to multiple impact events.
From analyses of the composition of the returned lunar samples, it is now believed that the Moon was created through the impact of a large astronomical body with Earth.
Costs
Apollo cost $25.4 billion, or approximately $257 billion in 2023 dollars using improved cost analysis.
Of this amount, $20.2 billion was spent on the design, development, and production of the Saturn family of launch vehicles, the Apollo spacecraft, spacesuits, scientific experiments, and mission operations. The cost of constructing and operating Apollo-related ground facilities, such as the NASA human spaceflight centers and the global tracking and data acquisition network, added an additional $5.2 billion.
The amount grows to $28 billion ($280 billion adjusted) if the costs for related projects such as Project Gemini and the robotic Ranger, Surveyor, and Lunar Orbiter programs are included.
NASA's official cost breakdown, as reported to Congress in the Spring of 1973, is as follows:
Accurate estimates of human spaceflight costs were difficult in the early 1960s, as the capability was new and management experience was lacking. Preliminary cost analysis by NASA estimated $7 billion – $12 billion for a crewed lunar landing effort. NASA Administrator James Webb increased this estimate to $20 billion before reporting it to Vice President Johnson in April 1961.
Project Apollo was a massive undertaking, representing the largest research and development project in peacetime. At its peak, it employed over 400,000 employees and contractors around the country and accounted for more than half of NASA's total spending in the 1960s. After the first Moon landing, public and political interest waned, including that of President Nixon, who wanted to rein in federal spending. NASA's budget could not sustain Apollo missions, which cost, on average, $445 million each, while simultaneously developing the Space Shuttle. The final fiscal year of Apollo funding was 1973.
Apollo Applications Program
Looking beyond the crewed lunar landings, NASA investigated several post-lunar applications for Apollo hardware. The Apollo Extension Series (Apollo X) proposed up to 30 flights to Earth orbit, using the space in the Spacecraft Lunar Module Adapter (SLA) to house a small orbital laboratory (workshop). Astronauts would continue to use the CSM as a ferry to the station. This study was followed by design of a larger orbital workshop to be built in orbit from an empty S-IVB Saturn upper stage and grew into the Apollo Applications Program (AAP). The workshop was to be supplemented by the Apollo Telescope Mount, which could be attached to the ascent stage of the lunar module via a rack. The most ambitious plan called for using an empty S-IVB as an interplanetary spacecraft for a Venus fly-by mission.
The S-IVB orbital workshop was the only one of these plans to make it off the drawing board. Dubbed Skylab, it was assembled on the ground rather than in space, and launched in 1973 using the two lower stages of a Saturn V. It was equipped with an Apollo Telescope Mount. Skylab's last crew departed the station on February 8, 1974, and the station itself re-entered the atmosphere in 1979 after development of the Space Shuttle was delayed too long to save it.
The Apollo–Soyuz program also used Apollo hardware for the first joint spaceflight between two nations, paving the way for future cooperation with other nations in the Space Shuttle and International Space Station programs.
Recent observations
In 2008, Japan Aerospace Exploration Agency's SELENE probe observed evidence of the halo surrounding the Apollo 15 Lunar Module blast crater while orbiting above the lunar surface.
Beginning in 2009, NASA's robotic Lunar Reconnaissance Orbiter, while orbiting above the Moon, photographed the remnants of the Apollo program left on the lunar surface, and each site where crewed Apollo flights landed. All of the U.S. flags left on the Moon during the Apollo missions were found to still be standing, with the exception of the one left during the Apollo 11 mission, which was blown over during that mission's lift-off from the lunar surface; the degree to which these flags retain their original colors remains unknown. The flags cannot be seen through a telescope from Earth.
In a November 16, 2009, editorial, The New York Times opined:
Legacy
Science and engineering
The Apollo program has been described as the greatest technological achievement in human history. Apollo stimulated many areas of technology, leading to over 1,800 spinoff products as of 2015, including advances in the development of cordless power tools, fireproof materials, heart monitors, solar panels, digital imaging, and the use of liquid methane as fuel. The flight computer design used in both the lunar and command modules was, along with the Polaris and Minuteman missile systems, the driving force behind early research into integrated circuits (ICs). By 1963, Apollo was using 60 percent of the United States' production of ICs. The crucial difference between the requirements of Apollo and the missile programs was Apollo's much greater need for reliability. While the Navy and Air Force could work around reliability problems by deploying more missiles, the political and financial cost of failure of an Apollo mission was unacceptably high.
Technologies and techniques required for Apollo were developed by Project Gemini. The Apollo project was enabled by NASA's adoption of new advances in semiconductor electronic technology, including metal–oxide–semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC).
Cultural impact
The crew of Apollo 8 sent the first live televised pictures of the Earth and the Moon back to Earth, and read from the creation story in the Book of Genesis, on Christmas Eve 1968. An estimated one-quarter of the population of the world saw—either live or delayed—the Christmas Eve transmission during the ninth orbit of the Moon, and an estimated one-fifth of the population of the world watched the live transmission of the Apollo 11 moonwalk.
The Apollo program also affected environmental activism in the 1970s due to photos taken by the astronauts. The most well known include Earthrise, taken by William Anders on Apollo 8, and The Blue Marble, taken by the Apollo 17 astronauts. The Blue Marble was released during a surge in environmentalism, and became a symbol of the environmental movement as a depiction of Earth's frailty, vulnerability, and isolation amid the vast expanse of space.
According to The Economist, Apollo succeeded in accomplishing President Kennedy's goal of taking on the Soviet Union in the Space Race by accomplishing a singular and significant achievement, to demonstrate the superiority of the free-market system. The publication noted the irony that in order to achieve the goal, the program required the organization of tremendous public resources within a vast, centralized government bureaucracy.
Apollo 11 broadcast data restoration project
Prior to Apollo 11's 40th anniversary in 2009, NASA searched for the original videotapes of the mission's live televised moonwalk. After an exhaustive three-year search, it was concluded that the tapes had probably been erased and reused. A new digitally remastered version of the best available broadcast television footage was released instead.
Depictions on film
Documentaries
Numerous documentary films cover the Apollo program and the Space Race, including:
Footprints on the Moon (1969)
Moonwalk One (1970)
The Greatest Adventure (1978)
For All Mankind (1989)
Moon Shot (1994 miniseries)
"Moon" from the BBC miniseries The Planets (1999)
Magnificent Desolation: Walking on the Moon 3D (2005)
The Wonder of It All (2007)
In the Shadow of the Moon (2007)
When We Left Earth: The NASA Missions (2008 miniseries)
Moon Machines (2008 miniseries)
James May on the Moon (2009)
NASA's Story (2009 miniseries)
Apollo 11 (2019)
Chasing the Moon (2019 miniseries)
Docudramas
Some missions have been dramatized:
Apollo 13 (1995)
Apollo 11 (1996)
From the Earth to the Moon (1998)
The Dish (2000)
Space Race (2005)
Moonshot (2009)
First Man (2018)
Fictional
The Apollo program has been the focus of several works of fiction, including:
Apollo 18 (2011), a horror film released to negative reviews.
Men in Black 3 (2012), a science-fiction comedy film in which Agent J, played by Will Smith, goes back to the Apollo 11 launch in 1969 to ensure that a global protection system is launched into space.
For All Mankind (2019), TV series depicting an alternate history in which the Soviet Union was the first country to successfully land a man on the Moon.
Indiana Jones and the Dial of Destiny (2023), fifth Indiana Jones film, in which Jürgen Voller, a NASA member and ex-Nazi involved with the Apollo program, wants to time travel. The New York City parade for the Apollo 11 crew is portrayed as a plot point.
See also
Apollo 11 in popular culture
Apollo Lunar Surface Experiments Package
Exploration of the Moon
Leslie Cantwell collection
List of artificial objects on the Moon
List of crewed spacecraft
List of missions to the Moon
Soviet crewed lunar programs
Stolen and missing Moon rocks
Artemis Program
Notes
References
Citations
Sources
Chaikin interviewed all the surviving astronauts and others who worked with the program.
Further reading
NASA Report JSC-09423, April 1975
Astronaut Mike Collins' autobiography of his experiences, including his flight aboard Apollo 11.
Although this book focuses on Apollo 13, it provides a wealth of background information on Apollo technology and procedures.
History of the Apollo program from Apollos 1–11, including many interviews with the Apollo astronauts.
Gleick, James, "Moon Fever" [review of Oliver Morton, The Moon: A History of the Future; Apollo's Muse: The Moon in the Age of Photography, an exhibition at the Metropolitan Museum of Art, New York City, July 3 – September 22, 2019; Douglas Brinkley, American Moonshot: John F. Kennedy and the Great Space Race; Brandon R. Brown, The Apollo Chronicles: Engineering America's First Moon Missions; Roger D. Launius, Reaching for the Moon: A Short History of the Space Race; Apollo 11, a documentary film directed by Todd Douglas Miller; and Michael Collins, Carrying the Fire: An Astronaut's Journeys (50th Anniversary Edition)], The New York Review of Books, vol. LXVI, no. 13 (15 August 2019), pp. 54–58.
Factual, from the standpoint of a flight controller during the Mercury, Gemini, and Apollo space programs.
Details the flight of Apollo 13.
Tells Grumman's story of building the lunar modules.
History of the crewed space program from 1 September 1960 to 5 January 1968.
Account of Deke Slayton's life as an astronaut and of his work as chief of the astronaut office, including selection of Apollo crews.
From origin to November 7, 1962
November 8, 1962 – September 30, 1964
October 1, 1964 – January 20, 1966
January 21, 1966 – July 13, 1974
The history of lunar exploration from a geologist's point of view.
External links
Apollo program history at NASA's Human Space Flight (HSF) website
The Apollo Program at the NASA History Program Office
The Apollo Program at the National Air and Space Museum
Apollo 35th Anniversary Interactive Feature at NASA (in Flash)
Lunar Mission Timeline at the Lunar and Planetary Institute
Apollo Collection, The University of Alabama in Huntsville Archives and Special Collections
NASA reports
Apollo Program Summary Report (PDF), NASA, JSC-09423, April 1975
NASA History Series Publications
Project Apollo Drawings and Technical Diagrams at the NASA History Program Office
The Apollo Lunar Surface Journal edited by Eric M. Jones and Ken Glover
The Apollo Flight Journal by W. David Woods, et al.
Multimedia
NASA Apollo Program images and videos
Apollo Image Archive at Arizona State University
Audio recording and transcript of President John F. Kennedy, NASA administrator James Webb, et al., discussing the Apollo agenda (White House Cabinet Room, November 21, 1962)
The Project Apollo Archive by Kipp Teague is a large repository of Apollo images, videos, and audio recordings
The Project Apollo Archive on Flickr
Apollo Image Atlas—almost 25,000 lunar images, Lunar and Planetary Institute
The short film The Time of Apollo (1975) is available for free viewing and download at the National Archives.
1960s in the United States
1970s in the United States
Articles containing video clips
Engineering projects
Exploration of the Moon
Human spaceflight programs
NASA programs
Space program of the United States | Apollo program | [
"Engineering"
] | 12,902 | [
"Space programs",
"Human spaceflight programs",
"nan"
] |
1,523 | https://en.wikipedia.org/wiki/Agate | Agate is a variety of chalcedony, which comes in a wide variety of colors. Agates are primarily formed within volcanic and metamorphic rocks. The ornamental use of agate was common in ancient Greece, in assorted jewelry and in the seal stones of Greek warriors, while bead necklaces with pierced and polished agate date back to the 3rd millennium BCE in the Indus Valley civilisation.
Etymology
The stone was given its name by Theophrastus, a Greek philosopher and naturalist, who discovered the stone along the shoreline of the Dirillo River, or Achates, in Sicily, sometime between the 4th and 3rd centuries BCE.
Formation and properties
Agate minerals have the tendency to form on or within existing rocks, creating difficulties in accurately determining their time of formation. Their host rocks have been dated to have formed as early as the Archean Eon. Agates are most commonly found as nodules within the cavities of volcanic rocks. These cavities are formed from the gases trapped within the liquid volcanic material, forming vesicles. Cavities are then filled in with silica-rich fluids from the volcanic material; layers are deposited on the walls of the cavity, slowly working their way inwards. The first layer deposited on the cavity walls is commonly known as the priming layer. Variations in the character of the solution or in the conditions of deposition may cause a corresponding variation in the successive layers. These variations in layers result in bands of chalcedony, often alternating with layers of crystalline quartz, forming banded agate. Hollow agates can also form due to the deposition of liquid-rich silica not penetrating deep enough to fill the cavity completely. Agate will form crystals within the reduced cavity, and the apex of each crystal may point towards the center of the cavity.
The priming layer is often dark green, but can be modified by iron oxide resulting in a rust like appearance. Agate is very durable, and is often found detached from its host matrix, which has eroded. Once removed, the outer surface is usually pitted and rough from filling the cavity of its former matrix. Agates have also been found in sedimentary rocks, normally in limestone or dolomite; these sedimentary rocks acquire cavities often from decomposed branches or other buried organic material. If silica-rich fluids are able to penetrate into these cavities agates can be formed.
Types
Lace agate is a variety that exhibits a lace-like pattern with forms such as eyes, swirls, bands or zigzags. Blue lace agate is found in Africa and is especially hard. Crazy lace agate, typically found in Mexico, is often brightly colored with a complex pattern, demonstrating randomized distribution of contour lines and circular droplets, scattered throughout the rock. The stone is typically coloured red and white but is also seen to exhibit yellow and grey combinations as well.
Moss agate, as the name suggests, exhibits a moss-like pattern and is of a greenish colour. The coloration is not created by any vegetative growth, but rather through the mixture of chalcedony and oxidized iron hornblende. Dendritic agate also displays vegetative features, including fern-like patterns formed due to the presence of manganese and iron oxides.
Turritella agate (Elimia tenera) is formed from the shells of fossilized freshwater Turritella gastropods with elongated spiral shells. Similarly, coral, petrified wood, porous rocks and other organic remains can also form agate.
Coldwater agates, such as the Lake Michigan cloud agate, did not form under volcanic processes, but instead formed within the limestone and dolomite strata of marine origin. Like volcanic-origin agates, Coldwater agates formed from silica gels that lined pockets and seams within the bedrock. These agates are typically less colorful, with banded lines of grey and white chalcedony.
Greek agate is a name given to pale white to tan colored agate found in the former Greek colony of Sicily as early as 400 BCE. The Greeks used it for making jewelry and beads.
Brazilian agate is found as sizable geodes of layered nodules. These occur in brownish tones inter-layered with white and gray. It is often dyed in various colors for ornamental purposes.
Polyhedroid agate forms in a flat-sided shape similar to a polyhedron. When sliced, it often shows a characteristic layering of concentric polygons. It has been suggested that growth is not crystallographically controlled but is due to the filling-in of spaces between pre-existing crystals which have since dissolved.
Iris agate is a finely banded and usually colorless agate that, when thinly sliced, exhibits spectral decomposition of white light into its constituent colors, requiring anywhere from 400 up to 30,000 bands per inch.
Other forms of agate include Holley blue agate (also spelled "Holly blue agate"), a rare dark blue ribbon agate found only near Holley, Oregon; Lake Superior agate; Carnelian agate (has reddish hues); Botswana agate; plume agate; condor agate; tube agate containing visible flow channels or pinhole-sized "tubes"; fortification agate with contrasting concentric banding reminiscent of defensive ditches and walls around ancient forts; Binghamite, a variety found only on the Cuyuna iron range (near Crosby) in Crow Wing County, Minnesota; fire agate showing an iridescent, internal flash or "fire", the result of a layer of clear agate over a layer of hydrothermally deposited hematite; Patuxent River stone, a red and yellow form of agate only found in Maryland; and enhydro agate, which contains tiny inclusions of water, sometimes with air bubbles.
Uses
Agate is one of the most common materials used in the art of hardstone carving, and has been recovered at a number of ancient sites, indicating its widespread use in the ancient world; for example, archaeological recovery at the Knossos site on Crete illustrates its role in Bronze Age Minoan culture. It has also been used for centuries for leather burnishing tools.
The decorative arts use it to make ornaments such as pins, brooches or other types of jewellery, paper knives, inkstands, marbles and seals. Agate is also still used today for decorative displays, cabochons, beads, carvings and Intarsia art as well as face-polished and tumble-polished specimens of varying size and origin. Idar-Oberstein was one of the centers which made use of agate on an industrial scale. Where in the beginning locally found agates were used to make all types of objects for the European market, this became a globalized business around the turn of the 20th century: Idar-Oberstein imported large quantities of agate from Brazil, as ship's ballast. Making use of a variety of proprietary chemical processes, they produced colored beads that were sold around the globe. Agates have long been used in arts and crafts. The sanctuary of a Presbyterian church in Yachats, Oregon, has six windows with panes made of agates collected from the local beaches.
Industrial uses of agate exploit its hardness, ability to retain a highly polished surface finish and resistance to chemical attack. It has traditionally been used to make knife-edge bearings for laboratory balances and precision pendulums, and sometimes to make mortars and pestles to crush and mix chemicals.
Health impact
Respiratory diseases such as silicosis, and a higher incidence of tuberculosis among workers involved in the agate industry, have been studied in India and China.
See also
Amber
Amethyst
Aqeeq
Aquamarine
Citrine
Diamond
Emerald
Garnet
Geode
Kyanite
Labradorite
Lithophysa
Moonstone
Opal
Peridot
Rose Quartz
Swiss Blue Topaz
Thunderegg
Tiger's Eye
Topaz
Tourmaline
Turquoise
Citations
General and cited references
Cross, Brad L. and Zeitner, June Culp. Geodes: Nature's Treasures. Bardwin Park, Calif.: Gem Guides Book Co. 2005.
Hart, Gilbert "The Nomenclature of Silica", American Mineralogist, Volume 12, pages 383–395, 1927
International Colored Gemstone Association, "Agate: banded beauty"
"Agate", Mindat.org, Hudson Institute of Mineralogy
Moxon, Terry. Agate: Microstructure and Possible Origin. Doncaster, S. Yorks, UK: Terra Publications, 1996.
Pabian, Roger, et al. Agates: Treasures of the Earth. Buffalo, New York: Firefly Books, 2006.
Schumann, Walter. Gemstones of the World. 3rd edition. New York: Sterling, 2006.
External links
"Agates", School of Natural Resources, University of Nebraska-Lincoln (retrieved 27 December 2014).
Gemstones
Hardstone carving
Silicate minerals
Symbols of Florida | Agate | [
"Physics"
] | 1,844 | [
"Materials",
"Gemstones",
"Matter"
] |
1,525 | https://en.wikipedia.org/wiki/Aspirin | Aspirin is the genericized trademark for acetylsalicylic acid (ASA), a nonsteroidal anti-inflammatory drug (NSAID) used to reduce pain, fever, and inflammation, and as an antithrombotic. Specific inflammatory conditions that aspirin is used to treat include Kawasaki disease, pericarditis, and rheumatic fever.
Aspirin is also used long-term to help prevent further heart attacks, ischaemic strokes, and blood clots in people at high risk. For pain or fever, effects typically begin within 30 minutes. Aspirin works similarly to other NSAIDs but also suppresses the normal functioning of platelets.
One common adverse effect is an upset stomach. More significant side effects include stomach ulcers, stomach bleeding, and worsening asthma. Bleeding risk is greater among those who are older, drink alcohol, take other NSAIDs, or are on other blood thinners. Aspirin is not recommended in the last part of pregnancy. It is not generally recommended in children with infections because of the risk of Reye syndrome. High doses may result in ringing in the ears.
A precursor to aspirin found in the bark of the willow tree (genus Salix) has been used for its health effects for at least 2,400 years. In 1853, chemist Charles Frédéric Gerhardt treated the medicine sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time. Over the next 50 years, other chemists, mostly of the German company Bayer, established the chemical structure and devised more efficient production methods. Felix Hoffmann (or Arthur Eichengrün) of Bayer was the first to produce acetylsalicylic acid in a pure, stable form in 1897. By 1899, Bayer had dubbed this drug Aspirin and was selling it globally.
Aspirin is available without medical prescription as a proprietary or generic medication in most jurisdictions. It is one of the most widely used medications globally, with an estimated 50 to 120 billion pills consumed each year, and is on the World Health Organization's List of Essential Medicines. In 2022, it was the 36th most commonly prescribed medication in the United States, with more than 16 million prescriptions.
Brand vs. generic name
In 1897, scientists at the Bayer company began studying acetylsalicylic acid as a less-irritating replacement medication for common salicylate medicines. By 1899, Bayer had named it "Aspirin" and was selling it around the world.
Aspirin's popularity grew over the first half of the 20th century, leading to competition between many brands and formulations. The word Aspirin was Bayer's brand name; however, its rights to the trademark were lost or sold in many countries. The name is ultimately a blend of the prefix a- (from acetyl), spir- (from Spiraea, the meadowsweet plant genus from which the acetylsalicylic acid was originally derived at Bayer), and -in, a common chemical suffix.
Chemical properties
Aspirin decomposes rapidly in solutions of ammonium acetate or the acetates, carbonates, citrates, or hydroxides of the alkali metals. It is stable in dry air, but gradually hydrolyses in contact with moisture to acetic and salicylic acids. In solution with alkalis, the hydrolysis proceeds rapidly and the clear solutions formed may consist entirely of acetate and salicylate.
Like flour mills, factories producing aspirin tablets must control the amount of the powder that becomes airborne inside the building, because the powder-air mixture can be explosive. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit in the United States of 5 mg/m3 (time-weighted average). In 1989, the Occupational Safety and Health Administration (OSHA) set a legal permissible exposure limit for aspirin of 5 mg/m3, but this was vacated by the AFL-CIO v. OSHA decision in 1993.
Synthesis
The synthesis of aspirin is classified as an esterification reaction. Salicylic acid is treated with acetic anhydride, an acid derivative, causing a chemical reaction that turns salicylic acid's hydroxyl group into an ester group (R-OH → R-OCOCH3). This process yields aspirin and acetic acid, which is considered a byproduct of this reaction. Small amounts of sulfuric acid (and occasionally phosphoric acid) are almost always used as a catalyst. This method is commonly demonstrated in undergraduate teaching labs.
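Written out as a balanced overall equation, the acetic anhydride route described above amounts to the following (the sulfuric acid over the arrow indicates the catalyst rather than a consumed reactant):

\[
\underset{\text{salicylic acid}}{\mathrm{C_7H_6O_3}} \;+\; \underset{\text{acetic anhydride}}{\mathrm{(CH_3CO)_2O}} \;\xrightarrow{\ \mathrm{H_2SO_4}\ }\; \underset{\text{acetylsalicylic acid}}{\mathrm{C_9H_8O_4}} \;+\; \underset{\text{acetic acid}}{\mathrm{CH_3COOH}}
\]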
A reaction between acetic acid and salicylic acid can also form aspirin, but this esterification is reversible and the presence of water can lead to hydrolysis of the aspirin, so an anhydrous reagent is preferred.
Reaction mechanism
Formulations containing high concentrations of aspirin often smell like vinegar because aspirin can decompose through hydrolysis in moist conditions, yielding salicylic and acetic acids.
Physical properties
Aspirin, an acetyl derivative of salicylic acid, is a white, crystalline, weakly acidic substance that melts at about 136 °C (277 °F) and decomposes at around 140 °C (284 °F). Its acid dissociation constant (pKa) is 3.5 at 25 °C (77 °F).
Polymorphism
Polymorphism, or the ability of a substance to form more than one crystal structure, is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph.
Until 2005, there was only one proven polymorph of aspirin (Form I), though the existence of another polymorph was debated since the 1960s, and one report from 1981 reported that when crystallized in the presence of aspirin anhydride, the diffractogram of aspirin has weak additional peaks. Though at the time it was dismissed as mere impurity, it was, in retrospect, Form II aspirin.
Form II was reported in 2005, found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile.
In form I, pairs of aspirin molecules form centrosymmetric dimers through the acetyl groups with the (acidic) methyl proton to carbonyl hydrogen bonds. In form II, each aspirin molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures. The aspirin polymorphs contain identical 2-dimensional sections and are therefore more precisely described as polytypes.
Pure Form II aspirin could be prepared by seeding the batch with aspirin anhydrate at 15% by weight.
Form III was reported in 2015 by compressing form I above 2 GPa, but it reverts back to Form I when pressure is removed. Form IV was reported in 2017. It is stable at ambient conditions.
Mechanism of action
Discovery of the mechanism
In 1971, British pharmacologist John Robert Vane, then employed by the Royal College of Surgeons in London, showed aspirin suppressed the production of prostaglandins and thromboxanes. For this discovery he was awarded the 1982 Nobel Prize in Physiology or Medicine, jointly with Sune Bergström and Bengt Ingemar Samuelsson.
Prostaglandins and thromboxanes
Aspirin's ability to suppress the production of prostaglandins and thromboxanes is due to its irreversible inactivation of the cyclooxygenase (COX; officially known as prostaglandin-endoperoxide synthase, PTGS) enzyme required for prostaglandin and thromboxane synthesis. Aspirin acts as an acetylating agent where an acetyl group is covalently attached to a serine residue in the active site of the COX enzyme (Suicide inhibition). This makes aspirin different from other NSAIDs (such as diclofenac and ibuprofen), which are reversible inhibitors.
Low-dose aspirin use irreversibly blocks the formation of thromboxane A2 in platelets, producing an inhibitory effect on platelet aggregation during the lifetime of the affected platelet (8–9 days). This antithrombotic property makes aspirin useful for reducing the incidence of heart attacks in people who have had a heart attack, unstable angina, ischemic stroke or transient ischemic attack. 40mg of aspirin a day is able to inhibit a large proportion of maximum thromboxane A2 release provoked acutely, with the prostaglandin I2 synthesis being little affected; however, higher doses of aspirin are required to attain further inhibition.
Prostaglandins, local hormones produced in the body, have diverse effects, including the transmission of pain information to the brain, modulation of the hypothalamic thermostat, and inflammation. Thromboxanes are responsible for the aggregation of platelets that form blood clots. Heart attacks are caused primarily by blood clots, and low doses of aspirin are seen as an effective medical intervention to prevent a second acute myocardial infarction.
COX-1 and COX-2 inhibition
At least two different types of cyclooxygenases, COX-1 and COX-2, are acted on by aspirin. Aspirin irreversibly inhibits COX-1 and modifies the enzymatic activity of COX-2. COX-2 normally produces prostanoids, most of which are proinflammatory. Aspirin-modified COX-2 (aka prostaglandin-endoperoxide synthase 2 or PTGS2) produces epi-lipoxins, most of which are anti-inflammatory. Newer NSAID drugs, COX-2 inhibitors (coxibs), have been developed to inhibit only COX-2, with the intent to reduce the incidence of gastrointestinal side effects.
Several COX-2 inhibitors, such as rofecoxib (Vioxx), have been withdrawn from the market, after evidence emerged that COX-2 inhibitors increase the risk of heart attack and stroke. Endothelial cells lining the microvasculature in the body are proposed to express COX-2, and, by selectively inhibiting COX-2, prostaglandin production (specifically, PGI2; prostacyclin) is downregulated with respect to thromboxane levels, as COX-1 in platelets is unaffected. Thus, the protective anticoagulative effect of PGI2 is removed, increasing the risk of thrombus and associated heart attacks and other circulatory problems. Since platelets have no DNA, they are unable to synthesize new COX-1 once aspirin has irreversibly inhibited the enzyme, an important difference as compared with reversible inhibitors.
Furthermore, aspirin, while inhibiting the ability of COX-2 to form pro-inflammatory products such as the prostaglandins, converts this enzyme's activity from a prostaglandin-forming cyclooxygenase to a lipoxygenase-like enzyme: aspirin-treated COX-2 metabolizes a variety of polyunsaturated fatty acids to hydroperoxy products which are then further metabolized to specialized proresolving mediators such as the aspirin-triggered lipoxins(15-epilipoxin-A4/B4), aspirin-triggered resolvins, and aspirin-triggered maresins. These mediators possess potent anti-inflammatory activity. It is proposed that this aspirin-triggered transition of COX-2 from cyclooxygenase to lipoxygenase activity and the consequential formation of specialized proresolving mediators contributes to the anti-inflammatory effects of aspirin.
Additional mechanisms
Aspirin has been shown to have at least three additional modes of action. It uncouples oxidative phosphorylation in cartilaginous (and hepatic) mitochondria, by diffusing from the inner membrane space as a proton carrier back into the mitochondrial matrix, where it ionizes once again to release protons. Aspirin buffers and transports the protons. When high doses are given, it may actually cause fever, owing to the heat released from the electron transport chain, as opposed to the antipyretic action of aspirin seen with lower doses. In addition, aspirin induces the formation of NO-radicals in the body, which have been shown in mice to have an independent mechanism of reducing inflammation. This reduced leukocyte adhesion is an important step in the immune response to infection; however, evidence is insufficient to show aspirin helps to fight infection. More recent data also suggest salicylic acid and its derivatives modulate signalling through NF-κB. NF-κB, a transcription factor complex, plays a central role in many biological processes, including inflammation.
Aspirin is readily broken down in the body to salicylic acid, which itself has anti-inflammatory, antipyretic, and analgesic effects. In 2012, salicylic acid was found to activate AMP-activated protein kinase, which has been suggested as a possible explanation for some of the effects of both salicylic acid and aspirin. The acetyl portion of the aspirin molecule has its own targets. Acetylation of cellular proteins is a well-established phenomenon in the regulation of protein function at the post-translational level. Aspirin is able to acetylate several other targets in addition to COX isoenzymes. These acetylation reactions may explain many hitherto unexplained effects of aspirin.
Formulations
Aspirin is produced in many formulations, with some differences in effect. In particular, aspirin can cause gastrointestinal bleeding, and formulations are sought which deliver the benefits of aspirin while mitigating harmful bleeding. Formulations may be combined (e.g., buffered + vitamin C).
Tablets, typically of about 75–100 mg and 300–320 mg of immediate-release aspirin (IR-ASA).
Dispersible tablets.
Enteric-coated tablets.
Buffered formulations containing aspirin with one of many buffering agents.
Formulations of aspirin with vitamin C (ASA-VitC)
A phospholipid-aspirin complex liquid formulation, PL-ASA; the phospholipid coating was being trialled to determine if it caused less gastrointestinal damage.
Pharmacokinetics
Acetylsalicylic acid is a weak acid, and very little of it is ionized in the stomach after oral administration. Acetylsalicylic acid is quickly absorbed through the cell membrane in the acidic conditions of the stomach. The increased pH and larger surface area of the small intestine causes aspirin to be absorbed more slowly there, as more of it is ionized. Owing to the formation of concretions, aspirin is absorbed much more slowly during overdose, and plasma concentrations can continue to rise for up to 24 hours after ingestion.
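A rough sketch of why so little of the drug is ionized at gastric pH can be made with the Henderson–Hasselbalch relation, using the pKa of 3.5 quoted earlier; the pH values assumed here (about 2 for the stomach and about 6 for the small intestine) are illustrative figures, not values given in the text:

\[
\frac{[\mathrm{A^-}]}{[\mathrm{HA}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_a}
\]

At pH ≈ 2 this ratio is \(10^{2-3.5} \approx 0.03\), so only about 3% of the aspirin is ionized and most of it can diffuse across the gastric mucosa; at pH ≈ 6 the ratio is \(10^{6-3.5} \approx 300\), so over 99% is ionized, consistent with slower absorption in the small intestine despite its larger surface area.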
About 50–80% of salicylate in the blood is bound to human serum albumin, while the rest remains in the active, ionized state; protein binding is concentration-dependent. Saturation of binding sites leads to more free salicylate and increased toxicity. The volume of distribution is 0.1–0.2 L/kg. Acidosis increases the volume of distribution because of enhancement of tissue penetration of salicylates.
As much as 80% of therapeutic doses of salicylic acid is metabolized in the liver. Conjugation with glycine forms salicyluric acid, and with glucuronic acid to form two different glucuronide esters. The conjugate with the acetyl group intact is referred to as the acyl glucuronide; the deacetylated conjugate is the phenolic glucuronide. These metabolic pathways have only a limited capacity. Small amounts of salicylic acid are also hydroxylated to gentisic acid. With large salicylate doses, the kinetics switch from first-order to zero-order, as metabolic pathways become saturated and renal excretion becomes increasingly important.
Salicylates are excreted mainly by the kidneys as salicyluric acid (75%), free salicylic acid (10%), salicylic phenol (10%), and acyl glucuronides (5%), gentisic acid (< 1%), and 2,3-dihydroxybenzoic acid. When small doses (less than 250mg in an adult) are ingested, all pathways proceed by first-order kinetics, with an elimination half-life of about 2.0 h to 4.5 h. When higher doses of salicylate are ingested (more than 4 g), the half-life becomes much longer (15 h to 30 h), because the biotransformation pathways concerned with the formation of salicyluric acid and salicyl phenolic glucuronide become saturated. Renal excretion of salicylic acid becomes increasingly important as the metabolic pathways become saturated, because it is extremely sensitive to changes in urinary pH. A 10- to 20-fold increase in renal clearance occurs when urine pH is increased from 5 to 8. The use of urinary alkalinization exploits this particular aspect of salicylate elimination. It was found that short-term aspirin use in therapeutic doses might precipitate reversible acute kidney injury when the patient was ill with glomerulonephritis or cirrhosis. Aspirin for some patients with chronic kidney disease and some children with congestive heart failure was contraindicated.
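A brief sketch may clarify the switch from first-order to zero-order kinetics described above; the 3-hour half-life used here is simply a representative value taken from the 2.0 h to 4.5 h range quoted, not a measured constant. Under first-order elimination a fixed fraction of the remaining salicylate is cleared per unit time:

\[
\frac{dC}{dt} = -kC \;\Rightarrow\; C(t) = C_0 e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}}
\]

so for \(t_{1/2} = 3\ \mathrm{h}\), \(k = \ln 2 / 3\ \mathrm{h} \approx 0.23\ \mathrm{h^{-1}}\). Once the conjugation pathways saturate, elimination approaches zero order,

\[
\frac{dC}{dt} \approx -k_0,
\]

meaning a roughly constant amount is removed per hour; the plasma concentration then falls linearly and the apparent half-life lengthens with dose, which is consistent with the 15 h to 30 h half-lives quoted for large ingestions.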
History
Medicines made from willow and other salicylate-rich plants appear in clay tablets from ancient Sumer as well as the Ebers Papyrus from ancient Egypt. Hippocrates referred to the use of salicylic tea to reduce fevers around 400 BC, and willow bark preparations were part of the pharmacopoeia of Western medicine in classical antiquity and the Middle Ages. Willow bark extract became recognized for its specific effects on fever, pain, and inflammation in the mid-eighteenth century after the Rev Edward Stone of Chipping Norton, Oxfordshire, noticed that the bitter taste of willow bark resembled the taste of the bark of the cinchona tree, known as "Peruvian bark", which was used successfully in Peru to treat a variety of ailments. Stone experimented with preparations of powdered willow bark on people in Chipping Norton for five years and found it to be as effective as Peruvian bark and a cheaper domestic version. In 1763 he sent a report of his findings to the Royal Society in London. By the nineteenth century, pharmacists were experimenting with and prescribing a variety of chemicals related to salicylic acid, the active component of willow extract.
In 1853, chemist Charles Frédéric Gerhardt treated sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time; in the second half of the 19th century, other academic chemists established the compound's chemical structure and devised more efficient methods of synthesis. In 1897, scientists at the drug and dye firm Bayer began investigating acetylsalicylic acid as a less-irritating replacement for standard common salicylate medicines, and identified a new way to synthesize it. That year, Felix Hoffmann (or Arthur Eichengrün) of Bayer was the first to produce acetylsalicylic acid in a pure, stable form. By 1899, Bayer had dubbed this drug Aspirin and was selling it globally. The word Aspirin was Bayer's brand name, rather than the generic name of the drug; however, Bayer's rights to the trademark were lost or sold in many countries. Aspirin's popularity grew over the first half of the 20th century leading to fierce competition with the proliferation of aspirin brands and products.
Aspirin's popularity declined after the development of acetaminophen/paracetamol in 1956 and ibuprofen in 1962. In the 1960s and 1970s, John Vane and others discovered the basic mechanism of aspirin's effects, while clinical trials and other studies from the 1960s to the 1980s established aspirin's efficacy as an anti-clotting agent that reduces the risk of clotting diseases. The initial large studies on the use of low-dose aspirin to prevent heart attacks that were published in the 1970s and 1980s helped spur reform in clinical research ethics and guidelines for human subject research and US federal law, and are often cited as examples of clinical trials that included only men, but from which people drew general conclusions that did not hold true for women.
Aspirin sales revived considerably in the last decades of the 20th century, and remain strong in the 21st century with widespread use as a preventive treatment for heart attacks and strokes.
Trademark
Bayer lost its trademark for Aspirin in the United States and some other countries in actions taken between 1918 and 1921 because it had failed to use the name for its own product correctly and had for years allowed the use of "Aspirin" by other manufacturers without defending the intellectual property rights. Today, aspirin is a generic trademark in many countries. Aspirin, with a capital "A", remains a registered trademark of Bayer in Germany, Canada, Mexico, and in over 80 other countries, for acetylsalicylic acid in all markets, but using different packaging and physical aspects for each.
Compendial status
United States Pharmacopeia
British Pharmacopoeia
Medical use
Aspirin is used in the treatment of a number of conditions, including fever, pain, rheumatic fever, and inflammatory conditions, such as rheumatoid arthritis, pericarditis, and Kawasaki disease. Lower doses of aspirin have also been shown to reduce the risk of death from a heart attack, or the risk of stroke in people who are at high risk or who have cardiovascular disease, but not in elderly people who are otherwise healthy. There is evidence that aspirin is effective at preventing colorectal cancer, though the mechanisms of this effect are unclear.
Pain
Aspirin is an effective analgesic for acute pain, although it is generally considered inferior to ibuprofen because aspirin is more likely to cause gastrointestinal bleeding. Aspirin is generally ineffective for those pains caused by muscle cramps, bloating, gastric distension, or acute skin irritation. As with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone. Effervescent formulations of aspirin relieve pain faster than aspirin in tablets, which makes them useful for the treatment of migraines. Topical aspirin may be effective for treating some types of neuropathic pain.
Aspirin, either by itself or in a combined formulation, effectively treats certain types of a headache, but its efficacy may be questionable for others. Secondary headaches, meaning those caused by another disorder or trauma, should be promptly treated by a medical provider. Among primary headaches, the International Classification of Headache Disorders distinguishes between tension headache (the most common), migraine, and cluster headache. Aspirin or other over-the-counter analgesics are widely recognized as effective for the treatment of tension headaches. Aspirin, especially as a component of an aspirin/paracetamol/caffeine combination, is considered a first-line therapy in the treatment of migraine, and comparable to lower doses of sumatriptan. It is most effective at stopping migraines when they are first beginning.
Fever
Like its ability to control pain, aspirin's ability to control fever is due to its action on the prostaglandin system through its irreversible inhibition of COX. Although aspirin's use as an antipyretic in adults is well established, many medical societies and regulatory agencies, including the American Academy of Family Physicians, the American Academy of Pediatrics, and the Food and Drug Administration, strongly advise against using aspirin for the treatment of fever in children because of the risk of Reye's syndrome, a rare but often fatal illness associated with the use of aspirin or other salicylates in children during episodes of viral or bacterial infection. Because of the risk of Reye's syndrome in children, in 1986, the US Food and Drug Administration (FDA) required labeling on all aspirin-containing medications advising against its use in children and teenagers.
Inflammation
Aspirin is used as an anti-inflammatory agent for both acute and long-term inflammation, as well as for the treatment of inflammatory diseases, such as rheumatoid arthritis.
Heart attacks and strokes
Aspirin is an important part of the treatment of those who have had a heart attack. It is generally not recommended for routine use by people with no other health problems, including those over the age of 70.
The 2009 Antithrombotic Trialists' Collaboration published in Lancet evaluated the efficacy and safety of low dose aspirin in secondary prevention. In those with prior ischaemic stroke or acute myocardial infarction, daily low dose aspirin was associated with a 19% relative risk reduction of serious cardiovascular events (non-fatal myocardial infarction, non-fatal stroke, or vascular death). This did come at the expense of a 0.19% absolute risk increase in gastrointestinal bleeding; however, the benefits outweigh the hazard risk in this case. Data from previous trials have suggested that weight-based dosing of aspirin has greater benefits in primary prevention of cardiovascular outcomes. However, more recent trials were not able to replicate similar outcomes using low dose aspirin in low body weight (<70 kg) in specific subset of population studied i.e. elderly and diabetic population, and more evidence is required to study the effect of high dose aspirin in high body weight (≥70 kg).
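As a back-of-the-envelope illustration of what the absolute figure quoted above means in patient numbers (this is simple arithmetic on the quoted 0.19% value, not an additional trial result):

\[
\mathrm{NNH} = \frac{1}{\mathrm{ARI}} = \frac{1}{0.0019} \approx 526
\]

that is, roughly one additional gastrointestinal bleed for every 500-odd people treated. The 19% figure, by contrast, is a relative risk reduction and cannot be converted into a number needed to treat without knowing the baseline absolute risk of serious vascular events.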
After percutaneous coronary interventions (PCIs), such as the placement of a coronary artery stent, a U.S. Agency for Healthcare Research and Quality guideline recommends that aspirin be taken indefinitely. Frequently, aspirin is combined with an ADP receptor inhibitor, such as clopidogrel, prasugrel, or ticagrelor to prevent blood clots. This is called dual antiplatelet therapy (DAPT). Duration of DAPT was advised in the United States and European Union guidelines after the CURE and PRODIGY studies. In 2020, the systematic review and network meta-analysis from Khan et al. showed promising benefits of short-term (< 6 months) DAPT followed by P2Y12 inhibitors in selected patients, as well as the benefits of extended-term (> 12 months) DAPT in high risk patients. In conclusion, the optimal duration of DAPT after PCIs should be personalized after outweighing each patient's risks of ischemic events and risks of bleeding events with consideration of multiple patient-related and procedure-related factors. Moreover, aspirin should be continued indefinitely after DAPT is complete.
Guidance on the use of aspirin for primary prevention of cardiovascular disease is conflicting and has changed over time: it was widely recommended decades ago, but newer trials referenced in clinical guidelines show less benefit from adding aspirin alongside other anti-hypertensive and cholesterol-lowering therapies. The ASCEND study demonstrated that in high-bleeding-risk diabetics with no prior cardiovascular disease, there is no overall clinical benefit (12% decrease in risk of ischaemic events versus 29% increase in GI bleeding) of low-dose aspirin in preventing serious vascular events over a period of 7.4 years. Similarly, the results of the ARRIVE study showed no benefit of the same dose of aspirin in reducing the time to first cardiovascular outcome in patients with moderate risk of cardiovascular disease over a period of five years. Aspirin has also been suggested as a component of a polypill for prevention of cardiovascular disease. Complicating the use of aspirin for prevention is the phenomenon of aspirin resistance. For people who are resistant, aspirin's efficacy is reduced. Some authors have suggested testing regimens to identify people who are resistant to aspirin.
As of 2022, the United States Preventive Services Task Force (USPSTF) determined that there was a "small net benefit" for patients aged 40–59 with a 10% or greater 10-year cardiovascular disease (CVD) risk, and "no net benefit" for patients aged over 60. Determining the net benefit was based on balancing the risk reduction of taking aspirin for heart attacks and ischaemic strokes with the increased risk of gastrointestinal bleeding, intracranial bleeding, and hemorrhagic strokes. Their recommendations state that age changes the risk of the medicine, with the magnitude of the benefit of aspirin coming from starting at a younger age, while the risk of bleeding, while small, increases with age, particularly for adults over 60, and can be compounded by other risk factors such as diabetes and a history of gastrointestinal bleeding. As a result, the USPSTF suggests that "people ages 40 to 59 who are at higher risk for CVD should decide with their clinician whether to start taking aspirin; people 60 or older should not start taking aspirin to prevent a first heart attack or stroke." Primary prevention guidelines from 2019 made by the American College of Cardiology and the American Heart Association state they might consider aspirin for patients aged 40–69 with a higher risk of atherosclerotic CVD, without an increased bleeding risk, while stating they would not recommend aspirin for patients aged over 70 or adults of any age with an increased bleeding risk. They state a CVD risk estimation and a risk discussion should be done before starting on aspirin, while stating aspirin should be used "infrequently in the routine primary prevention of (atherosclerotic CVD) because of lack of net benefit". As of 2021, the European Society of Cardiology made similar recommendations, considering aspirin specifically for patients aged less than 70 at high or very high CVD risk, without any clear contraindications, on a case-by-case basis considering both ischemic risk and bleeding risk.
Cancer prevention
Aspirin may reduce the overall risk of both getting cancer and dying from cancer. There is substantial evidence for lowering the risk of colorectal cancer (CRC), but aspirin must be taken for at least 10–20 years to see this benefit. It may also slightly reduce the risk of endometrial cancer and prostate cancer.
Some conclude the benefits are greater than the risks due to bleeding in those at average risk. Others are unclear if the benefits are greater than the risk. Given this uncertainty, the 2007 United States Preventive Services Task Force (USPSTF) guidelines on this topic recommended against the use of aspirin for prevention of CRC in people with average risk. Nine years later however, the USPSTF issued a grade B recommendation for the use of low-dose aspirin (75 to 100mg/day) "for the primary prevention of CVD [cardiovascular disease] and CRC in adults 50 to 59 years of age who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years".
A meta-analysis through 2019 said that there was an association between taking aspirin and lower risk of cancer of the colorectum, esophagus, and stomach.
In 2021, the U.S. Preventive services Task Force raised questions about the use of aspirin in cancer prevention. It notes the results of the 2018 ASPREE (Aspirin in Reducing Events in the Elderly) Trial, in which the risk of cancer-related death was higher in the aspirin-treated group than in the placebo group.
Psychiatry
Bipolar disorder
Aspirin, along with several other agents with anti-inflammatory properties, has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder in light of the possible role of inflammation in the pathogenesis of severe mental disorders. A 2022 systematic review concluded that aspirin exposure reduced the risk of depression in a pooled cohort of three studies (HR 0.624, 95% CI: 0.0503, 1.198, P=0.033). However, further high-quality, longer-duration, double-blind randomized controlled trials (RCTs) are needed to determine whether aspirin is an effective add-on treatment for bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain.
Dementia
Although cohort and longitudinal studies have shown low-dose aspirin has a greater likelihood of reducing the incidence of dementia, numerous randomized controlled trials have not validated this.
Schizophrenia
Some researchers have speculated the anti-inflammatory effects of aspirin may be beneficial for schizophrenia. Small trials have been conducted but evidence remains lacking.
Other uses
Aspirin is a first-line treatment for the fever and joint-pain symptoms of acute rheumatic fever. The therapy often lasts for one to two weeks, and is rarely indicated for longer periods. After fever and pain have subsided, the aspirin is no longer necessary, since it does not decrease the incidence of heart complications and residual rheumatic heart disease. Naproxen has been shown to be as effective as aspirin and less toxic, but due to the limited clinical experience, naproxen is recommended only as a second-line treatment.
Along with rheumatic fever, Kawasaki disease remains one of the few indications for aspirin use in children in spite of a lack of high quality evidence for its effectiveness.
Low-dose aspirin supplementation has moderate benefits when used for prevention of pre-eclampsia. This benefit is greater when started in early pregnancy.
Aspirin has also demonstrated anti-tumoral effects, via inhibition of the PTTG1 gene, which is often overexpressed in tumors.
Resistance
For some people, aspirin does not have as strong an effect on platelets as for others, an effect known as aspirin-resistance or insensitivity. One study has suggested women are more likely to be resistant than men, and a different, aggregate study of 2,930 people found 28% were resistant.
A study in 100 Italian people found, of the apparent 31% aspirin-resistant subjects, only 5% were truly resistant, and the others were noncompliant.
Another study of 400 healthy volunteers found no subjects who were truly resistant, but some had "pseudoresistance, reflecting delayed and reduced drug absorption".
Meta-analysis and systematic reviews have concluded that laboratory confirmed aspirin resistance confers increased rates of poorer outcomes in cardiovascular and neurovascular diseases. Although the majority of research conducted has surrounded cardiovascular and neurovascular, there is emerging research into the risk of aspirin resistance after orthopaedic surgery where aspirin is used for venous thromboembolism prophylaxis. Aspirin resistance in orthopaedic surgery, specifically after total hip and knee arthroplasties, is of interest as risk factors for aspirin resistance are also risk factors for venous thromboembolisms and osteoarthritis; the sequelae of requiring a total hip or knee arthroplasty. Some of these risk factors include obesity, advancing age, diabetes mellitus, dyslipidemia and inflammatory diseases.
Dosages
Adult aspirin tablets are produced in standardised sizes, which vary slightly from country to country, for example 300mg in Britain and 325mg in the United States. Smaller doses are based on these standards, e.g., 75mg and 81mg tablets. The 81mg tablets are commonly called "baby aspirin" or "baby-strength", because they were originally, but are no longer, intended to be administered to infants and children. The slight difference in dosage between the 75mg and the 81mg tablets has no medical significance. The dose required for benefit appears to depend on a person's weight: for people of lower body weight, low-dose aspirin is effective for preventing cardiovascular disease, whereas heavier patients require higher doses.
In general, for adults, doses are taken four times a day for fever or arthritis, with doses near the maximal daily dose used historically for the treatment of rheumatic fever. For the prevention of myocardial infarction (MI) in someone with documented or suspected coronary artery disease, much lower doses are taken once daily.
March 2009 recommendations from the USPSTF on the use of aspirin for the primary prevention of coronary heart disease encourage men aged 45–79 and women aged 55–79 to use aspirin when the potential benefit of a reduction in MI for men or stroke for women outweighs the potential harm of an increase in gastrointestinal hemorrhage. The WHI study of postmenopausal women found that aspirin resulted in a 25% lower risk of death from cardiovascular disease and a 14% lower risk of death from any cause, though there was no significant difference between 81mg and 325mg aspirin doses. The 2021 ADAPTABLE study also showed no significant difference in cardiovascular events or major bleeding between 81mg and 325mg doses of aspirin in patients (both men and women) with established cardiovascular disease.
Low-dose aspirin use was also associated with a trend toward lower risk of cardiovascular events, and lower aspirin doses (75 or 81mg/day) may optimize efficacy and safety for people requiring aspirin for long-term prevention.
In children with Kawasaki disease, aspirin is taken at dosages based on body weight, initially four times a day for up to two weeks and then at a lower dose once daily for a further six to eight weeks.
Adverse effects
In October 2020, the US Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. They recommend avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy. One exception to the recommendation is the use of low-dose 81mg aspirin at any point in pregnancy under the direction of a health care professional.
Contraindications
Aspirin should not be taken by people who are allergic to ibuprofen or naproxen, or who have salicylate intolerance or a more generalized drug intolerance to NSAIDs, and caution should be exercised in those with asthma or NSAID-precipitated bronchospasm. Owing to its effect on the stomach lining, manufacturers recommend people with peptic ulcers, mild diabetes, or gastritis seek medical advice before using aspirin. Even if none of these conditions is present, the risk of stomach bleeding is still increased when aspirin is taken with alcohol or warfarin. People with hemophilia or other bleeding tendencies should not take aspirin or other salicylates. Aspirin is known to cause hemolytic anemia in people who have the genetic disease glucose-6-phosphate dehydrogenase deficiency, particularly in large doses and depending on the severity of the disease. Use of aspirin during dengue fever is not recommended owing to increased bleeding tendency. Aspirin taken at doses of ≤325 mg and ≤100 mg per day for ≥2 days can increase the odds of suffering a gout attack by 81% and 91% respectively. This effect may potentially be worsened by high purine diets, diuretics, and kidney disease, but is eliminated by the urate lowering drug allopurinol. Daily low dose aspirin does not appear to worsen kidney function. In people with moderate chronic kidney disease (CKD) who do not have established cardiovascular disease, aspirin may reduce cardiovascular risk without significantly increasing the risk of bleeding. Aspirin should not be given to children or adolescents under the age of 16 to control cold or influenza symptoms, as this has been linked with Reye's syndrome.
Gastrointestinal
Aspirin increases the risk of upper gastrointestinal bleeding. Enteric coating on aspirin may be used in manufacturing to prevent release of aspirin into the stomach to reduce gastric harm, but enteric coating does not reduce gastrointestinal bleeding risk. Enteric-coated aspirin may not be as effective at reducing blood clot risk. Combining aspirin with other NSAIDs has been shown to further increase the risk of gastrointestinal bleeding. Using aspirin in combination with clopidogrel or warfarin also increases the risk of upper gastrointestinal bleeding.
Blockade of COX-1 by aspirin apparently results in the upregulation of COX-2 as part of a gastric defense. There is no clear evidence that simultaneous use of a COX-2 inhibitor with aspirin may increase the risk of gastrointestinal injury.
"Buffering" is an additional method used with the intent to mitigate gastrointestinal bleeding, such as by preventing aspirin from concentrating in the walls of the stomach, although the benefits of buffered aspirin are disputed. Almost any buffering agent used in antacids can be used; Bufferin, for example, uses magnesium oxide. Other preparations use calcium carbonate. Gas-forming agents in effervescent tablet and powder formulations can also double as a buffering agent, one example being sodium bicarbonate, used in Alka-Seltzer.
Taking vitamin C with aspirin has been investigated as a method of protecting the stomach lining. In trials vitamin C-releasing aspirin (ASA-VitC) or a buffered aspirin formulation containing vitamin C was found to cause less stomach damage than aspirin alone.
Retinal vein occlusion
It is a widespread habit among eye specialists (ophthalmologists) to prescribe aspirin as an add-on medication for patients with retinal vein occlusion (RVO), such as central retinal vein occlusion (CRVO) and branch retinal vein occlusion (BRVO). The reason for this widespread use is its proven effectiveness in major systemic venous thrombotic disorders, and it has been assumed that it may be similarly beneficial in various types of retinal vein occlusion.
However, a large-scale investigation based on data of nearly 700 patients showed "that aspirin or other antiplatelet aggregating agents or anticoagulants adversely influence the visual outcome in patients with CRVO and hemi-CRVO, without any evidence of protective or beneficial effect". Several expert groups, including the Royal College of Ophthalmologists, recommended against the use of antithrombotic drugs (incl. aspirin) for patients with RVO.
Central effects
Large doses of salicylate, a metabolite of aspirin, cause temporary tinnitus (ringing in the ears) based on experiments in rats, via the action on arachidonic acid and NMDA receptors cascade.
Reye's syndrome
Reye's syndrome, a rare but severe illness characterized by acute encephalopathy and fatty liver, can occur when children or adolescents are given aspirin for a fever or other illness or infection. From 1981 to 1997, 1207 cases of Reye's syndrome in people younger than 18 were reported to the US Centers for Disease Control and Prevention (CDC). Of these, 93% reported being ill in the three weeks preceding the onset of Reye's syndrome, most commonly with a respiratory infection, chickenpox, or diarrhea. Salicylates were detectable in 81.9% of children for whom test results were reported. After the association between Reye's syndrome and aspirin was reported, and safety measures to prevent it (including a Surgeon General's warning, and changes to the labeling of aspirin-containing drugs) were implemented, aspirin taken by children declined considerably in the United States, as did the number of reported cases of Reye's syndrome; a similar decline was found in the United Kingdom after warnings against pediatric aspirin use were issued. The US Food and Drug Administration recommends aspirin (or aspirin-containing products) should not be given to anyone under the age of 12 who has a fever, and the UK National Health Service recommends children who are under 16 years of age should not take aspirin, unless it is on the advice of a doctor.
Skin
For a small number of people, taking aspirin can result in symptoms including hives, swelling, and headache. Aspirin can exacerbate symptoms among those with chronic hives, or create acute symptoms of hives. These responses can be due to allergic reactions to aspirin, or more often due to its effect of inhibiting the COX-1 enzyme. Skin reactions may also be linked to systemic contraindications, such as NSAID-precipitated bronchospasm, or to atopy.
Aspirin and other NSAIDs, such as ibuprofen, may delay the healing of skin wounds. Earlier findings from two small, low-quality trials suggested a benefit with aspirin (alongside compression therapy) on venous leg ulcer healing time and leg ulcer size; however, larger, more recent studies of higher quality have been unable to corroborate these outcomes. As such, further research is required to clarify the role of aspirin in this context.
Other adverse effects
Aspirin can induce swelling of skin tissues in some people. In one study, angioedema appeared one to six hours after ingesting aspirin in some of the people. However, when the aspirin was taken alone, it did not cause angioedema in these people; the angioedema appeared only when the aspirin had been taken in combination with another NSAID.
Aspirin causes an increased risk of cerebral microbleeds, which appear on MRI scans as hypointense (dark) patches of 5 to 10mm or smaller.
A study of a group with a mean dosage of aspirin of 270mg per day estimated an average absolute risk increase in intracerebral hemorrhage (ICH) of 12 events per 10,000 persons. In comparison, the estimated absolute risk reduction in myocardial infarction was 137 events per 10,000 persons, and a reduction of 39 events per 10,000 persons in ischemic stroke. In cases where ICH already has occurred, aspirin use results in higher mortality, with a dose of about 250mg per day resulting in a relative risk of death within three months after the ICH around 2.5 (95% confidence interval 1.3 to 4.6).
Aspirin and other NSAIDs can cause abnormally high blood levels of potassium by inducing a hyporeninemic hypoaldosteronism state via inhibition of prostaglandin synthesis; however, these agents do not typically cause hyperkalemia by themselves in the setting of normal renal function and euvolemic state.
Use of low-dose aspirin before a surgical procedure has been associated with an increased risk of bleeding events in some patients; however, ceasing aspirin prior to surgery has also been associated with an increase in major adverse cardiac events. An analysis of multiple studies found a three-fold increase in adverse events such as myocardial infarction in patients who ceased aspirin prior to surgery. The analysis found that the risk depends on the type of surgery being performed and the patient's indication for aspirin use.
On 9 July 2015, the US Food and Drug Administration (FDA) toughened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAID). Aspirin is an NSAID but is not affected by the new warnings.
Overdose
Aspirin overdose can be acute or chronic. In acute poisoning, a single large dose is taken; in chronic poisoning, higher than normal doses are taken over a period of time. Acute overdose has a mortality rate of 2%. Chronic overdose is more commonly lethal, with a mortality rate of 25%; chronic overdose may be especially severe in children. Toxicity is managed with a number of potential treatments, including activated charcoal, intravenous dextrose and normal saline, sodium bicarbonate, and dialysis. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels in general range from 30 to 100mg/L after usual therapeutic doses, 50–300mg/L in people taking high doses and 700–1400mg/L following acute overdose. Salicylate is also produced as a result of exposure to bismuth subsalicylate, methyl salicylate, and sodium salicylate.
Interactions
Aspirin is known to interact with other drugs. For example, acetazolamide and ammonium chloride are known to enhance the intoxicating effect of salicylates, and alcohol also increases the gastrointestinal bleeding associated with these types of drugs. Aspirin is known to displace a number of drugs from protein-binding sites in the blood, including the antidiabetic drugs tolbutamide and chlorpropamide, warfarin, methotrexate, phenytoin, probenecid, valproic acid (as well as interfering with beta oxidation, an important part of valproate metabolism), and other NSAIDs. Corticosteroids may also reduce the concentration of aspirin. Other NSAIDs, such as ibuprofen and naproxen, may reduce the antiplatelet effect of aspirin, although limited evidence suggests this may not result in a reduced cardioprotective effect. Analgesic doses of aspirin decrease the sodium loss induced by spironolactone in the urine; however, this does not reduce the antihypertensive effects of spironolactone. Furthermore, antiplatelet doses of aspirin are deemed too small to produce an interaction with spironolactone. Aspirin is known to compete with penicillin G for renal tubular secretion. Aspirin may also inhibit the absorption of vitamin C.
Research
The ISIS-2 trial demonstrated that aspirin, at a dose of 160mg daily for one month, decreased mortality by 21% among participants with a suspected myocardial infarction in the first five weeks. A single daily dose of 324mg of aspirin for 12 weeks has a highly protective effect against acute myocardial infarction and death in men with unstable angina.
Bipolar disorder
Aspirin has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder. However, meta-analytic evidence is based on very few studies and does not suggest any efficacy of aspirin in the treatment of bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain.
Infectious diseases
Several studies have investigated the anti-infective properties of aspirin for bacterial, viral, and parasitic infections. Aspirin was demonstrated to limit platelet activation induced by Staphylococcus aureus and Enterococcus faecalis and to reduce streptococcal adhesion to heart valves. In patients with tuberculous meningitis, the addition of aspirin reduced the risk of new cerebral infarction [RR = 0.52 (0.29-0.92)]. Growing evidence also supports a role for aspirin against bacterial and fungal biofilms.
Cancer prevention
Evidence from observational studies was conflicting on the effect of aspirin in breast cancer prevention; a randomized controlled trial showed that aspirin had no significant effect in reducing breast cancer, thus further studies are needed to clarify the effect of aspirin in cancer prevention.
In gardening
There are many anecdotal reports that aspirin can improve plant growth and resistance, though most research has involved salicylic acid rather than aspirin.
Veterinary medicine
Aspirin is sometimes used in veterinary medicine as an anticoagulant or to relieve pain associated with musculoskeletal inflammation or osteoarthritis. Aspirin should be given to animals only under the direct supervision of a veterinarian, as adverse effects—including gastrointestinal issues—are common. An aspirin overdose in any species may result in salicylate poisoning, characterized by hemorrhaging, seizures, coma, and even death.
Dogs are better able to tolerate aspirin than cats are. Cats metabolize aspirin slowly because they lack the glucuronide conjugates that aid in the excretion of aspirin, making it potentially toxic if dosing is not spaced out properly. No clinical signs of toxicosis occurred when cats were given 25mg/kg of aspirin every 48 hours for 4 weeks, but the recommended dose for relief of pain and fever and for treating blood clotting diseases in cats is 10mg/kg every 48 hours to allow for metabolization.
References
Further reading
External links
1897 in Germany
1897 in science
Acetate esters
Acetylsalicylic acids
Antiplatelet drugs
Drugs developed by Bayer
Brands that became generic
Chemical substances for emergency medicine
Commercialization of traditional medicines
Covalent inhibitors
Equine medications
German inventions
Hepatotoxins
Nonsteroidal anti-inflammatory drugs
Salicylic acids
Salicylyl esters
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Aspirin | [
"Chemistry"
] | 11,494 | [
"Chemicals in medicine",
"Chemical substances for emergency medicine"
] |
1,635 | https://en.wikipedia.org/wiki/Kolmogorov%20complexity | In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963 and is a generalization of classical information theory.
The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem.
In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see below); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts. Kolmogorov complexity is the length of the ultimately compressed version of a file (i.e., anything which can be put in a computer). Formally, it is the length of a shortest program from which the file can be reconstructed. While Kolmogorov complexity is uncomputable, various approaches to approximating it have been proposed and reviewed.
Definition
Intuition
Consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab , and
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "write ab 16 times", which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., "write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7" which has 38 characters. Hence the operation of writing the first string can be said to have "less complexity" than writing the second.
More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string's size, are not considered to be complex.
The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII).
We could, alternatively, choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which, on input w, outputs string x, then the concatenated string <M> w is a description of x. For theoretical analysis, this approach is more suited for constructing detailed formal proofs and is generally preferred in the research literature. In this article, an informal approach is discussed.
Any string s has at least one description. For example, the second string above is output by the pseudo-code:
function GenerateString2()
return "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"
whereas the first string is output by the (much shorter) pseudo-code:
function GenerateString1()
return "ab" × 16
If a description d(s) of a string s is of minimal length (i.e., using the fewest bits), it is called a minimal description of s, and the length of d(s) (i.e. the number of bits in the minimal description) is the Kolmogorov complexity of s, written K(s). Symbolically,
K(s) = |d(s)|.
The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem).
Plain Kolmogorov complexity C
There are two definitions of Kolmogorov complexity: plain and prefix-free. The plain complexity is the minimal description length of any program, and is denoted C(x), while the prefix-free complexity is the minimal description length of any program encoded in a prefix-free code, and is denoted K(x). The plain complexity is more intuitive, but the prefix-free complexity is easier to study.
By default, all equations hold only up to an additive constant. For example, f(x) ≤ g(x) really means that f(x) ≤ g(x) + O(1), that is, there exists a constant c such that f(x) ≤ g(x) + c for all x.
Let U be a computable function mapping finite binary strings to binary strings. It is a universal function if, and only if, for any computable f, we can encode the function in a "program" s_f, such that U(s_f x) = f(x) for all strings x. We can think of U as a program interpreter, which takes in an initial segment describing the program, followed by data that the program should process.
One problem with plain complexity is that C(xy) is not bounded by C(x) + C(y) plus a constant, because intuitively speaking, there is no general way to tell where to divide an output string just by looking at the concatenated string. We can divide it by specifying the length of x or y, but that would take extra symbols. Indeed, for any constant c there exist strings x and y such that C(xy) ≥ C(x) + C(y) + c.
Typically, inequalities with plain complexity have a logarithmic correction term, such as O(log C(x)), on one side, whereas the same inequalities with prefix-free complexity hold up to only O(1).
The main problem with plain complexity is that there is something extra sneaked into a program. A program not only represents something with its code, but also represents its own length. In particular, a program p may represent a binary number as large as its length |p|, simply by its own length. Stated in another way, it is as if we are using a termination symbol to denote where a word ends, and so we are not using 2 symbols, but 3. To fix this defect, we introduce the prefix-free Kolmogorov complexity.
Prefix-free Kolmogorov complexity K
A prefix-free code is a subset of {0,1}* such that given any two different words in the set, neither is a prefix of the other. The benefit of a prefix-free code is that we can build a machine that reads words from the code forward in one direction, and as soon as it reads the last symbol of the word, it knows that the word is finished, and does not need to backtrack or use a termination symbol.
Define a prefix-free Turing machine to be a Turing machine that comes with a prefix-free code, such that the Turing machine can read any string from the code in one direction, and stop reading as soon as it reads the last symbol. Afterwards, it may compute on a work tape and write to a write tape, but it cannot move its read-head anymore.
This gives us the following formal way to describe K.
Fix a prefix-free universal Turing machine, with three tapes: a read tape infinite in one direction, a work tape infinite in two directions, and a write tape infinite in one direction.
The machine can read from the read tape in one direction only (no backtracking), and write to the write tape in one direction only. It can read and write the work tape in both directions.
The work tape and write tape start with all zeros. The read tape starts with an input prefix code, followed by all zeros.
Let S ⊆ {0,1}* be the prefix-free code used by the universal Turing machine.
Note that some universal Turing machines may not be programmable with prefix codes. We must pick only a prefix-free universal Turing machine.
The prefix-free complexity of a string x is the length of the shortest prefix code that makes the machine output x: K(x) = min { |p| : p ∈ S, U(p) = x }.
Invariance theorem
Informal treatment
There are some description languages which are optimal, in the following sense: given any description of an object in a description language, said description may be used in the optimal description language with a constant overhead. The constant depends only on the languages involved, not on the description of the object, nor the object being described.
Here is an example of an optimal description language. A description will have two parts:
The first part describes another description language.
The second part is a description of the object in that language.
In more technical terms, the first part of a description is a computer program (specifically: a compiler for the object's language, written in the description language), with the second part being the input to that computer program which produces the object as output.
The invariance theorem follows: Given any description language L, the optimal description language is at least as efficient as L, with some constant overhead.
Proof: Any description D in L can be converted into a description in the optimal language by first describing L as a computer program P (part 1), and then using the original description D as input to that program (part 2). The
total length of this new description D′ is (approximately):
|D′ | = |P| + |D|
The length of P is a constant that doesn't depend on D. So, there is at most a constant overhead, regardless of the object described. Therefore, the optimal language is universal up to this additive constant.
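As an informal illustration of the two-part scheme, the following Python sketch treats Python source passed to exec as the fixed "optimal" language (the function and variable names are illustrative assumptions, not standard terminology). A description consists of a decoder program (part 1) and its input data (part 2):
def describe_and_rebuild(decoder_source, data):
    # Part 1: a program (Python source defining decode) describing a language.
    # Part 2: a description of the object in that language.
    namespace = {}
    exec(decoder_source, namespace)
    return namespace["decode"](data)

# A toy "language" of run-length descriptions, given by a short decoder:
decoder = "def decode(data): return data[0] * int(data[1:])"
print(describe_and_rebuild(decoder, "a32"))   # 32 copies of 'a'
print(len(decoder) + len("a32"))              # total description length
The length of the decoder plays the role of the constant overhead in the invariance theorem: it does not depend on the data being described.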
A more formal treatment
Theorem: If K1 and K2 are the complexity functions relative to Turing complete description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that
∀s. −c ≤ K1(s) − K2(s) ≤ c.
Proof: By symmetry, it suffices to prove that there is some constant c such that for all strings s
K1(s) ≤ K2(s) + c.
Now, suppose there is a program in the language L1 which acts as an interpreter for L2:
function InterpretLanguage(string p)
where p is a program in L2. The interpreter is characterized by the following property:
Running InterpretLanguage on input p returns the result of running p.
Thus, if P is a program in L2 which is a minimal description of s, then InterpretLanguage(P) returns the string s. The length of this description of s is the sum of
The length of the program InterpretLanguage, which we can take to be the constant c.
The length of P which by definition is K2(s).
This proves the desired upper bound.
History and context
Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures).
The concept and theory of Kolmogorov Complexity is based on a crucial theorem first discovered by Ray Solomonoff, who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference" as part of his invention of algorithmic probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in Information and Control.
Andrey Kolmogorov later independently published this theorem in Problems Inform. Transmission in 1965. Gregory Chaitin also presents this theorem in J. ACM – Chaitin's paper was submitted October 1966 and revised in December 1968, and cites both Solomonoff's and Kolmogorov's papers.
The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm up to an additive constant that depends on the algorithms, but not on the strings themselves. Solomonoff used this algorithm and the code lengths it allows to define a "universal probability" of a string on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information.
When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority. For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence, while Algorithmic Probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist Ming Li considers this an example of the Matthew effect: "...to everyone who has, more will be given..."
There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs, and is mainly due to Leonid Levin (1974).
An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov.
In the late 1990s and early 2000s, methods developed to approximate Kolmogorov complexity relied on popular compression algorithms like LZW, which made it difficult or impossible to provide any estimate for short strings, until a method based on algorithmic probability was introduced, offering the only alternative to compression-based methods.
Basic results
We write K(x, y) to mean K(⟨x, y⟩), where ⟨x, y⟩ means some fixed way to code for a tuple of strings x and y.
Inequalities
We omit additive factors of O(1).
Theorem. K(x) ≤ C(x) + 2 log2 C(x).
Proof. Take any program for the universal Turing machine used to define plain complexity, and convert it to a prefix-free program by first coding the length of the program in binary, then converting the length to a prefix-free coding. For example, suppose the program has length 9; since 9 is 1001 in binary, we can convert it as 1001 → 11 00 00 11 → 11 00 00 11 01, where we double each digit, then add a termination code. The prefix-free universal Turing machine can then read in any program for the other machine as a concatenation of three parts: a description of the simulator, the prefix-free encoding of the program length, and the program itself. The first part programs the machine to simulate the other machine, and is a constant overhead O(1). The second part has length 2 log2 C(x) + O(1). The third part has length C(x).
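The length encoding used in this proof can be written out directly. The following Python sketch (the function names are our own) doubles each binary digit of the length and appends "01" as a termination code, so a reader scanning the stream always knows where the length ends and the program begins:
def encode_length(n):
    bits = bin(n)[2:]                            # e.g. 9 -> "1001"
    return "".join(b * 2 for b in bits) + "01"   # "11000011" + "01"

def decode_length(stream):
    # Read bit pairs until the "01" termination code; data pairs are only "00" or "11".
    bits, i = [], 0
    while stream[i:i + 2] != "01":
        bits.append(stream[i])
        i += 2
    return int("".join(bits), 2), stream[i + 2:]

code = encode_length(9) + "someprogram"
print(decode_length(code))                       # (9, 'someprogram')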
Theorem: There exists a constant c such that for all x, C(x) ≤ |x| + c. More succinctly, C(x) ≤ |x|. Similarly, K(x) ≤ |x| + 2 log2 |x|, and K(x) ≤ |x| + K(|x|).
Proof. For the plain complexity, just write a program that simply copies the input to the output. For the prefix-free complexity, we need to first describe the length of the string, before writing out the string itself.
Theorem. (extra information bounds, subadditivity) K(x | y) ≤ K(x) ≤ K(x, y) ≤ K(x) + K(y | x) ≤ K(x) + K(y).
Note that there is no general way to bound K(x) or K(y) in terms of K(xy). There are strings such that the whole string is easy to describe, but its substrings are very hard to describe.
Theorem. (symmetry of information) K(x, y) = K(x) + K(y | x, K(x)) = K(y) + K(x | y, K(y)).
Proof. One side (≤) is simple. For the other side (≥), we need to use a counting argument.
Theorem. (information non-increase) For any computable function f, we have K(f(x)) ≤ K(x) + K(f).
Proof. Program the Turing machine to read two subsequent programs, one describing the function f and one describing the string x. Then run both programs on the work tape to produce f(x), and write it out.
Uncomputability of Kolmogorov complexity
A naive attempt at a program to compute K
At first glance it might seem trivial to write a program which can compute K(s) for any s, such as the following:
function KolmogorovComplexity(string s)
for i = 1 to infinity:
for each string p of length exactly i
if isValidProgram(p) and evaluate(p) == s
return i
This program iterates through all possible programs (by iterating through all possible strings and only considering those which are valid programs), starting with the shortest. Each program is executed to find the result produced by that program, comparing it to the input s. If the result matches then the length of the program is returned.
However this will not work because some of the programs p tested will not terminate, e.g. if they contain infinite loops. There is no way to avoid all of these programs by testing them in some way before executing them due to the non-computability of the halting problem.
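The enumeration idea can still be run in a toy setting. The sketch below is our own construction (not a standard algorithm): it enumerates short Python expressions over a small alphabet as stand-in "programs". Every such expression halts, which is precisely the property that fails for genuine programs and makes K uncomputable:
import itertools

def shortest_description(target, alphabet='a"*0123456789', max_len=6):
    # Enumerate candidate "programs" from shortest to longest; return the first
    # expression that evaluates to the target. This only works because every
    # candidate in this toy language terminates.
    for length in range(1, max_len + 1):
        for chars in itertools.product(alphabet, repeat=length):
            candidate = "".join(chars)
            try:
                if eval(candidate, {"__builtins__": {}}) == target:
                    return candidate
            except Exception:
                continue                 # not a valid expression
    return None

print(shortest_description("aaaaaaaa"))  # '"a"*8' in this toy language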
What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following.
Formal proof of uncomputability of K
Theorem: There exist strings of arbitrarily large Kolmogorov complexity. Formally: for each natural number n, there is a string s with K(s) ≥ n.
Proof: Otherwise all of the infinitely many possible finite strings could be generated by the finitely many programs with a complexity below n bits.
Theorem: K is not a computable function. In other words, there is no program which takes any string s as input and produces the integer K(s) as output.
The following proof by contradiction uses a simple Pascal-like language to denote programs; for sake of proof simplicity assume its description (i.e. an interpreter) to have a length of 1400000 bits.
Assume for contradiction there is a program
function KolmogorovComplexity(string s)
which takes as input a string s and returns K(s). All programs are of finite length so, for sake of proof simplicity, assume it to be 7000000000 bits.
Now, consider the following program of length 1288 bits:
function GenerateComplexString()
for i = 1 to infinity:
for each string s of length exactly i
if KolmogorovComplexity(s) ≥ 8000000000
return s
Using KolmogorovComplexity as a subroutine, the program tries every string, starting with the shortest, until it returns a string with Kolmogorov complexity at least 8000000000 bits, i.e. a string that cannot be produced by any program shorter than 8000000000 bits. However, the overall length of the above program that produced s is only 7001401288 bits (= 7000000000 + 1400000 + 1288), which is a contradiction. (If the code of KolmogorovComplexity is shorter, the contradiction remains. If it is longer, the constant used in GenerateComplexString can always be changed appropriately.)
The above proof uses a contradiction similar to that of the Berry paradox: "The smallest positive integer that cannot be defined in fewer than twenty English words". It is also possible to show the non-computability of K by reduction from the non-computability of the halting problem H, since K and H are Turing-equivalent.
There is a corollary, humorously called the "full employment theorem" in the programming language community, stating that there is no perfect size-optimizing compiler.
Chain rule for Kolmogorov complexity
The chain rule for Kolmogorov complexity states that there exists a constant c such that for all X and Y:
K(X,Y) = K(X) + K(Y|X) + c*max(1,log(K(X,Y))).
It states that the shortest program that reproduces X and Y is no more than a logarithmic term larger than a program to reproduce X and a program to reproduce Y given X. Using this statement, one can define an analogue of mutual information for Kolmogorov complexity.
Compression
It is straightforward to compute upper bounds for K(s) – simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the length of the resulting string – concretely, the size of a self-extracting archive in the given language.
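In practice, any off-the-shelf compressor gives such an upper bound. The following Python sketch uses zlib as one arbitrary choice of compressor and ignores the constant-size decompressor, so it illustrates the idea rather than computing a tight bound; the exact byte counts will vary:
import os, zlib

def compression_upper_bound(s):
    # Size of the compressed data; a full bound would add the constant size
    # of a decompressor for the chosen language.
    return len(zlib.compress(s, 9))

regular = b"ab" * 16_000                  # highly regular, 32,000 bytes
noisy = os.urandom(32_000)                # almost certainly incompressible

print(compression_upper_bound(regular))   # tiny compared with 32,000
print(compression_upper_bound(noisy))     # close to (or slightly above) 32,000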
A string s is compressible by a number c if it has a description whose length does not exceed |s| − c bits. This is equivalent to saying that K(s) ≤ |s| − c. Otherwise, s is incompressible by c. A string incompressible by 1 is said to be simply incompressible – by the pigeonhole principle, which applies because every compressed string maps to only one uncompressed string, incompressible strings must exist, since there are 2^n bit strings of length n, but only 2^n − 1 shorter strings, that is, strings of length less than n (i.e. with length 0, 1, ..., n − 1).
For the same reason, most strings are complex in the sense that they cannot be significantly compressed – their K(s) is not much smaller than |s|, the length of s in bits. To make this precise, fix a value of n. There are 2^n bitstrings of length n. The uniform probability distribution on the space of these bitstrings assigns exactly equal weight 2^(−n) to each string of length n.
Theorem: With the uniform probability distribution on the space of bitstrings of length n, the probability that a string is incompressible by c is at least 1 − 2^(−c+1) + 2^(−n).
To prove the theorem, note that the number of descriptions of length not exceeding n − c is given by the geometric series:
1 + 2 + 2^2 + ... + 2^(n−c) = 2^(n−c+1) − 1.
There remain at least
2^n − 2^(n−c+1) + 1
bitstrings of length n that are incompressible by c. To determine the probability, divide by 2^n.
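A quick numeric check of the counting argument, with n = 20 and c = 8 chosen arbitrarily for illustration:
n, c = 20, 8

strings = 2 ** n                          # all bitstrings of length n
descriptions = 2 ** (n - c + 1) - 1       # 1 + 2 + ... + 2**(n-c)
incompressible = strings - descriptions   # at least this many are incompressible by c

print(incompressible / strings)           # about 0.9922
print(1 - 2 ** (-(c - 1)) + 2 ** (-n))    # the bound from the theorem: identical here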
Chaitin's incompleteness theorem
By the above theorem, most strings are complex in the sense that they cannot be described in any significantly "compressed" way. However, it turns out that the fact that a specific string is complex cannot be formally proven, if the complexity of the string is above a certain threshold. The precise formalization is as follows. First, fix a particular axiomatic system S for the natural numbers. The axiomatic system has to be powerful enough so that, to certain assertions A about complexity of strings, one can associate a formula FA in S. This association must have the following property:
If FA is provable from the axioms of S, then the corresponding assertion A must be true. This "formalization" can be achieved based on a Gödel numbering.
Theorem: There exists a constant L (which only depends on S and on the choice of description language) such that there does not exist a string s for which the statement
K(s) ≥ L (as formalized in S)
can be proven within S.
Proof Idea: The proof of this result is modeled on a self-referential construction used in Berry's paradox. We first obtain a program which enumerates the proofs within S, and we specify a procedure P which takes as an input an integer L and prints the strings x which are within proofs within S of the statement K(x) ≥ L. By then setting L to be greater than the length of this procedure P, we have that the required length of a program to print x, stated in K(x) ≥ L as being at least L, is in fact less than L, since the string x was printed by the procedure P. This is a contradiction. So it is not possible for the proof system S to prove K(x) ≥ L for L arbitrarily large, in particular, for L larger than the length of the procedure P (which is finite).
Proof:
We can find an effective enumeration of all the formal proofs in S by some procedure
function NthProof(int n)
which takes as input n and outputs some proof. This function enumerates all proofs. Some of these are proofs for formulas we do not care about here, since every possible proof in the language of S is produced for some n. Some of these are complexity formulas of the form K(s) ≥ n where s and n are constants in the language of S. There is a procedure
function NthProofProvesComplexityFormula(int n)
which determines whether the nth proof actually proves a complexity formula K(s) ≥ L. The strings s, and the integer L in turn, are computable by procedure:
function StringNthProof(int n)
function ComplexityLowerBoundNthProof(int n)
Consider the following procedure:
function GenerateProvablyComplexString(int n)
for i = 1 to infinity:
if NthProofProvesComplexityFormula(i) and ComplexityLowerBoundNthProof(i) ≥ n
return StringNthProof(i)
Given an n, this procedure tries every proof until it finds a string and a proof in the formal system S of the formula K(s) ≥ L for some L ≥ n; if no such proof exists, it loops forever.
Finally, consider the program consisting of all these procedure definitions, and a main call:
GenerateProvablyComplexString(n0)
where the constant n0 will be determined later on. The overall program length can be expressed as U+log2(n0), where U is some constant and log2(n0) represents the length of the integer value n0, under the reasonable assumption that it is encoded in binary digits. We will choose n0 to be greater than the program length, that is, such that n0 > U+log2(n0). This is clearly true for n0 sufficiently large, because the left hand side grows linearly in n0 whilst the right hand side grows logarithmically in n0 up to the fixed constant U.
Then no proof of the form "K(s)≥L" with L≥n0 can be obtained in S, as can be seen by an indirect argument:
If ComplexityLowerBoundNthProof(i) could return a value ≥ n0, then the loop inside GenerateProvablyComplexString would eventually terminate, and that procedure would return a string s such that
K(s) ≥ n0 > U + log2(n0).
But s was produced by the above program, whose length is only U + log2(n0), so its Kolmogorov complexity is at most U + log2(n0) < n0. This is a contradiction, Q.E.D.
As a consequence, the above program, with the chosen value of n0, must loop forever.
Similar ideas are used to prove the properties of Chaitin's constant.
Minimum message length
The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968. MML is Bayesian (i.e. it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (i.e. the inference transforms with a re-parametrisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (i.e. even for very hard problems, MML will converge to any underlying model) and efficiency (i.e. the MML model will converge to any true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe (1999) showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity).
Kolmogorov randomness
Kolmogorov randomness defines a string (usually of bits) as being random if and only if every computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program that is shorter than the string itself. For every universal computer, there is at least one algorithmically random string of each length. Whether a particular string is random, however, depends on the specific universal computer that is chosen. This is because a universal computer can have a particular string hard-coded in itself, and a program running on this universal computer can then simply refer to this hard-coded string using a short sequence of bits (i.e. much shorter than the string itself).
This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet. These algorithmically random sequences can be defined in three equivalent ways. One way uses an effective analogue of measure theory; another uses effective martingales. The third way defines an infinite sequence to be random if the prefix-free Kolmogorov complexity of its initial segments grows quickly enough — there must be a constant c such that the complexity of an initial segment of length n is always at least n−c. This definition, unlike the definition of randomness for a finite string, is not affected by which universal machine is used to define prefix-free Kolmogorov complexity.
Relation to entropy
For dynamical systems, entropy rate and algorithmic complexity of the trajectories are related by a theorem of Brudno, which asserts that, for almost all trajectories, the algorithmic complexity per symbol of the trajectory equals the entropy rate (Kolmogorov–Sinai entropy) of the system.
It can be shown that for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source.
Theorem. (Theorem 14.2.5) The conditional Kolmogorov complexity of a binary string x1...xn containing k ones satisfies K(x1...xn | n) ≤ n·Hb(k/n) + 2 log2 n + c, where Hb is the binary entropy function (not to be confused with the entropy rate).
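As an informal numerical illustration of this relationship, the Python sketch below uses zlib's compressed size as a stand-in upper bound for Kolmogorov complexity, with an arbitrarily chosen bias and sample size; the compressed length per symbol of a memoryless binary source approaches the binary entropy of its bias:
import math, random, zlib

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

random.seed(0)
p, n = 0.1, 200_000
bits = random.choices([0, 1], weights=[1 - p, p], k=n)
# Pack the bits into bytes so the compressor sees the raw sequence.
packed = bytes(sum(b << i for i, b in enumerate(bits[j:j + 8])) for j in range(0, n, 8))

print(round(binary_entropy(p), 3))                      # 0.469 bits per symbol
print(round(8 * len(zlib.compress(packed, 9)) / n, 3))  # somewhat above 0.469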
Halting problem
The Kolmogorov complexity function is equivalent to deciding the halting problem.
If we have a halting oracle, then the Kolmogorov complexity of a string can be computed by simply trying every halting program, in lexicographic order, until one of them outputs the string.
The other direction is much more involved. It shows that given a Kolmogorov complexity function K, we can construct a function G such that G(n) ≥ BB(n) for all large n, where BB is the Busy Beaver shift function (also denoted S(n)). By modifying the function at lower values of n we get an upper bound on BB, which solves the halting problem.
Consider this program, which takes n as input, uses the given complexity function K, and computes G(n).
List all strings of length at most 2n.
For each such string x, enumerate all (prefix-free) programs of length K(x) until one of them does output x. Record its runtime t_x.
Output the largest t_x; this value is G(n).
We prove by contradiction that G(n) ≥ BB(n) for all large n.
Let M be a Busy Beaver of length n, so that its runtime is BB(n). Consider this (prefix-free) program Q, which takes no input:
Run the program M, and record its runtime T.
Generate all programs with length at most 2n. Run every one of them for up to T steps. Note the outputs of those that have halted.
Output the string with the lowest lexicographic order that has not been output by any of those.
Let the string output by the program Q be x0.
The program Q has length n + O(log n) + c, where n comes from the length of the Busy Beaver M, O(log n) comes from using the (prefix-free) Elias delta code for the number n, and c comes from the rest of the program. Therefore, K(x0) ≤ n + O(log n) + c ≤ 2n for all big n. Further, since there are only so many possible programs with length at most 2n, we have |x0| ≤ 2n by the pigeonhole principle.
By assumption, G(n) < BB(n), so every string of length at most 2n has a minimal program with runtime at most G(n) < BB(n) = T. Thus, the string x0 has a minimal program with runtime at most T. Further, that program has length K(x0) ≤ 2n. This contradicts how x0 was constructed.
Universal probability
Fix a universal Turing machine U, the same one used to define the (prefix-free) Kolmogorov complexity. Define the (prefix-free) universal probability of a string x to be P(x) = Σ 2^(−|p|), where the sum is over all programs p for which U(p) = x. In other words, it is the probability that, given a uniformly random binary stream as input, the universal Turing machine would halt after reading a certain prefix of the stream, and output x.
Note. U(p) = x does not mean that the input stream is p, but that the universal Turing machine would halt at some point after reading the initial segment p, without reading any further input, and that, when it halts, it has written x to the output tape.
Theorem. (Theorem 14.11.1) K(x) = −log2 P(x) + O(1); that is, up to a bounded additive term, the prefix-free complexity of a string equals the negative logarithm of its universal probability.
Conditional versions
The conditional Kolmogorov complexity of two strings x and y, written K(x | y), is, roughly speaking, defined as the Kolmogorov complexity of x given y as an auxiliary input to the procedure.
There is also a length-conditional complexity K(x | |x|), which is the complexity of x given the length of x as known/input.
Time-bounded complexity
Time-bounded Kolmogorov complexity is a modified version of Kolmogorov complexity where the space of programs to be searched for a solution is confined to only programs that can run within some pre-defined number of steps. It is hypothesised that the possibility of the existence of an efficient algorithm for determining approximate time-bounded Kolmogorov complexity is related to the question of whether true one-way functions exist.
See also
Berry paradox
Code golf
Data compression
Descriptive complexity theory
Grammar induction
Inductive reasoning
Kolmogorov structure function
Levenshtein distance
Solomonoff's theory of inductive inference
Sample entropy
Notes
References
Further reading
External links
The Legacy of Andrei Nikolaevich Kolmogorov
Chaitin's online publications
Solomonoff's IDSIA page
Generalizations of algorithmic information by J. Schmidhuber
Tromp's lambda calculus computer model offers a concrete definition of K()
Universal AI based on Kolmogorov Complexity by M. Hutter:
David Dowe's Minimum Message Length (MML) and Occam's razor pages.
Computability theory
Descriptive complexity
Measures of complexity
Computational complexity theory
Data compression | Kolmogorov complexity | [
"Mathematics",
"Technology",
"Engineering"
] | 7,052 | [
"Telecommunications engineering",
"Applied mathematics",
"Mathematical logic",
"Computer science",
"Information theory",
"Computability theory"
] |
1,697 | https://en.wikipedia.org/wiki/Ambergris | Ambergris, also called ambergrease or grey amber, is a solid, waxy, flammable substance of a dull grey or blackish colour produced in the digestive system of sperm whales. Freshly produced ambergris has a marine, fecal odor. It acquires a sweet, earthy scent as it ages, commonly likened to the fragrance of isopropyl alcohol without the vaporous chemical astringency.
Ambergris has been highly valued by perfume makers as a fixative that allows the scent to last much longer, although it has been mostly replaced by synthetic ambroxide. It is sometimes used in cooking.
Dogs are attracted to the smell of ambergris and are sometimes used by ambergris searchers.
Etymology
The English word amber derives from Middle Persian ʾmbl, traveling via Arabic (), Middle Latin ambar, and Middle French ambre to be adopted in Middle English in the 14th century.
The word "ambergris" comes from the Old French ambre gris or "grey amber". The addition of "grey" came about when, in the Romance languages, the sense of the word "amber" was extended to Baltic amber (fossil resin), as white or yellow amber (ambre jaune), from as early as the late 13th century. This fossilized resin subsequently became the dominant (and now exclusive) sense of "amber", leaving "ambergris" as the word for the whale secretion.
The archaic alternate spelling "ambergrease" arose as an eggcorn from the phonetic pronunciation of "ambergris," encouraged by the substance's waxy texture.
Formation
Ambergris is formed from a secretion of the bile duct in the intestines of the sperm whale, and can be found floating on the sea or washed up on coastlines. It is sometimes found in the abdomens of dead sperm whales. Because the beaks of giant squids have been discovered within lumps of ambergris, scientists have hypothesized that the substance is produced by the whale's gastrointestinal tract to ease the passage of hard, sharp objects that it may have eaten.
Ambergris is passed like fecal matter. It is speculated that an ambergris mass too large to be passed through the intestines is expelled via the mouth, but this remains under debate. Another theory states that an ambergris mass is formed when the colon of a whale is enlarged by a blockage from intestinal worms and cephalopod parts resulting in the death of the whale and the mass being excreted into the sea. Ambergris takes years to form. Christopher Kemp, the author of Floating Gold: A Natural (and Unnatural) History of Ambergris, says that it is only produced by sperm whales, and only by an estimated one percent of them. Ambergris is rare; once expelled by a whale, it often floats for years before making landfall. The slim chances of finding ambergris and the legal ambiguity involved led perfume makers away from ambergris, and led chemists on a quest to find viable alternatives.
Ambergris is found primarily in the Atlantic Ocean and on the coasts of South Africa; Brazil; Madagascar; the East Indies; The Maldives; China; Japan; India; Australia; New Zealand; and the Molucca Islands. Most commercially collected ambergris comes from the Bahamas in the Atlantic, particularly New Providence. In 2021, fishermen found a 127 kg (280-pound) piece of ambergris off the coast of Yemen, valued at US$1.5 million. Fossilised ambergris from 1.75 million years ago has also been found.
Physical properties
Ambergris is found in lumps of various shapes and sizes, ranging widely in weight. When initially expelled by or removed from the whale, the fatty precursor of ambergris is pale white in color (sometimes streaked with black), soft, with a strong fecal smell. Following months to years of photodegradation and oxidation in the ocean, this precursor gradually hardens, developing a dark grey or black color, a crusty and waxy texture, and a peculiar odor that is at once sweet, earthy, marine, and animalic. Its scent has been generally described as a vastly richer and smoother version of isopropanol without its stinging harshness. In this developed condition, ambergris has a specific gravity ranging from 0.780 to 0.926 (meaning it floats in water). On gentle heating it melts to a fatty, yellow resinous liquid, and at higher temperatures it is volatilised into a white vapor. It is soluble in ether, and in volatile and fixed oils.
Chemical properties
Ambergris is relatively nonreactive to acid. White crystals of a terpenoid known as ambrein, discovered by Leopold Ružička and Fernand Lardon in 1946, can be separated from ambergris by heating raw ambergris in alcohol, then allowing the resulting solution to cool. Breakdown of the relatively scentless ambrein through oxidation produces ambroxide and ambrinol, the main odor components of ambergris.
Ambroxide is now produced synthetically and used extensively in the perfume industry.
Applications
Ambergris has been mostly known for its use in creating perfume and fragrance much like musk. Perfumes based on ambergris still exist.
Ambergris has historically been used in food and drink. A serving of eggs and ambergris was reportedly King Charles II of England's favorite dish. A recipe for Rum Shrub liqueur from the mid 19th century called for a thread of ambergris to be added to rum, almonds, cloves, cassia, and the peel of oranges in making a cocktail from The English and Australian Cookery Book. It has been used as a flavoring agent in Turkish coffee and in hot chocolate in 18th century Europe. The substance is considered an aphrodisiac in some cultures.
Ancient Egyptians burned ambergris as incense, while in modern Egypt ambergris is used for scenting cigarettes. The ancient Chinese called the substance "dragon's spittle fragrance". During the Black Death in Europe, people believed that carrying a ball of ambergris could help prevent them from contracting plague. This was because the fragrance covered the smell of the air which was believed to be a cause of plague.
During the Middle Ages, Europeans used ambergris as a medication for headaches, colds, epilepsy, and other ailments.
Legality
From the 18th to the mid-19th century, the whaling industry prospered. By some reports, nearly 50,000 whales, including sperm whales, were killed each year. Throughout the 19th century, "millions of whales were killed for their oil, whalebone, and ambergris" to fuel profits, and they soon became endangered as a species as a result. Due to studies showing that the whale populations were being threatened, the International Whaling Commission instituted a moratorium on commercial whaling in 1982. Although ambergris is not harvested from whales, many countries also ban the trade of ambergris as part of the more general ban on the hunting and exploitation of whales.
Urine, faeces, and ambergris (that has been naturally excreted by a sperm whale) are waste products not considered parts or derivatives of a CITES species and are therefore not covered by the provisions of the convention.
Countries where ambergris trade is illegal include:
Australia – Under federal law, the export and import of ambergris for commercial purposes is banned by the Environment Protection and Biodiversity Conservation Act 1999. The various states and territories have additional laws regarding ambergris.
United States – The possession and trade of ambergris is prohibited by the Endangered Species Act of 1973.
India – Sale or possession is illegal under the Wild Life (Protection) Act, 1972.
Countries where trade of ambergris is legal include:
United Kingdom
France
Switzerland
Maldives
References
Further reading
External links
Natural History Magazine Article (from 1933): Floating Gold – The Romance of Ambergris
Ambergris – A Pathfinder and Annotated Bibliography
On the chemistry and ethics of Ambergris
Pathologist finds €500,000 ‘floating gold’ in dead whale in Canary Islands
Perfume ingredients
Whale products
Animal glandular products
Natural products
Traditional medicine | Ambergris | [
"Chemistry"
] | 1,735 | [
"Natural products",
"Medicinal chemistry"
] |
1,751 | https://en.wikipedia.org/wiki/Alexander%20Anderson%20%28mathematician%29 | Alexander Anderson ( in Aberdeen – in Paris) was a Scottish mathematician.
Life
He was born in Aberdeen, possibly in 1582, according to a print which suggests he was aged 35 in 1617. It is unknown where he was educated, but it is likely that he initially studied writing and philosophy (the "belles lettres") in his home city of Aberdeen.
He then went to the continent, and was a professor of mathematics in Paris by the start of the seventeenth century. There he published or edited, between the years 1612 and 1619, various geometric and algebraic tracts. He described himself as having "more wisdom than riches" in the dedication of Vindiciae Archimedis (1616).
He was first cousin of David Anderson of Finshaugh, a celebrated mathematician, and David Anderson's daughter was the mother of mathematician James Gregory.
Work
He was selected by the executors of François Viète to revise and edit Viète's manuscript works. Viète died in 1603, and it is unclear if Anderson knew him, but his eminence was sufficient to attract the attention of the dead man's executors. Anderson corrected and expanded upon Viète's manuscripts, which extended known geometry to the new algebra that used general symbols to represent quantities.
Publications
The known works of Anderson amount to six thin quarto volumes, and as the last of them was published in 1619, it is probable that the author died soon after that year, but the precise date is unknown. He wrote other works that have since been lost. From his last work it appears he wrote another piece, "A Treatise on the Mensuration of Solids," and copies of two other works, Ex. Math. and Stereometria Triangulorum Sphæricorum, were in the possession of Sir Alexander Hume until after the middle of the seventeenth century.
1612: Supplementum Apollonii Redivivi
1615: Ad Angularum Sectionem Analytica Theoremata F. Vieta
1615: Pro Zetetico Apolloniani
1615: Francisci Vietae Fontenaeensis
1616: Vindiciae Archimedis
1619: Alexandri Andersoni Exercitationum Mathematicarum Decas Prima
See also
Marin Getaldić
Denis Henrion
Frans van Schooten
References
Attribution:
Further reading
1580s births
1620 deaths
People from Aberdeen
Algebraists
British geometers
Scottish scholars and academics
Academic staff of the University of Paris
17th-century Scottish mathematicians | Alexander Anderson (mathematician) | [
"Mathematics"
] | 518 | [
"Geometers",
"Geometry",
"Algebra",
"Algebraists"
] |
1,776 | https://en.wikipedia.org/wiki/Arthritis | Arthritis is a general medical term used to describe a disorder that affects joints. Symptoms generally include joint pain and stiffness. Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints. In certain types of arthritis, other organs such as the skin are also affected. Onset can be gradual or sudden.
There are several types of arthritis. The most common forms are osteoarthritis (most commonly seen in weightbearing joints) and rheumatoid arthritis. Osteoarthritis usually occurs as an individual ages and often affects the hips, knees, shoulders, and fingers. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet. Other types of arthritis include gout, lupus, and septic arthritis. These are inflammatory based types of rheumatic disease.
Early treatment for arthritis commonly includes resting the affected joint and conservative measures such as heating or icing. Weight loss and exercise may also be useful to reduce the force across a weightbearing joint. Medication intervention for symptoms depends on the form of arthritis. These may include anti-inflammatory medications such as ibuprofen and paracetamol (acetaminophen). With severe cases of arthritis, joint replacement surgery may be necessary.
Osteoarthritis is the most common form of arthritis, affecting more than 3.8% of people, while rheumatoid arthritis is the second most common, affecting about 0.24% of people. In Australia about 15% of people are affected by arthritis, while in the United States more than 20% have a type of arthritis. Overall, arthritis becomes more common with age. Arthritis is a common reason people are unable to carry out their work and can result in decreased ability to complete activities of daily living. The term arthritis is derived from arthr- (meaning 'joint') and -itis (meaning 'inflammation').
Classification
There are several diseases where joint pain is the most prominent symptom. Generally when a person has "arthritis" it means that they have one of the following diseases:
Hemarthrosis
Osteoarthritis
Rheumatoid arthritis
Gout and pseudo-gout
Septic arthritis
Ankylosing spondylitis
Juvenile idiopathic arthritis
Still's disease
Psoriatic arthritis
Joint pain can also be a symptom of other diseases. In this case, the person may not have arthritis and instead have one of the following diseases:
Psoriasis
Reactive arthritis
Ehlers–Danlos syndrome
Iron overload
Hepatitis
Lyme disease
Sjögren's disease
Hashimoto's thyroiditis
Celiac disease
Non-celiac gluten sensitivity
Inflammatory bowel disease (including Crohn's disease and ulcerative colitis)
Henoch–Schönlein purpura
Hyperimmunoglobulinemia D with recurrent fever
Sarcoidosis
Whipple's disease
TNF receptor associated periodic syndrome
Granulomatosis with polyangiitis (and many other vasculitis syndromes)
Familial Mediterranean fever
Systemic lupus erythematosus
An undifferentiated arthritis is an arthritis that does not fit into well-known clinical disease categories, possibly being an early stage of a definite rheumatic disease.
Signs and symptoms
Pain in varying severity is a common symptom in most types of arthritis. Other symptoms include swelling, joint stiffness, redness, and aching around the joint(s). Arthritic disorders like lupus and rheumatoid arthritis can affect other organs in the body, leading to a variety of symptoms including:
Inability to use the hand or walk
Stiffness in one or more joints
Rash or itch
Malaise and fatigue
Weight loss
Poor sleep
Muscle aches and pains
Tenderness
Difficulty moving the joint
Causes
Some common risk factors that can increase the chances of developing osteoarthritis include obesity, prior injury to the joint, type of joint, and muscle strength. The risk factors with the strongest association for developing inflammatory arthritis such as rheumatoid arthritis are the female sex, a family history of rheumatoid arthritis, age, obesity, previous joint damage from an injury, and exposure to tobacco smoke.
Risk factors
There are common risk factors that increase a person's chance of developing arthritis later in adulthood. Some of these are modifiable while others are not. Smoking has been linked to an increased susceptibility of developing arthritis, particularly rheumatoid arthritis.
Diagnosis
Diagnosis is made by clinical examination from an appropriate health professional, and may be supported by tests such as radiologic imaging and blood tests, depending on the type of suspected arthritis. Pain patterns may vary depending on the arthritis type and the location. Rheumatoid arthritis is generally worse in the morning and associated with stiffness lasting over 30 minutes.
Important features of diagnosis are rate of onset, pattern of joint involvement, symmetry of symptoms, early morning stiffness, tenderness, locking of joint with inactivity, aggravating and relieving factors, and other systemic symptoms. Physical examination may include checking joints, evaluating gait, examination of skin for dermatological findings and symptoms of pulmonary inflammation. Physical examination may confirm the diagnosis or may indicate systemic disease. Chest radiographs are often used to follow progression or help assess severity.
Screening blood tests for suspected arthritis include: rheumatoid factor, antinuclear factor (ANF), extractable nuclear antigen, and specific antibodies.
Rheumatoid arthritis patients often have elevated erythrocyte sedimentation rate (ESR, also known as sed rate) or C-reactive protein (CRP) levels, which indicates the presence of an inflammatory process in the body. Anti-cyclic citrullinated peptide (anti-CCP) antibodies and rheumatoid factor (RF) are two more common blood tests when assessing for rheumatoid arthritis.
Imaging tests like X-rays are commonly utilized to diagnose and monitor arthritis. Other imaging tests for rheumatoid arthritis that may be considered include computed tomography (CT) scanning, positron emission tomography (PET) scanning, bone scanning, and dual-energy X-ray absorptiometry (DEXA).
Osteoarthritis
Osteoarthritis (OA) is the most common form of arthritis. It affects humans and other animals, notably dogs, but also occurs in cats and horses. It can affect both the larger joints (e.g. knee, hip, and shoulder) and the smaller joints (e.g. fingers, toes, and feet) of the body. The disease is caused by daily wear and tear of the joint. This process can progress more rapidly as a result of injury to the joint. Osteoarthritis is caused by the breakdown of the smooth surface between two bones, known as cartilage, which can eventually lead to the two opposing bones coming into direct contact and eroding one another. OA symptoms typically begin with minor pain during physical activity, but can eventually progress to be present at rest. The pain can be debilitating and prevent one from doing activities that they would normally do as part of their daily routine. OA typically affects the weight-bearing joints, such as the back, knee, and hip, due to the mechanical nature of this disease process. Unlike rheumatoid arthritis, osteoarthritis is most commonly a disease of the elderly. The strongest predictor of osteoarthritis is increased age, likely due to the declining ability of chondrocytes to maintain the structural integrity of cartilage. More than 30 percent of women have some degree of osteoarthritis by age 65. One of the primary tools for diagnosing OA is X-ray imaging of the joint. X-ray findings consistent with OA include joint space narrowing (due to cartilage breakdown), bone spurs, sclerosis, and bone cysts.
Rheumatoid arthritis
Rheumatoid arthritis (RA) is a disorder in which the body's own immune system starts to attack body tissues, specifically the cartilage at the ends of bones, known as articular cartilage. The attack is directed not only at the joint but also at many other parts of the body. RA often affects joints in the fingers, wrists, knees, and elbows, is symmetrical (appears on both sides of the body), and can lead to severe progressive deformity in a matter of years if not adequately treated. RA usually begins earlier in life than OA and commonly affects people aged 20 and above. In children, the disorder can present with a skin rash, fever, pain, disability, and limitations in daily activities. With earlier diagnosis and appropriate aggressive treatment, many individuals can obtain control of their symptoms, leading to a better quality of life compared to those without treatment.
One of the main triggers of bone erosion in the joints in rheumatoid arthritis is inflammation of the synovium (lining of the joint capsule), caused in part by the production of pro-inflammatory cytokines and receptor activator of nuclear factor kappa B ligand (RANKL), a cell surface protein present in Th17 cells and osteoblasts. Osteoclast activity can be directly induced by osteoblasts through the RANK/RANKL mechanism.
Lupus
Lupus is an autoimmune collagen vascular disorder that can be present with severe arthritis. Other features of lupus include a skin rash, extreme photosensitivity, hair loss, kidney problems, lung fibrosis and constant joint pain.
Gout
Gout is caused by deposition of uric acid crystals in the joints, causing inflammation. There is also an uncommon form of gouty arthritis caused by the formation of rhomboid crystals of calcium pyrophosphate known as pseudogout. In the early stages, the gouty arthritis usually occurs in one joint, but with time, it can occur in many joints and be quite crippling. The joints in gout can often become swollen and lose function. Gouty arthritis can become particularly painful and potentially debilitating when gout cannot successfully be treated. When uric acid levels and gout symptoms cannot be controlled with standard gout medicines that decrease the production of uric acid (e.g., allopurinol) or increase uric acid elimination from the body through the kidneys (e.g., probenecid), this can be referred to as refractory chronic gout.
Comparison of types
Other
Infectious arthritis is another severe form of arthritis that is sometimes referred to as septic arthritis. It presents with symptoms of infection that can include sudden onset of chills, fever, and joint pain. The condition is caused by bacteria that spread through the bloodstream from elsewhere in the body, infect a joint, and begin to erode cartilage. Infectious arthritis must be diagnosed rapidly and treated promptly to prevent irreversible joint damage. Only about 1% of cases of infectious arthritis are due to any of a wide variety of viruses. The virus SARS-CoV-2, which causes COVID-19, has been added to the list of viruses that can cause infectious arthritis. SARS-CoV-2 causes reactive arthritis rather than local septic arthritis.
Psoriasis can develop into psoriatic arthritis. With psoriatic arthritis, most individuals develop the skin symptoms first and then the joint-related symptoms. The typical features are continuous joint pain, stiffness, and swelling, as in other forms of arthritis. The disease does recur with periods of remission, but there is no known cure for the disorder. Treatment currently revolves around decreasing autoimmune attacks with immune-suppressing medications. A small percentage develop a severely painful and destructive form of arthritis which destroys the small joints in the hands and can lead to permanent disability and loss of hand function.
Treatment
There is no known cure for arthritis and rheumatic diseases. Treatment options vary depending on the type of arthritis and include physical therapy, exercise and diet, orthopedic bracing, and oral and topical medications. Joint replacement surgery may be required to repair damage, restore function, or relieve pain.
Physical therapy
In general, studies have shown that physical exercise of the affected joint can noticeably improve long-term pain relief. Furthermore, exercise of the arthritic joint is encouraged to maintain the health of the particular joint and the overall body of the person.
Individuals with arthritis can benefit from both physical and occupational therapy. In arthritis the joints become stiff and the range of movement can be limited. Physical therapy has been shown to significantly improve function, decrease pain, and delay the need for surgical intervention in advanced cases. Exercise prescribed by a physical therapist has been shown to be more effective than medications in treating osteoarthritis of the knee. Exercise often focuses on improving muscle strength, endurance, and flexibility. In some cases, exercises may be designed to train balance. Occupational therapy can provide assistance with activities. Assistive technology helps compensate for a person's disability by reducing physical barriers and improving the use of a damaged body part, typically after an amputation. Assistive technology devices can be customized to the patient or bought commercially.
Medications
There are several types of medications that are used for the treatment of arthritis. Treatment typically begins with medications that have the fewest side effects with further medications being added if insufficiently effective.
Depending on the type of arthritis, the medications that are given may be different. For example, the first-line treatment for osteoarthritis is acetaminophen (paracetamol) while for inflammatory arthritis it involves non-steroidal anti-inflammatory drugs (NSAIDs) like ibuprofen. Opioids and NSAIDs may be less well tolerated. However, topical NSAIDs may have better safety profiles than oral NSAIDs. For more severe cases of osteoarthritis, intra-articular corticosteroid injections may also be considered.
The drugs to treat rheumatoid arthritis (RA) range from corticosteroids to monoclonal antibodies given intravenously. Due to the autoimmune nature of RA, treatments may include not only pain medications and anti-inflammatory drugs, but also another category of drugs called disease-modifying antirheumatic drugs (DMARDs). csDMARDs, TNF biologics and tsDMARDs are specific kinds of DMARDs that are recommended for treatment. Treatment with DMARDs is designed to slow down the progression of RA by initiating an adaptive immune response, in part by CD4+ T helper (Th) cells, specifically Th17 cells. Th17 cells are present in higher quantities at the site of bone destruction in joints and produce inflammatory cytokines associated with inflammation, such as interleukin-17 (IL-17).
Surgery
A number of surgical interventions have been incorporated into the treatment of arthritis since the 1950s. The primary surgical treatment option for arthritis is joint replacement surgery, known as arthroplasty. Common joints that are replaced due to arthritis include the shoulder, hip, and knee. Arthroscopic surgery for osteoarthritis of the knee provides no additional benefit to patients when compared to optimized physical and medical therapy. A joint replacement can last anywhere from 15 to 30 years, depending on the patient. Following joint replacement surgery, patients can expect to return to several physical activities, such as swimming, tennis, and golf.
Adaptive aids
People with hand arthritis can have trouble with simple activities of daily living tasks (ADLs), such as turning a key in a lock or opening jars, as these activities can be cumbersome and painful. There are adaptive aids or assistive devices (ADs) available to help with these tasks, but they are generally more costly than conventional products with the same function. It is now possible to 3-D print adaptive aids, which have been released as open source hardware to reduce patient costs. Adaptive aids can significantly help arthritis patients and the vast majority of those with arthritis need and use them.
Alternative medicine
Further research is required to determine if transcutaneous electrical nerve stimulation (TENS) for knee osteoarthritis is effective for controlling pain.
Low level laser therapy may be considered for relief of pain and stiffness associated with arthritis. Evidence of benefit is tentative.
Pulsed electromagnetic field therapy (PEMFT) has tentative evidence supporting improved functioning but no evidence of improved pain in osteoarthritis. The FDA has not approved PEMFT for the treatment of arthritis. In Canada, PEMF devices are legally licensed by Health Canada for the treatment of pain associated with arthritic conditions.
Epidemiology
Arthritis is predominantly a disease of the elderly, but children can also be affected by the disease. Arthritis is more common in women than men at all ages and affects all races, ethnic groups and cultures. In the United States a CDC survey based on data from 2013 to 2015 showed 54.4 million (22.7%) adults had self-reported doctor-diagnosed arthritis, and 23.7 million (43.5% of those with arthritis) had arthritis-attributable activity limitation (AAAL). With an aging population, this number is expected to increase. Adults with co-morbid conditions, such as heart disease, diabetes, and obesity, were seen to have a higher than average prevalence of doctor-diagnosed arthritis (49.3%, 47.1%, and 30.6% respectively).
Disability due to musculoskeletal disorders increased by 45% from 1990 to 2010. Of these, osteoarthritis is the fastest increasing major health condition. Among the many reports on the increased prevalence of musculoskeletal conditions, data from Africa are lacking and underestimated. A systematic review assessed the prevalence of arthritis in Africa and included twenty population-based and seven hospital-based studies. The majority of studies, twelve, were from South Africa. Nine studies were well-conducted, eleven studies were of moderate quality, and seven studies were conducted poorly. The results of the systematic review were as follows:
Rheumatoid arthritis: 0.1% in Algeria (urban setting); 0.6% in Democratic Republic of Congo (urban setting); 2.5% and 0.07% in urban and rural settings in South Africa respectively; 0.3% in Egypt (rural setting), 0.4% in Lesotho (rural setting)
Osteoarthritis: 55.1% in South Africa (urban setting); ranged from 29.5 to 82.7% in South Africans aged 65 years and older
Knee osteoarthritis has the highest prevalence from all types of osteoarthritis, with 33.1% in rural South Africa
Ankylosing spondylitis: 0.1% in South Africa (rural setting)
Psoriatic arthritis: 4.4% in South Africa (urban setting)
Gout: 0.7% in South Africa (urban setting)
Juvenile idiopathic arthritis: 0.3% in Egypt (urban setting)
History
Evidence of osteoarthritis and potentially inflammatory arthritis has been discovered in dinosaurs. The first known traces of human arthritis date back as far as 4500 BC. In early reports, arthritis was frequently referred to as the most common ailment of prehistoric peoples. It was noted in skeletal remains of Native Americans found in Tennessee and parts of what is now Olathe, Kansas. Evidence of arthritis has been found throughout history, from Ötzi, a mummy () found along the border of modern Italy and Austria, to the Egyptian mummies .
In 1715, William Musgrave published the second edition of his most important medical work, De arthritide symptomatica, which concerned arthritis and its effects. Augustin Jacob Landré-Beauvais, a 28-year-old resident physician at Salpêtrière Asylum in France was the first person to describe the symptoms of rheumatoid arthritis. Though Landré-Beauvais' classification of rheumatoid arthritis as a relative of gout was inaccurate, his dissertation encouraged others to further study the disease.
John Charnley completed the first hip replacement (total hip arthroplasty) in England to treat arthritis in the 1960s.
Society and culture
Arthritis is the most common cause of disability in the United States. More than 20 million individuals with arthritis have severe limitations in function on a daily basis. Absenteeism and frequent visits to the physician are common in individuals who have arthritis. Arthritis can make it difficult for individuals to be physically active and some become home bound.
It is estimated that the total cost of arthritis cases is close to $100 billion of which almost 50% is from lost earnings.
Terminology
The term is derived from arthr- (from ) and -itis (from , , ), the latter suffix having come to be associated with inflammation.
The word arthritides is the plural form of arthritis, and denotes the collective group of arthritis-like conditions.
See also
Antiarthritics
Arthritis Care (charity in the UK)
Arthritis Foundation (US not-for-profit)
Knee arthritis
Osteoimmunology
Weather pains
References
External links
American College of Rheumatology – US professional society of rheumatologists
National Institute of Arthritis and Musculoskeletal and Skin Diseases - US National Institute of Arthritis and Musculoskeletal and Skin Diseases
The Ultimate Arthritis Diet Arthritis Foundation
Aging-associated diseases
Inflammations
Rheumatology
Skeletal disorders | Arthritis | [
"Biology"
] | 4,487 | [
"Senescence",
"Aging-associated diseases"
] |
1,778 | https://en.wikipedia.org/wiki/Acetylene | Acetylene (systematic name: ethyne) is the chemical compound with the formula and structure . It is a hydrocarbon and the simplest alkyne. This colorless gas is widely used as a fuel and a chemical building block. It is unstable in its pure form and thus is usually handled as a solution. Pure acetylene is odorless, but commercial grades usually have a marked odor due to impurities such as divinyl sulfide and phosphine.
As an alkyne, acetylene is unsaturated because its two carbon atoms are bonded together in a triple bond. The carbon–carbon triple bond places all four atoms in the same straight line, with CCH bond angles of 180°.
Discovery
Acetylene was discovered in 1836 by Edmund Davy, who identified it as a "new carburet of hydrogen". It was an accidental discovery while attempting to isolate potassium metal. By heating potassium carbonate with carbon at very high temperatures, he produced a residue of what is now known as potassium carbide, (K2C2), which reacted with water to release the new gas. It was rediscovered in 1860 by French chemist Marcellin Berthelot, who coined the name acétylène. Berthelot's empirical formula for acetylene (C4H2), as well as the alternative name "quadricarbure d'hydrogène" (hydrogen quadricarbide), were incorrect because many chemists at that time used the wrong atomic mass for carbon (6 instead of 12). Berthelot was able to prepare this gas by passing vapours of organic compounds (methanol, ethanol, etc.) through a red hot tube and collecting the effluent. He also found that acetylene was formed by sparking electricity through mixed cyanogen and hydrogen gases. Berthelot later obtained acetylene directly by passing hydrogen between the poles of a carbon arc.
Preparation
Partial combustion of hydrocarbons
Since the 1950s, acetylene has mainly been manufactured by the partial combustion of methane in the US, much of the EU, and many other countries:
It is a recovered side product in production of ethylene by cracking of hydrocarbons. Approximately 400,000 tonnes were produced by this method in 1983. Its presence in ethylene is usually undesirable because of its explosive character and its ability to poison Ziegler–Natta catalysts. It is selectively hydrogenated into ethylene, usually using Pd–Ag catalysts.
Dehydrogenation of alkanes
The heaviest alkanes in petroleum and natural gas are cracked into lighter molecules which are dehydrogenated at high temperature:
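For methane, the net dehydrogenation step can be written (a standard textbook representation, supplied here for clarity rather than quoted from the source) as 2 CH4 → HC≡CH + 3 H2.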
This last reaction is implemented in the process of anaerobic decomposition of methane by microwave plasma.
Carbochemical method
The first acetylene produced was by Edmund Davy in 1836, via potassium carbide.
Acetylene was historically produced by hydrolysis (reaction with water) of calcium carbide:
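The hydrolysis follows the standard stoichiometry (shown here for clarity): CaC2 + 2 H2O → C2H2 + Ca(OH)2.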
This reaction was discovered by Friedrich Wöhler in 1862, but a suitable commercial scale production method which allowed acetylene to be put into wider scale use was not found until 1892 by the Canadian inventor Thomas Willson while searching for a viable commercial production method for aluminum.
As late as the early 21st century, China, Japan, and Eastern Europe produced acetylene primarily by this method.
The use of this technology has since declined worldwide with the notable exception of China, with its emphasis on coal-based chemical industry, as of 2013. Otherwise oil has increasingly supplanted coal as the chief source of reduced carbon.
Calcium carbide production requires high temperatures, ~2000 °C, necessitating the use of an electric arc furnace. In the US, this process was an important part of the late-19th century revolution in chemistry enabled by the massive hydroelectric power project at Niagara Falls.
Bonding
In terms of valence bond theory, in each carbon atom the 2s orbital hybridizes with one 2p orbital, thus forming an sp hybrid. The other two 2p orbitals remain unhybridized. The two ends of the two sp hybrid orbitals overlap to form a strong σ valence bond between the carbons, while on each of the other two ends hydrogen atoms attach, also by σ bonds. The two unchanged 2p orbitals form a pair of weaker π bonds.
Since acetylene is a linear symmetrical molecule, it possesses the D∞h point group.
Physical properties
Changes of state
At atmospheric pressure, acetylene cannot exist as a liquid and does not have a melting point. The triple point on the phase diagram corresponds to the melting point (−80.8 °C) at the minimal pressure at which liquid acetylene can exist (1.27 atm). At temperatures below the triple point, solid acetylene can change directly to the vapour (gas) by sublimation. The sublimation point at atmospheric pressure is −84.0 °C.
Other
At room temperature, the solubility of acetylene in acetone is 27.9 g per kg. For the same amount of dimethylformamide (DMF), the solubility is 51 g. At 20.26 bar, the solubility increases to 689.0 and 628.0 g for acetone and DMF, respectively. These solvents are used in pressurized gas cylinders.
Applications
Welding
Approximately 20% of acetylene is supplied by the industrial gases industry for oxyacetylene gas welding and cutting due to the high temperature of the flame. Combustion of acetylene with oxygen produces a flame of over , releasing 11.8 kJ/g. Oxygen with acetylene is the hottest burning common gas mixture. Acetylene is the third-hottest natural chemical flame after dicyanoacetylene's and cyanogen at . Oxy-acetylene welding was a popular welding process in previous decades. The development and advantages of arc-based welding processes have made oxy-fuel welding nearly extinct for many applications. Acetylene usage for welding has dropped significantly. On the other hand, oxy-acetylene welding equipment is quite versatile – not only because the torch is preferred for some sorts of iron or steel welding (as in certain artistic applications), but also because it lends itself easily to brazing, braze-welding, metal heating (for annealing or tempering, bending or forming), the loosening of corroded nuts and bolts, and other applications. Bell Canada cable-repair technicians still use portable acetylene-fuelled torch kits as a soldering tool for sealing lead sleeve splices in manholes and in some aerial locations. Oxyacetylene welding may also be used in areas where electricity is not readily accessible. Oxyacetylene cutting is used in many metal fabrication shops. For use in welding and cutting, the working pressures must be controlled by a regulator, since above , if subjected to a shockwave (caused, for example, by a flashback), acetylene decomposes explosively into hydrogen and carbon.
Chemicals
Acetylene is useful for many processes, but few are conducted on a commercial scale.
One of the major chemical applications is ethynylation of formaldehyde.
Acetylene adds to aldehydes and ketones to form α-ethynyl alcohols:
The reaction gives butynediol, with propargyl alcohol as the by-product. Copper acetylide is used as the catalyst.
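As an illustrative sketch (standard stoichiometry, not quoted from the source), the ethynylation of formaldehyde can be written as HC≡CH + CH2O → HC≡C–CH2OH (propargyl alcohol), followed by HC≡C–CH2OH + CH2O → HOCH2–C≡C–CH2OH (1,4-butynediol).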
In addition to ethynylation, acetylene reacts with carbon monoxide to give acrylic acid or acrylic esters. Metal catalysts are required. These derivatives form products such as acrylic fibers, glasses, paints, resins, and polymers. Except in China, use of acetylene as a chemical feedstock has declined by 70% from 1965 to 2007 owing to cost and environmental considerations. In China, acetylene is a major precursor to vinyl chloride.
Historical uses
Prior to the widespread use of petrochemicals, coal-derived acetylene was a building block for several industrial chemicals. Thus acetylene can be hydrated to give acetaldehyde, which in turn can be oxidized to acetic acid. Processes leading to acrylates were also commercialized. Almost all of these processes became obsolete with the availability of petroleum-derived ethylene and propylene.
Niche applications
In 1881, the Russian chemist Mikhail Kucherov described the hydration of acetylene to acetaldehyde using catalysts such as mercury(II) bromide. Before the advent of the Wacker process, this reaction was conducted on an industrial scale.
The polymerization of acetylene with Ziegler–Natta catalysts produces polyacetylene films. Polyacetylene, a chain of CH centres with alternating single and double bonds, was one of the first discovered organic semiconductors. Its reaction with iodine produces a highly electrically conducting material. Although such materials are not useful, these discoveries led to the developments of organic semiconductors, as recognized by the Nobel Prize in Chemistry in 2000 to Alan J. Heeger, Alan G MacDiarmid, and Hideki Shirakawa.
In the 1920s, pure acetylene was experimentally used as an inhalation anesthetic.
Acetylene is sometimes used for carburization (that is, hardening) of steel when the object is too large to fit into a furnace.
Acetylene is used to volatilize carbon in radiocarbon dating. The carbonaceous material in an archeological sample is treated with lithium metal in a small specialized research furnace to form lithium carbide (also known as lithium acetylide). The carbide can then be reacted with water, as usual, to form acetylene gas to feed into a mass spectrometer to measure the isotopic ratio of carbon-14 to carbon-12.
Acetylene combustion produces a strong, bright light and the ubiquity of carbide lamps drove much acetylene commercialization in the early 20th century. Common applications included coastal lighthouses, street lights, and automobile and mining headlamps. In most of these applications, direct combustion is a fire hazard, and so acetylene has been replaced, first by incandescent lighting and many years later by low-power/high-lumen LEDs. Nevertheless, acetylene lamps remain in limited use in remote or otherwise inaccessible areas and in countries with a weak or unreliable central electric grid.
Natural occurrence
The energy richness of the C≡C triple bond and the rather high solubility of acetylene in water make it a suitable substrate for bacteria, provided an adequate source is available. A number of bacteria living on acetylene have been identified. The enzyme acetylene hydratase catalyzes the hydration of acetylene to give acetaldehyde:
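C2H2 + H2O → CH3CHO (standard stoichiometry, shown here for clarity)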
Acetylene is a moderately common chemical in the universe, often associated with the atmospheres of gas giants. One curious discovery of acetylene is on Enceladus, a moon of Saturn. Natural acetylene is believed to form from catalytic decomposition of long-chain hydrocarbons at temperatures of and above. Since such temperatures are highly unlikely on such a small distant body, this discovery is potentially suggestive of catalytic reactions within that moon, making it a promising site to search for prebiotic chemistry.
Reactions
Vinylation reactions
In vinylation reactions, H−X compounds add across the triple bond. Alcohols and phenols add to acetylene to give vinyl ethers. Thiols give vinyl thioethers. Similarly, vinylpyrrolidone and vinylcarbazole are produced industrially by vinylation of 2-pyrrolidone and carbazole.
The hydration of acetylene is a vinylation reaction, but the resulting vinyl alcohol isomerizes to acetaldehyde. The reaction is catalyzed by mercury salts. This reaction once was the dominant technology for acetaldehyde production, but it has been displaced by the Wacker process, which affords acetaldehyde by oxidation of ethylene, a cheaper feedstock. A similar situation applies to the conversion of acetylene to the valuable vinyl chloride by hydrochlorination vs the oxychlorination of ethylene.
Vinyl acetate is used instead of acetylene for some vinylations, which are more accurately described as transvinylations. Higher esters of vinyl acetate have been used in the synthesis of vinyl formate.
Organometallic chemistry
Acetylene and its derivatives (2-butyne, diphenylacetylene, etc.) form complexes with transition metals. Its bonding to the metal is somewhat similar to that of ethylene complexes. These complexes are intermediates in many catalytic reactions such as alkyne trimerisation to benzene, tetramerization to cyclooctatetraene, and carbonylation to hydroquinone:
at basic conditions (50–, 20–).
Metal acetylides, species of the formula , are also common. Copper(I) acetylide and silver acetylide can be formed in aqueous solutions with ease due to a favorable solubility equilibrium.
Acid-base reactions
Acetylene has a pKa of 25 and can be deprotonated by a superbase to form an acetylide:
HC≡CH + RM → RH + HC≡CM
Various organometallic and inorganic reagents are effective.
Hydrogenation
Acetylene can be semihydrogenated to ethylene, providing a feedstock for a variety of polyethylene plastics. Halogens add to the triple bond.
Safety and handling
Acetylene is not especially toxic, but when generated from calcium carbide (CaC2), it can contain toxic impurities such as traces of phosphine and arsine, which give it a distinct garlic-like smell. It is also highly flammable, as are most light hydrocarbons, hence its use in welding. Its most singular hazard is associated with its intrinsic instability, especially when it is pressurized: under certain conditions acetylene can react in an exothermic addition-type reaction to form a number of products, typically benzene and/or vinylacetylene, possibly in addition to carbon and hydrogen. Consequently, acetylene, if initiated by intense heat or a shockwave, can decompose explosively if the absolute pressure of the gas exceeds about . Most regulators and pressure gauges on equipment report gauge pressure, and the safe limit for acetylene therefore is 101 kPa gauge, or 15 psig. It is therefore supplied and stored dissolved in acetone or dimethylformamide (DMF), contained in a gas cylinder with a porous filling, which renders it safe to transport and use, given proper handling. Acetylene cylinders should be used in the upright position to avoid withdrawing acetone during use.
Information on safe storage of acetylene in upright cylinders is provided by the OSHA, Compressed Gas Association, United States Mine Safety and Health Administration (MSHA), EIGA, and other agencies.
Copper catalyses the decomposition of acetylene, and as a result acetylene should not be transported in copper pipes.
Cylinders should be stored in an area segregated from oxidizers to avoid exacerbated reaction in case of fire/leakage. Acetylene cylinders should not be stored in confined spaces, enclosed vehicles, garages, and buildings, to avoid unintended leakage leading to explosive atmosphere. In the US, National Electric Code (NEC) requires consideration for hazardous areas including those where acetylene may be released during accidents or leaks. Consideration may include electrical classification and use of listed Group A electrical components in US. Further information on determining the areas requiring special consideration is in NFPA 497. In Europe, ATEX also requires consideration for hazardous areas where flammable gases may be released during accidents or leaks.
References
External links
Acetylene Production Plant and Detailed Process
Acetylene at Chemistry Comes Alive!
Movie explaining acetylene formation from calcium carbide and the explosive limits forming fire hazards
Calcium Carbide & Acetylene at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Acetylene
Alkynes
Fuel gas
Industrial gases
Synthetic fuel technologies
Explosive gases | Acetylene | [
"Chemistry",
"Engineering"
] | 3,410 | [
"Alkynes",
"Welding",
"Petroleum technology",
"Organic compounds",
"Industrial gases",
"Explosive gases",
"Synthetic fuel technologies",
"Explosive chemicals",
"Mechanical engineering",
"Chemical process engineering"
] |
1,786 | https://en.wikipedia.org/wiki/Arabic%20numerals | The ten Arabic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) are the most commonly used symbols for writing numbers. The term often also implies a positional notation number with a decimal base, in particular when contrasted with Roman numerals. However the symbols are also used to write numbers in other bases, such as octal, as well as non-numerical information such as trademarks or license plate identifiers.
They are also called Western Arabic numerals, Western digits, European digits, Ghubār numerals, or Hindu–Arabic numerals due to positional notation (but not these digits) originating in India. The Oxford English Dictionary uses lowercase Arabic numerals while using the fully capitalized term Arabic Numerals for Eastern Arabic numerals. In contemporary society, the terms digits, numbers, and numerals often imply only these symbols, although this can only be inferred from context.
Europeans first learned of Arabic numerals , though their spread was a gradual process. After Italian scholar Fibonacci of Pisa encountered the numerals in the Algerian city of Béjaïa, his 13th-century work became crucial in making them known in Europe. However, their use was largely confined to Northern Italy until the invention of the printing press in the 15th century. European trade, books, and colonialism subsequently helped popularize the adoption of Arabic numerals around the world. The numerals are used worldwide—significantly beyond the contemporary spread of the Latin alphabet—and have become common in the writing systems where other numeral systems existed previously, such as Chinese and Japanese numerals.
History
Origin
Positional decimal notation including a zero symbol was developed in India, using symbols visually distinct from those that would eventually enter into international use. As the concept spread, the sets of symbols used in different regions diverged over time.
The immediate ancestors of the digits now commonly called "Arabic numerals" were introduced to Europe in the 10th century by Arabic speakers of Spain and North Africa, with digits at the time in wide use from Libya to Morocco. In the east from Egypt to Iraq and the Arabian Peninsula, the Arabs were using the Eastern Arabic numerals or "Mashriki" numerals: ٠, ١, ٢, ٣, ٤, ٥, ٦, ٧, ٨, ٩.
Al-Nasawi wrote in the early 11th century that mathematicians had not agreed on the form of the numerals, but most of them had agreed to train themselves with the forms now known as Eastern Arabic numerals. The oldest specimens of the written numerals available are from Egypt and date to 873–874 AD. They show three forms of the numeral "2" and two forms of the numeral "3", and these variations indicate the divergence between what later became known as the Eastern Arabic numerals and the Western Arabic numerals. The Western Arabic numerals came to be used in the Maghreb and Al-Andalus from the 10th century onward. Some amount of consistency in the Western Arabic numeral forms endured from the 10th century, found in a Latin manuscript of Isidore of Seville's from 976 and the Gerbertian abacus, into the 12th and 13th centuries, in early manuscripts of translations from the city of Toledo.
Calculations were originally performed using a dust board (, Latin: ), which involved writing symbols with a stylus and erasing them. The use of the dust board appears to have introduced a divergence in terminology as well: whereas the Hindu reckoning was called in the east, it was called 'calculation with dust' in the west. The numerals themselves were referred to in the west as 'dust figures' or 'dust letters'. Al-Uqlidisi later invented a system of calculations with ink and paper 'without board and erasing' ().
A popular myth claims that the symbols were designed to indicate their numeric value through the number of angles they contained, but there is no contemporary evidence of this, and the myth is difficult to reconcile with any digits past 4.
Adoption and spread
The first mentions of the numerals from 1 to 9 in the West are found in the 976 , an illuminated collection of various historical documents covering a period from antiquity to the 10th century in Hispania. Other texts show that numbers from 1 to 9 were occasionally supplemented by a placeholder known as , represented as a circle or wheel, reminiscent of the eventual symbol for zero. The Arabic term for zero is (), transliterated into Latin as , which became the English word cipher.
From the 980s, Gerbert of Aurillac (later Pope Sylvester II) used his position to spread knowledge of the numerals in Europe. Gerbert studied in Barcelona in his youth. He was known to have requested mathematical treatises concerning the astrolabe from Lupitus of Barcelona after he had returned to France.
The reception of Arabic numerals in the West was gradual and lukewarm, as other numeral systems circulated in addition to the older Roman numbers. As a discipline, the first to adopt Arabic numerals as part of their own writings were astronomers and astrologists, evidenced from manuscripts surviving from mid-12th-century Bavaria. Reinher of Paderborn (1140–1190) used the numerals in his calendrical tables to calculate the dates of Easter more easily in his text .
Italy
Leonardo Fibonacci was a Pisan mathematician who had studied in the Pisan trading colony of Bugia, in what is now Algeria, and he endeavored to promote the numeral system in Europe with his 1202 book :
When my father, who had been appointed by his country as public notary in the customs at Bugia acting for the Pisan merchants going there, was in charge, he summoned me to him while I was still a child, and having an eye to usefulness and future convenience, desired me to stay there and receive instruction in the school of accounting. There, when I had been introduced to the art of the Indians' nine symbols through remarkable teaching, knowledge of the art very soon pleased me above all else and I came to understand it.
The book's analysis highlighting the advantages of positional notation was widely influential. Likewise, Fibonacci's use of the Béjaïa digits in his exposition ultimately led to their widespread adoption in Europe. Fibonacci's work coincided with the European commercial revolution of the 12th and 13th centuries centered in Italy. Positional notation allowed complex calculations (such as currency conversion) to be completed more quickly than was possible with the Roman system. In addition, the system could handle larger numbers, did not require a separate reckoning tool, and allowed the user to check their work without repeating the entire procedure. Late medieval Italian merchants did not stop using Roman numerals or other reckoning tools: instead, Arabic numerals were adopted for use in addition to their preexisting methods.
Europe
By the late 14th century, only a few texts using Arabic numerals appeared outside of Italy. This suggests that the use of Arabic numerals in commercial practice, and the significant advantage they conferred, remained a virtual Italian monopoly until the late 15th century. This may in part have been due to language barriers: although Fibonacci's book was written in Latin, the Italian abacus traditions were predominantly written in Italian vernaculars that circulated in the private collections of abacus schools or individuals.
The European acceptance of the numerals was accelerated by the invention of the printing press, and they became widely known during the 15th century. Their use grew steadily in other centers of finance and trade, such as Lyon. Early evidence of their use in Britain includes: in England, an equal hour horary quadrant from 1396, a 1445 inscription on the tower of Heathfield Church, Sussex, a 1448 inscription on a wooden lych-gate of Bray Church, Berkshire, and a 1487 inscription on the belfry door at Piddletrenthide church, Dorset; and in Scotland, a 1470 inscription on the tomb of the first Earl of Huntly in Elgin Cathedral. In central Europe, the King of Hungary, Ladislaus the Posthumous, started the use of Arabic numerals, which appear for the first time in a royal document of 1456.
By the mid-16th century, they had been widely adopted in Europe, and by 1800 had almost completely replaced the use of counting boards and Roman numerals in accounting. Roman numerals were mostly relegated to niche uses such as years and numbers on clock faces.
Russia
Prior to the introduction of Arabic numerals, Cyrillic numerals, derived from the Cyrillic alphabet, were used by South and East Slavs. The system was used in Russia as late as the early 18th century, although it was formally replaced in official use by Peter the Great in 1699. Reasons for Peter's switch from the alphanumerical system are believed to go beyond a surface-level desire to imitate the West. Historian Peter Brown makes arguments for sociological, militaristic, and pedagogical reasons for the change. At a broad, societal level, Russian merchants, soldiers, and officials increasingly came into contact with counterparts from the West and became familiar with the communal use of Arabic numerals. Peter also covertly travelled throughout Northern Europe from 1697 to 1698 during his Grand Embassy and was likely informally exposed to Western mathematics during this time. The Cyrillic system was found to be inferior for calculating practical kinematic values, such as the trajectories and parabolic flight patterns of artillery. With its use, it was difficult to keep pace with Arabic numerals in the growing field of ballistics, whereas Western mathematicians such as John Napier had been publishing on the topic since 1614.
China
The Chinese Shang dynasty numerals from the 14th century BC predate the Indian Brahmi numerals by over 1000 years and show substantial similarity to the Brahmi numerals. Similar to the modern Arabic numerals, the Shang dynasty numeral system was also decimal-based and positional.
While positional Chinese numeral systems such as the counting rod system and Suzhou numerals had been in use prior to the introduction of modern Arabic numerals, the externally-developed system was eventually introduced to medieval China by the Hui people. In the early 17th century, European-style Arabic numerals were introduced by Spanish and Portuguese Jesuits.
Encoding
The ten Arabic numerals are encoded in virtually every character set designed for electric, radio, and digital communication, such as Morse code. They are encoded in ASCII (and therefore in Unicode encodings) at positions 0x30 to 0x39. Masking all but the four least-significant binary digits gives the value of the decimal digit, a design decision facilitating the digitization of text onto early computers. EBCDIC used a different offset, but also possessed the aforementioned masking property.
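As an illustration of this masking property, the following minimal Python sketch (the function name is ours, not drawn from the source or any standard library) recovers a digit's numeric value from its ASCII code:

def ascii_digit_value(ch):
    # ASCII encodes '0'..'9' at code points 0x30..0x39, so keeping only the
    # four least-significant bits of the code point yields the digit's value.
    code = ord(ch)
    if not 0x30 <= code <= 0x39:
        raise ValueError("not an ASCII digit")
    return code & 0x0F

# Example: ascii_digit_value('7') returns 7, since 0x37 & 0x0F == 0x07.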
See also
Arabic numeral variations
Regional variations in modern handwritten Arabic numerals
Seven-segment display
Text figures
Footnotes
Sources
Further reading
External links
Lam Lay Yong, "Development of Hindu Arabic and Traditional Chinese Arithmetic", Chinese Science 13 (1996): 35–54.
"Counting Systems and Numerals", Historyworld. Retrieved 11 December 2005.
The Evolution of Numbers. 16 April 2005.
O'Connor, J. J., and E. F. Robertson, Indian numerals . November 2000.
History of the numerals
Arabic numerals
Hindu–Arabic numerals
Numeral & Numbers' history and curiosities
Gerbert d'Aurillac's early use of Hindu–Arabic numerals at Convergence
Numerals | Arabic numerals | [
"Mathematics"
] | 2,416 | [
"Numeral systems",
"Numerals"
] |
1,797 | https://en.wikipedia.org/wiki/Acre | The acre ( ) is a unit of land area used in the British imperial and the United States customary systems. It is traditionally defined as the area of one chain by one furlong (66 by 660 feet), which is exactly equal to 10 square chains, of a square mile, 4,840 square yards, or 43,560 square feet, and approximately 4,047 m2, or about 40% of a hectare. Based upon the international yard and pound agreement of 1959, an acre may be declared as exactly 4,046.8564224 square metres. The acre is sometimes abbreviated ac but is usually spelled out as the word "acre".
Traditionally, in the Middle Ages, an acre was conceived of as the area of land that could be ploughed by one man using a team of eight oxen in one day.
The acre is still a statutory measure in the United States. Both the international acre and the US survey acre are in use, but they differ by only four parts per million (see below). The most common use of the acre is to measure tracts of land.
The acre is used in many established and former Commonwealth of Nations countries by custom. In a few, it continues as a statute measure, although not since 2010 in the UK, and not for decades in Australia, New Zealand, and South Africa. In many places where it is not a statute measure, it is still lawful to "use for trade" if given as supplementary information and is not used for land registration.
Description
One acre equals (0.0015625) square mile, 4,840 square yards, 43,560 square feet, or about (see below). While all modern variants of the acre contain 4,840 square yards, there are alternative definitions of a yard, so the exact size of an acre depends upon the particular yard on which it is based. Originally, an acre was understood as a strip of land sized at forty perches (660 ft, or 1 furlong) long and four perches (66 ft) wide; this may have also been understood as an approximation of the amount of land a yoke of oxen could plough in one day (a furlong being "a furrow long"). A square enclosing one acre is approximately 69.57 yards, or 208 feet 9 inches (), on a side. As a unit of measure, an acre has no prescribed shape; any area of 43,560 square feet is an acre.
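As a check on the quoted side length (arithmetic supplied for clarity): the square root of 43,560 square feet is approximately 208.71 feet, which is approximately 69.57 yards.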
US survey acres
In the international yard and pound agreement of 1959, the United States and five countries of the Commonwealth of Nations defined the international yard to be exactly 0.9144 metre. The US authorities decided that, while the refined definition would apply nationally in all other respects, the US survey foot (and thus the survey acre) would continue 'until such a time as it becomes desirable and expedient to readjust [it]'. By inference, an "international acre" may be calculated as exactly square metres but it does not have a basis in any international agreement.
Both the international acre and the US survey acre contain of a square mile or 4,840 square yards, but alternative definitions of a yard are used (see survey foot and survey yard), so the exact size of an acre depends upon the yard upon which it is based. The US survey acre is about 4,046.872 square metres; its exact value ( m2) is based on an inch defined by 1 metre = 39.37 inches exactly, as established by the Mendenhall Order of 1893. Surveyors in the United States use both international and survey feet, and consequently, both varieties of acre.
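Both figures follow directly from the respective yard definitions (arithmetic supplied for clarity): the international acre is 4,840 × 0.9144² = 4,046.8564224 square metres, while the US survey acre uses the survey yard of 3600/3937 m, giving 4,840 × (3600/3937)² ≈ 4,046.8726 square metres.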
Since the difference between the US survey acre and international acre (0.016 square metres, 160 square centimetres or 24.8 square inches) is only about a quarter of the size of an A4 sheet or US letter, it is usually not important which one is being discussed. Areas are seldom measured with sufficient accuracy for the different definitions to be detectable.
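The A4 comparison can be checked in the same way (arithmetic supplied for clarity): an A4 sheet measures 21.0 cm × 29.7 cm ≈ 624 cm², a quarter of which is about 156 cm², close to the roughly 160 cm² difference between the two acre definitions.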
In October 2019, the US National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to end the "temporary" continuance of the US survey foot, mile, and acre units (as permitted by their 1959 decision, above), with effect from the end of 2022.
Spanish acre
The Puerto Rican cuerda () is sometimes called the "Spanish acre" in the continental United States.
Use
The acre is commonly used in many current and former Commonwealth countries by custom, and in a few it continues as a statute measure. These include Antigua and Barbuda, American Samoa, The Bahamas, Belize, the British Virgin Islands, Canada, the Cayman Islands, Dominica, the Falkland Islands, Grenada, Ghana, Guam, the Northern Mariana Islands, Jamaica, Montserrat, Samoa, Saint Lucia, St. Helena, St. Kitts and Nevis, St. Vincent and the Grenadines, Turks and Caicos, the United Kingdom, the United States and the US Virgin Islands.
Republic of Ireland
In the Republic of Ireland, the hectare is legally used under European units of measurement directives; however, the acre (the same standard statute as used in the UK, not the old Irish acre, which was of a different size) is still widely used, especially in agriculture.
Indian subcontinent
In India, residential plots are measured in square feet or square metre, while agricultural land is measured in acres. In Sri Lanka, the division of an acre into 160 perches or 4 roods is common.
In Pakistan, residential plots are measured in (20 = 1 = 605 sq yards) and open/agriculture land measurement is in acres (8 = 1 acre) and (25 acres = 1 = 200 ), and .
United Kingdom
Its use as a primary unit for trade in the United Kingdom ceased to be permitted from 1 October 1995, due to the 1994 amendment of the Weights and Measures Act, under which it was replaced by the hectare, though its use as a supplementary unit continues to be permitted indefinitely. Land registration, which records the sale and possession of land, was initially exempt, but in 2010 HM Land Registry ended this exemption. The measure is still used to communicate with the public and informally (non-contract) by the farming and property industries.
Equivalence to other units of area
1 international acre is equal to the following metric units:
0.40468564224 hectare (A square with 100 m sides has an area of 1 hectare.)
4,046.8564224 square metres (or a square with approximately 63.61 m sides)
1 United States survey acre is equal to:
0.404687261 hectare
4,046.87261 square metres (1 square kilometre is equal to 247.105 acres)
1 acre (both variants) is equal to the following customary units:
66 feet × 660 feet (43,560 square feet)
10 square chains (1 chain = 66 feet = 22 yards = 4 rods = 100 links)
1 acre is approximately 208.71 feet × 208.71 feet (a square)
4,840 square yards
43,560 square feet
160 perches. A perch is equal to a square rod (1 square rod is 0.00625 acre)
4 roods
A furlong by a chain (furlong = 220 yards, chain = 22 yards)
40 rods by 4 rods, 160 rods2 (historically fencing was often sold in 40 rod lengths)
1/640 (0.0015625) square mile (1 square mile is equal to 640 acres)
Perhaps the easiest way for US residents to envision an acre is as a rectangle measuring 88 yards by 55 yards (1/10 of 880 yards by 1/16 of 880 yards), about the size of a standard American football field. To be more exact, one acre is 90.75% of a 100-yd-long by 53.33-yd-wide American football field (without the end zone). The full field, including the end zones, covers about 1.32 acres.
For residents of other countries, the acre might be envisioned as rather more than half of a football pitch.
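The customary and metric equivalences listed above, and the football-field comparison, all follow from the basic definition 1 acre = 66 ft × 660 ft; the short sketch below recomputes them using only the figures already quoted:

# Recompute the equivalences from 1 acre = 66 ft x 660 ft.
SQ_FT_PER_ACRE = 66 * 660               # 43,560 square feet
SQ_YD_PER_ACRE = SQ_FT_PER_ACRE / 9     # 4,840 square yards
ACRE_M2 = SQ_FT_PER_ACRE * 0.3048 ** 2  # 4,046.8564224 m2 (international)
ACRE_HA = ACRE_M2 / 10_000              # 0.40468564224 hectare

# Football-field comparison: playing area without end zones, 100 yd x 53.33 yd
field_sq_yd = 100 * (160 / 3)           # 53.33 yd is 160 ft / 3
print(round(SQ_YD_PER_ACRE / field_sq_yd, 4))   # 0.9075, i.e. 90.75%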
Historical origin
The word acre is derived from the Norman acre, attested for the first time in a text from Fécamp in 1006 with the meaning "agrarian measure". It goes back to Old Scandinavian akr "cultivated field, ploughed land", which is perpetuated in Icelandic akur and Faroese akur "field (of wheat)", Norwegian and Swedish åker, and Danish ager "field", and is cognate with German Acker, Dutch akker, Latin ager, Sanskrit ájra, and Greek agrós (ἀγρός). In English, an obsolete variant spelling was aker.
According to the Act on the Composition of Yards and Perches, dating from around 1300, an acre is "40 perches [rods] in length and four in breadth", meaning 220 yards by 22 yards. An acre was roughly the amount of land tillable by a yoke of oxen in one day.
Before the enactment of the metric system, many countries in Europe used their own official acres. In France, the traditional unit of area was the arpent carré, a measure based on the Roman system of land measurement.
The acre was used only in Normandy (and neighbouring places outside its traditional borders), but its value varied greatly across Normandy, ranging from 3,632 to 9,725 square metres, with 8,172 square metres being the most frequent value. Even within the same part of Normandy, for instance in the pays de Caux, farmers (still in the 20th century) distinguished between the larger acre (68 ares, 66 centiares) and the smaller one (56 to 65 ca). The Normandy acre was usually divided into 4 vergées (roods) and 160 square perches, like the English acre.
The Normandy acre was equal to 1.6 arpents, the arpent being the unit of area more commonly used in Northern France outside of Normandy. In Canada, the Paris arpent used in Quebec before the metric system was adopted is sometimes called the "French acre" in English, even though the Paris arpent and the Normandy acre were two very different units of area in ancient France (the Paris arpent became the unit of area of French Canada, whereas the Normandy acre was never used in French Canada).
In Germany, the Netherlands, and Eastern Europe the traditional unit of area was the morgen. Like the acre, the morgen was a unit of ploughland, representing a strip that could be ploughed by one man and an ox or horse in a morning. There were many variants of the morgen, differing in size between the different German territories. It was also used in Old Prussia, in the Balkans, Norway, and Denmark.
Statutory values for the acre were enacted in England, and subsequently the United Kingdom, by acts of:
Edward I
Edward III
Henry VIII
George IV
Queen Victoria – the British Weights and Measures Act of 1878 defined it as containing 4,840 square yards.
Historically, the size of farms and landed estates in the United Kingdom was usually expressed in acres (or acres, roods, and perches), even if the number of acres was so large that it might conveniently have been expressed in square miles. For example, a certain landowner might have been said to own 32,000 acres of land, not 50 square miles of land.
The acre is related to the square mile, with 640 acres making up one square mile. One mile is 5280 feet (1760 yards). In western Canada and the western United States, divisions of land area were typically based on the square mile, and fractions thereof. If the square mile is divided into quarters, each quarter has a side length of half a mile (880 yards) and is a quarter of a square mile in area, or 160 acres. These subunits are typically then again divided into quarters, with each side being a quarter of a mile long and each being one-sixteenth of a square mile in area, or 40 acres. In the United States, farmland was typically divided as such, and the phrase "the back 40" refers to the 40-acre parcel to the back of the farm. Most of the Canadian Prairie Provinces and the US Midwest are on square-mile grids for surveying purposes.
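The subdivision arithmetic is simple enough to verify directly; the sketch below just restates the figures in the paragraph above:

# Square-mile ("section") subdivision used in western North America.
YD_PER_MILE = 1760
SQ_YD_PER_ACRE = 4840

quarter_section_side_yd = YD_PER_MILE / 2             # 880 yards (half a mile)
quarter_section_acres = quarter_section_side_yd ** 2 / SQ_YD_PER_ACRE
print(quarter_section_acres)                           # 160.0 acres
print(quarter_section_acres / 4)                       # 40.0 acres, "the back 40"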
Legacy units
Customary acre – The customary acre was roughly similar to the Imperial acre, but it was subject to considerable local variation similar to the variation in carucates, virgates, bovates, nooks, and farundels. These may have been multiples of the customary acre, rather than the statute acre.
Builder's acre = an even 40,000 square feet (or 200 feet by 200 feet), used in US real-estate development to simplify the math and for marketing. It is nearly 10% smaller than a survey acre, and the discrepancy has led to lawsuits alleging misrepresentation.
Feddan - a Middle Eastern unit of area, equal to about 4,200 square metres.
Scottish acre = 1.3 Imperial acres (5,080 m2, an obsolete Scottish measurement)
Irish acre = 7,840 square yards
Cheshire acre = 10,240 square yards
Stremma or Greek acre ≈ 10,000 square Greek feet, but now set at exactly 1,000 square metres (a similar unit was the zeugarion)
Dunam or Turkish acre ≈ 1,600 square Turkish paces, but now set at exactly 1,000 square metres (a similar unit was the çift)
Actus quadratus or Roman acre ≈ 14,400 square Roman feet (about 1,260 square metres)
God's Acre – a synonym for a churchyard.
Long acre – the grass strip on either side of a road that may be used for illicit grazing.
Town acre was a term used in the early 19th century in the planning of towns on a grid plan, such as Adelaide, South Australia, and Wellington, New Plymouth and Nelson in New Zealand. The land was divided into plots of an Imperial acre, and these became known as town acres.
See also
Acre-foot – used in US to measure a large water volume
Anthropic units
Conversion of units
French arpent – used in Louisiana to measure length and area
Jugerum
a Morgen ("morning") of land is normally half of a Tagwerk ("day's work") of ploughing with an ox
Public Land Survey System
Quarter acre
Section (United States land surveying)
Spanish customary units
Chinese acre
Notes
References
External links
The Units of Measurement Regulations 1995 (United Kingdom)
Customary units of measurement in the United States
Imperial units
Surveying
Units of area | Acre | [
"Mathematics",
"Engineering"
] | 2,920 | [
"Units of area",
"Quantity",
"Surveying",
"Civil engineering",
"Units of measurement"
] |
1,800 | https://en.wikipedia.org/wiki/Adenosine%20triphosphate | Adenosine triphosphate (ATP) is a nucleoside triphosphate that provides energy to drive and support many processes in living cells, such as muscle contraction, nerve impulse propagation, and chemical synthesis. Found in all known forms of life, it is often referred to as the "molecular unit of currency" for intracellular energy transfer.
When consumed in a metabolic process, ATP converts either to adenosine diphosphate (ADP) or to adenosine monophosphate (AMP). Other processes regenerate ATP. It is also a precursor to DNA and RNA, and is used as a coenzyme. An average adult human processes around 50 kilograms (about 100 moles) daily.
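That mass-to-moles figure is easy to sanity-check. In this sketch the molar mass of ATP (about 507 g/mol) is an assumed value from general chemistry, not a figure stated in the passage:

# Rough check: 100 mol of ATP per day expressed as mass.
ATP_MOLAR_MASS = 507.18          # g/mol, free-acid form of ATP (assumed)
moles_per_day = 100
kg_per_day = moles_per_day * ATP_MOLAR_MASS / 1000
print(round(kg_per_day, 1))      # about 50.7 kg, matching "around 50 kilograms"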
From the perspective of biochemistry, ATP is classified as a nucleoside triphosphate, which indicates that it consists of three components: a nitrogenous base (adenine), the sugar ribose, and the triphosphate.
Structure
ATP consists of an adenine attached by the 9-nitrogen atom to the 1′ carbon atom of a sugar (ribose), which in turn is attached at the 5′ carbon atom of the sugar to a triphosphate group. In its many reactions related to metabolism, the adenine and sugar groups remain unchanged, but the triphosphate is converted to di- and monophosphate, giving respectively the derivatives ADP and AMP. The three phosphoryl groups are labeled as alpha (α), beta (β), and, for the terminal phosphate, gamma (γ).
In neutral solution, ionized ATP exists mostly as ATP4−, with a small proportion of ATP3−.
Metal cation binding
Polyanionic and featuring a potentially chelating polyphosphate group, ATP binds metal cations with high affinity; the binding constant for Mg2+ is correspondingly large. The binding of a divalent cation, almost always magnesium, strongly affects the interaction of ATP with various proteins. Due to the strength of the ATP-Mg2+ interaction, ATP exists in the cell mostly as a complex with Mg2+ bonded to the phosphate oxygen centers.
A second magnesium ion is critical for ATP binding in the kinase domain. The presence of Mg2+ regulates kinase activity. It is interesting from an RNA world perspective that ATP can carry a Mg ion which catalyzes RNA polymerization.
Chemical properties
Salts of ATP can be isolated as colorless solids.
ATP is stable in aqueous solutions between pH 6.8 and 7.4 (in the absence of catalysts). At more extreme pH levels, it rapidly hydrolyses to ADP and phosphate. Living cells maintain the ratio of ATP to ADP at a point ten orders of magnitude from equilibrium, with ATP concentrations fivefold higher than the concentration of ADP. In the context of biochemical reactions, the P-O-P bonds are frequently referred to as high-energy bonds.
Reactive aspects
The hydrolysis of ATP into ADP and inorganic phosphate
ATP(aq) + H2O(l) → ADP(aq) + HPO42−(aq) + H+(aq)
releases 20.5 kJ/mol (4.9 kcal/mol) of enthalpy. This may differ under physiological conditions if the reactant and products are not exactly in these ionization states. The values of the free energy released by cleaving either a phosphate (Pi) or a pyrophosphate (PPi) unit from ATP at standard state concentrations of 1 mol/L at pH 7 are:
ATP + → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol)
ATP + → AMP + PPi ΔG°' = −45.6 kJ/mol (−10.9 kcal/mol)
These abbreviated equations at a pH near 7 can be written more explicitly (R = adenosyl):
[RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-P(O)2-O-PO3]3− + [HPO4]2− + H+
[RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-PO3]2− + [HO3P-O-PO3]3− + H+
At cytoplasmic conditions, where the ADP/ATP ratio is 10 orders of magnitude from equilibrium, the ΔG is around −57 kJ/mol.
Along with pH, the free energy change of ATP hydrolysis is also associated with Mg2+ concentration, from ΔG°' = −35.7 kJ/mol at a Mg2+ concentration of zero, to ΔG°' = −31 kJ/mol at [Mg2+] = 5 mM. Higher concentrations of Mg2+ decrease free energy released in the reaction due to binding of Mg2+ ions to negatively charged oxygen atoms of ATP at pH 7.
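The standard and cellular values can be related through ΔG = ΔG°′ + RT·ln([ADP][Pi]/[ATP]). The sketch below evaluates this relation; the cytoplasmic concentrations used are illustrative assumptions chosen to show how a value near −57 kJ/mol arises, not figures stated above:

from math import log

R = 8.314e-3          # kJ/(mol*K)
T = 310.15            # 37 degrees C in kelvin
dG0 = -30.5           # kJ/mol, standard free energy of ATP -> ADP + Pi

# Illustrative cytoplasmic concentrations in mol/L (assumed for this sketch).
atp, adp, pi = 5e-3, 5e-5, 5e-3
dG = dG0 + R * T * log(adp * pi / atp)
print(round(dG, 1))   # about -56 kJ/mol, close to the quoted ~ -57 kJ/mol

# Equivalently, a mass-action ratio ten orders of magnitude below the
# equilibrium constant contributes RT*ln(1e-10), about -59 kJ/mol at 37 degrees C.
print(round(R * T * log(1e-10), 1))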
Production from AMP and ADP
Production, aerobic conditions
A typical intracellular concentration of ATP may be 1–10 μmol per gram of tissue in a variety of eukaryotes. The dephosphorylation of ATP and rephosphorylation of ADP and AMP occur repeatedly in the course of aerobic metabolism.
ATP can be produced by a number of distinct cellular processes; the three main pathways in eukaryotes are (1) glycolysis, (2) the citric acid cycle/oxidative phosphorylation, and (3) beta-oxidation. The overall process of oxidizing glucose to carbon dioxide, the combination of pathways 1 and 2, known as cellular respiration, produces about 30 equivalents of ATP from each molecule of glucose.
ATP production by a non-photosynthetic aerobic eukaryote occurs mainly in the mitochondria, which comprise nearly 25% of the volume of a typical cell.
Glycolysis
In glycolysis, glucose and glycerol are metabolized to pyruvate. Glycolysis generates two equivalents of ATP through substrate phosphorylation catalyzed by two enzymes, phosphoglycerate kinase (PGK) and pyruvate kinase. Two equivalents of nicotinamide adenine dinucleotide (NADH) are also produced, which can be oxidized via the electron transport chain and result in the generation of additional ATP by ATP synthase. The pyruvate generated as an end-product of glycolysis is a substrate for the Krebs Cycle.
Glycolysis is viewed as consisting of two phases with five steps each. In phase 1, "the preparatory phase", glucose is converted to 2 d-glyceraldehyde-3-phosphate (g3p). One ATP is invested in Step 1, and another ATP is invested in Step 3. Steps 1 and 3 of glycolysis are referred to as "Priming Steps". In Phase 2, two equivalents of g3p are converted to two pyruvates. In Step 7, two ATP are produced. Also, in Step 10, two further equivalents of ATP are produced. In Steps 7 and 10, ATP is generated from ADP. A net of two ATPs is formed in the glycolysis cycle. The glycolysis pathway is later associated with the Citric Acid Cycle which produces additional equivalents of ATP.
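The per-step ATP bookkeeping in that description can be tallied directly; the sketch below simply restates the investments and payoffs named above:

# ATP ledger for one glucose molecule through glycolysis.
invested = {"step 1 (hexokinase)": 1, "step 3 (phosphofructokinase)": 1}
produced = {"step 7 (x2 G3P)": 2, "step 10 (x2 G3P)": 2}
net_atp = sum(produced.values()) - sum(invested.values())
print(net_atp)        # 2, the net ATP yield of glycolysis per glucose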
Regulation
In glycolysis, hexokinase is directly inhibited by its product, glucose-6-phosphate, and pyruvate kinase is inhibited by ATP itself. The main control point for the glycolytic pathway is phosphofructokinase (PFK), which is allosterically inhibited by high concentrations of ATP and activated by high concentrations of AMP. The inhibition of PFK by ATP is unusual since ATP is also a substrate in the reaction catalyzed by PFK; the active form of the enzyme is a tetramer that exists in two conformations, only one of which binds the second substrate fructose-6-phosphate (F6P). The protein has two binding sites for ATP – the active site is accessible in either protein conformation, but ATP binding to the inhibitor site stabilizes the conformation that binds F6P poorly. A number of other small molecules can compensate for the ATP-induced shift in equilibrium conformation and reactivate PFK, including cyclic AMP, ammonium ions, inorganic phosphate, and fructose-1,6- and -2,6-bisphosphate.
Citric acid cycle
In the mitochondrion, pyruvate is oxidized by the pyruvate dehydrogenase complex to the acetyl group, which is fully oxidized to carbon dioxide by the citric acid cycle (also known as the Krebs cycle). Every "turn" of the citric acid cycle produces two molecules of carbon dioxide, one equivalent of ATP or guanosine triphosphate (GTP) through substrate-level phosphorylation catalyzed by succinyl-CoA synthetase, as succinyl-CoA is converted to succinate, three equivalents of NADH, and one equivalent of FADH2. NADH and FADH2 are recycled (to NAD+ and FAD, respectively) by oxidative phosphorylation, generating additional ATP. The oxidation of NADH results in the synthesis of 2–3 equivalents of ATP, and the oxidation of one FADH2 yields 1–2 equivalents of ATP. The majority of cellular ATP is generated by this process. Although the citric acid cycle itself does not involve molecular oxygen, it is an obligately aerobic process because O2 is used to recycle the NADH and FADH2. In the absence of oxygen, the citric acid cycle ceases.
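Combining these per-turn figures with glycolysis reproduces the total of about 30 ATP per glucose quoted earlier. The sketch below uses conversion factors of roughly 2.5 ATP per NADH and 1.5 per FADH2, which are assumed modern consensus values rather than figures stated in the passage; the shuttle distinction it relies on is discussed in the next paragraph:

ATP_PER_NADH, ATP_PER_FADH2 = 2.5, 1.5    # assumed consensus values

glycolysis_atp = 2                         # net substrate-level ATP
cytosolic_nadh = 2                         # produced by glycolysis
pdh_nadh = 2                               # pyruvate -> acetyl-CoA, twice per glucose
tca_per_glucose = 2 * (1 + 3 * ATP_PER_NADH + ATP_PER_FADH2)   # two turns

# Cytosolic NADH must enter the mitochondrion via a shuttle: the
# glycerol-phosphate shuttle delivers electrons at the FADH2 level,
# the malate-aspartate shuttle at the NADH level.
total_gp = glycolysis_atp + cytosolic_nadh * ATP_PER_FADH2 + pdh_nadh * ATP_PER_NADH + tca_per_glucose
total_ma = glycolysis_atp + cytosolic_nadh * ATP_PER_NADH + pdh_nadh * ATP_PER_NADH + tca_per_glucose
print(total_gp, total_ma)                  # 30.0 and 32.0 ATP equivalents per glucose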
The generation of ATP by the mitochondrion from cytosolic NADH relies on the malate-aspartate shuttle (and to a lesser extent, the glycerol-phosphate shuttle) because the inner mitochondrial membrane is impermeable to NADH and NAD+. Instead of transferring the generated NADH, a malate dehydrogenase enzyme converts oxaloacetate to malate, which is translocated to the mitochondrial matrix. Another malate dehydrogenase-catalyzed reaction occurs in the opposite direction, producing oxaloacetate and NADH from the newly transported malate and the mitochondrion's interior store of NAD+. A transaminase converts the oxaloacetate to aspartate for transport back across the membrane and into the intermembrane space.
In oxidative phosphorylation, the passage of electrons from NADH and FADH2 through the electron transport chain releases the energy to pump protons out of the mitochondrial matrix and into the intermembrane space. This pumping generates a proton motive force that is the net effect of a pH gradient and an electric potential gradient across the inner mitochondrial membrane. Flow of protons down this potential gradient – that is, from the intermembrane space to the matrix – yields ATP by ATP synthase. Three ATP are produced per turn.
Although oxygen consumption appears fundamental for the maintenance of the proton motive force, in the event of oxygen shortage (hypoxia), intracellular acidosis (mediated by enhanced glycolytic rates and ATP hydrolysis), contributes to mitochondrial membrane potential and directly drives ATP synthesis.
Most of the ATP synthesized in the mitochondria will be used for cellular processes in the cytosol; thus it must be exported from its site of synthesis in the mitochondrial matrix. ATP outward movement is favored by the membrane's electrochemical potential because the cytosol has a relatively positive charge compared to the relatively negative matrix. For every ATP transported out, it costs 1 H+. Producing one ATP costs about 3 H+. Therefore, making and exporting one ATP requires 4H+. The inner membrane contains an antiporter, the ADP/ATP translocase, which is an integral membrane protein used to exchange newly synthesized ATP in the matrix for ADP in the intermembrane space.
Regulation
The citric acid cycle is regulated mainly by the availability of key substrates, particularly the ratio of NAD+ to NADH and the concentrations of calcium, inorganic phosphate, ATP, ADP, and AMP. Citrate – the ion that gives its name to the cycle – is a feedback inhibitor of citrate synthase and also inhibits PFK, providing a direct link between the regulation of the citric acid cycle and glycolysis.
Beta oxidation
In the presence of air and various cofactors and enzymes, fatty acids are converted to acetyl-CoA. The pathway is called beta-oxidation. Each cycle of beta-oxidation shortens the fatty acid chain by two carbon atoms and produces one equivalent each of acetyl-CoA, NADH, and FADH2. The acetyl-CoA is metabolized by the citric acid cycle to generate ATP, while the NADH and FADH2 are used by oxidative phosphorylation to generate ATP. Dozens of ATP equivalents are generated by the beta-oxidation of a single long acyl chain.
Regulation
In oxidative phosphorylation, the key control point is the reaction catalyzed by cytochrome c oxidase, which is regulated by the availability of its substrate – the reduced form of cytochrome c. The amount of reduced cytochrome c available is directly related to the amounts of other substrates:
1/2 NADH + cyt c (oxidized) + ADP + Pi ⇌ 1/2 NAD+ + cyt c (reduced) + ATP
which directly implies this equation:
[cyt c (reduced)] / [cyt c (oxidized)] = Keq × ([NADH] / [NAD+])^(1/2) × ([ADP][Pi] / [ATP])
Thus, a high ratio of [NADH] to [NAD+] or a high ratio of [ADP][Pi] to [ATP] implies a high amount of reduced cytochrome c and a high level of cytochrome c oxidase activity. An additional level of regulation is introduced by the transport rates of ATP and NADH between the mitochondrial matrix and the cytoplasm.
Ketosis
Ketone bodies can be used as fuels, yielding 22 ATP and 2 GTP molecules per acetoacetate molecule when oxidized in the mitochondria. Ketone bodies are transported from the liver to other tissues, where acetoacetate and beta-hydroxybutyrate can be reconverted to acetyl-CoA to produce reducing equivalents (NADH and FADH2), via the citric acid cycle. Ketone bodies cannot be used as fuel by the liver, because the liver lacks the enzyme β-ketoacyl-CoA transferase, also called thiophorase. Acetoacetate in low concentrations is taken up by the liver and undergoes detoxification through the methylglyoxal pathway which ends with lactate. Acetoacetate in high concentrations is absorbed by cells other than those in the liver and enters a different pathway via 1,2-propanediol. Though the pathway follows a different series of steps requiring ATP, 1,2-propanediol can be turned into pyruvate.
Production, anaerobic conditions
Fermentation is the metabolism of organic compounds in the absence of air. It involves substrate-level phosphorylation in the absence of a respiratory electron transport chain. The equation for the reaction of glucose to form lactic acid is:
C6H12O6 + 2 ADP + 2 Pi → 2 CH3CH(OH)COOH + 2 ATP + 2 H2O
Anaerobic respiration is respiration in the absence of O2. Prokaryotes can utilize a variety of electron acceptors. These include nitrate, sulfate, and carbon dioxide.
ATP replenishment by nucleoside diphosphate kinases
ATP can also be synthesized through several so-called "replenishment" reactions catalyzed by the enzyme families of nucleoside diphosphate kinases (NDKs), which use other nucleoside triphosphates as a high-energy phosphate donor, and the ATP:guanido-phosphotransferase family.
ATP production during photosynthesis
In plants, ATP is synthesized in the thylakoid membrane of the chloroplast. The process is called photophosphorylation. The "machinery" is similar to that in mitochondria except that light energy is used to pump protons across a membrane to produce a proton-motive force. ATP synthesis by ATP synthase then proceeds exactly as in oxidative phosphorylation. Some of the ATP produced in the chloroplasts is consumed in the Calvin cycle, which produces triose sugars.
ATP recycling
The total quantity of ATP in the human body is about 0.1 mol/L. The majority of ATP is recycled from ADP by the aforementioned processes. Thus, at any given time, the total amount of ATP + ADP remains fairly constant.
The energy used by human cells in an adult requires the hydrolysis of 100 to 150 mol/L of ATP daily, which means a human will typically use their body weight worth of ATP over the course of the day. Each equivalent of ATP is recycled 1000–1500 times during a single day (100 / 0.1 = 1,000 and 150 / 0.1 = 1,500), at approximately 9×10^20 molecules/s.
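Those recycling figures can be cross-checked with a line of arithmetic; Avogadro's number is the only constant brought in from outside the passage:

AVOGADRO = 6.022e23
moles_per_day = 125                       # midpoint of the 100-150 range above
molecules_per_second = moles_per_day * AVOGADRO / 86_400
print(f"{molecules_per_second:.1e}")      # about 8.7e20, matching ~9x10^20 per second

turnovers_per_day = moles_per_day / 0.1   # total pool of roughly 0.1 mol
print(turnovers_per_day)                  # 1250.0, within the 1000-1500 range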
Biochemical functions
Intracellular signaling
ATP is involved in signal transduction by serving as substrate for kinases, enzymes that transfer phosphate groups. Kinases are the most common ATP-binding proteins. They share a small number of common folds. Phosphorylation of a protein by a kinase can activate a cascade such as the mitogen-activated protein kinase cascade.
ATP is also a substrate of adenylate cyclase, most commonly in G protein-coupled receptor signal transduction pathways, and is transformed into the second messenger cyclic AMP, which is involved in triggering calcium signals by the release of calcium from intracellular stores. This form of signal transduction is particularly important in brain function, although it is involved in the regulation of a multitude of other cellular processes.
DNA and RNA synthesis
ATP is one of four monomers required in the synthesis of RNA. The process is promoted by RNA polymerases. A similar process occurs in the formation of DNA, except that ATP is first converted to the deoxyribonucleotide dATP. Like many condensation reactions in nature, DNA replication and DNA transcription also consume ATP.
Amino acid activation in protein synthesis
Aminoacyl-tRNA synthetase enzymes consume ATP in the attachment of tRNA to amino acids, forming aminoacyl-tRNA complexes. Aminoacyl transferase binds AMP-amino acid to tRNA. The coupling reaction proceeds in two steps:
aa + ATP ⟶ aa-AMP + PPi
aa-AMP + tRNA ⟶ aa-tRNA + AMP
The amino acid is coupled to the penultimate nucleotide at the 3′-end of the tRNA (the A in the sequence CCA) via an ester bond.
ATP binding cassette transporter
Transporting chemicals out of a cell against a gradient is often associated with ATP hydrolysis. Transport is mediated by ATP binding cassette transporters. The human genome encodes 48 ABC transporters, that are used for exporting drugs, lipids, and other compounds.
Extracellular signalling and neurotransmission
Cells secrete ATP to communicate with other cells in a process called purinergic signalling. ATP serves as a neurotransmitter in many parts of the nervous system, modulates ciliary beating, affects vascular oxygen supply, and more. ATP is either secreted directly across the cell membrane through channel proteins or is pumped into vesicles which then fuse with the membrane. Cells detect ATP using the purinergic receptor proteins P2X and P2Y. ATP has been shown to be a critically important signalling molecule for microglia-neuron interactions in the adult brain, as well as during brain development. Furthermore, tissue-injury-induced ATP signalling is a major factor in rapid microglial phenotype changes.
Muscle contraction
ATP fuels muscle contractions. Muscle contractions are regulated by signaling pathways, with different muscle types being regulated by specific pathways and stimuli based on their particular function. However, in all muscle types, contraction is performed by the proteins actin and myosin.
ATP is initially bound to myosin. When ATPase hydrolyzes the bound ATP into ADP and inorganic phosphate, myosin is positioned in a way that it can bind to actin. Myosin bound by ADP and Pi forms cross-bridges with actin and the subsequent release of ADP and Pi releases energy as the power stroke. The power stroke causes actin filament to slide past the myosin filament, shortening the muscle and causing a contraction. Another ATP molecule can then bind to myosin, releasing it from actin and allowing this process to repeat.
Protein solubility
ATP has recently been proposed to act as a biological hydrotrope and has been shown to affect proteome-wide solubility.
Abiogenic origins
Acetyl phosphate (AcP), a precursor to ATP, can readily be synthesized at modest yields from thioacetate at pH 7 and 20 °C and at pH 8 and 50 °C, although acetyl phosphate is less stable in warmer temperatures and alkaline conditions than in cooler and acidic-to-neutral conditions. It is unable to promote polymerization of ribonucleotides and amino acids and was only capable of phosphorylation of organic compounds. It was shown that AcP can promote aggregation and stabilization of AMP in the presence of Na+, and that aggregation of nucleotides could promote polymerization above 75 °C in the absence of Na+. It is possible that polymerization promoted by AcP could occur at mineral surfaces. It was shown that ADP can only be phosphorylated to ATP by AcP, and that other nucleoside triphosphates were not phosphorylated by AcP. This might explain why all lifeforms use ATP to drive biochemical reactions.
ATP analogues
Biochemistry laboratories often use in vitro studies to explore ATP-dependent molecular processes. ATP analogs are also used in X-ray crystallography to determine a protein structure in complex with ATP, often together with other substrates.
Enzyme inhibitors of ATP-dependent enzymes such as kinases are needed to examine the binding sites and transition states involved in ATP-dependent reactions.
Most useful ATP analogs cannot be hydrolyzed as ATP would be; instead, they trap the enzyme in a structure closely related to the ATP-bound state. Adenosine 5′-(γ-thiotriphosphate) is an extremely common ATP analog in which one of the gamma-phosphate oxygens is replaced by a sulfur atom; this anion is hydrolyzed at a dramatically slower rate than ATP itself and functions as an inhibitor of ATP-dependent processes. In crystallographic studies, hydrolysis transition states are modeled by the bound vanadate ion.
Caution is warranted in interpreting the results of experiments using ATP analogs, since some enzymes can hydrolyze them at appreciable rates at high concentration.
Medical use
ATP is used intravenously for some heart related conditions.
History
ATP was discovered in 1929 by Karl Lohmann and Jendrassik and, independently, by Cyrus Fiske and Yellapragada Subba Rao of Harvard Medical School, both teams competing against each other to find an assay for phosphorus.
It was proposed to be the intermediary between energy-yielding and energy-requiring reactions in cells by Fritz Albert Lipmann in 1941.
It was first synthesized in the laboratory by Alexander Todd in 1948, and he was awarded the Nobel Prize in Chemistry in 1957 partly for this work.
The 1978 Nobel Prize in Chemistry was awarded to Peter Dennis Mitchell for the discovery of the chemiosmotic mechanism of ATP synthesis.
The 1997 Nobel Prize in Chemistry was divided, one half jointly to Paul D. Boyer and John E. Walker "for their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)" and the other half to Jens C. Skou "for the first discovery of an ion-transporting enzyme, Na+, K+ -ATPase."
See also
Adenosine-tetraphosphatase
Adenosine methylene triphosphate
ATPases
ATP test
Creatine
Cyclic adenosine monophosphate (cAMP)
Nucleotide exchange factor
Phosphagen
References
External links
ATP bound to proteins in the PDB
ScienceAid: Energy ATP and Exercise
PubChem entry for Adenosine Triphosphate
KEGG entry for Adenosine Triphosphate
Adenosine receptor agonists
Cellular respiration
Coenzymes
Ergogenic aids
Exercise physiology
Neurotransmitters
Nucleotides
Phosphate esters
Purinergic signalling
Purines
Substances discovered in the 1920s | Adenosine triphosphate | [
"Chemistry",
"Biology"
] | 5,172 | [
"Cellular respiration",
"Coenzymes",
"Neurotransmitters",
"Organic compounds",
"Biochemistry",
"Neurochemistry",
"Metabolism"
] |
1,805 | https://en.wikipedia.org/wiki/Antibiotic | An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the ones which cause the common cold or influenza. Drugs which inhibit growth of viruses are termed antiviral drugs or antivirals. Antibiotics are also not effective against fungi. Drugs which inhibit growth of fungi are called antifungal drugs.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same effect of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include bactericides, bacteriostatics, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany. However, the effectiveness and easy access to antibiotics have also led to their overuse and some bacteria have evolved resistance to them. Antimicrobial resistance (AMR), a naturally occurring process, is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The World Health Organization has classified AMR as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country". Each year, nearly 5 million deaths are associated with AMR globally. Global deaths attributable to AMR numbered 1.27 million in 2019.
Etymology
The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1947.
The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution. This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not.
The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively", which comes from βίωσις (biōsis), "way of life", and that from βίος (bios), "life". The term "antibacterial" derives from Greek ἀντί (anti), "against" + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane", because the first bacteria to be discovered were rod-shaped.
Usage
Medical uses
Antibiotics are used to treat or prevent bacterial infections, and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted. This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days.
When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance. To avoid surgery, antibiotics may be given for non-complicated acute appendicitis.
Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery. Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia particularly cancer-related.
The use of antibiotics for secondary prevention of coronary heart disease is not supported by current scientific evidence, and may actually increase cardiovascular mortality, all-cause mortality and the occurrence of stroke.
Routes of administration
There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving high and sustained concentration of antibiotic at the site of infection; reducing the potential for systemic absorption and toxicity, and total volumes of antibiotic required are reduced, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring. It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose.
Global consumption
Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption published in 2018 analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption, with a rate of 64.4, and Burundi had the lowest, at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed antibiotics.
Side effects
Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide extent of adverse side effects ranging from mild to very severe depending on the type of antibiotic used, the microbes targeted, and the individual patient. Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions. Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis.
Common side effects of oral antibiotics include diarrhea, which results from disruption of the species composition of the intestinal flora and can lead, for example, to overgrowth of pathogenic bacteria such as Clostridioides difficile. Taking probiotics during the course of antibiotic treatment can help prevent antibiotic-associated diarrhea. Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid.
Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic cells, including human cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones. Antibiotics are also known to affect chloroplasts.
Interactions
Birth control pills
There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure. The majority of studies indicate antibiotics do not interfere with birth control pills; clinical studies suggest that the failure rate of contraceptive pills caused by antibiotics is very low (about 1%). Situations that may increase the risk of oral contraceptive failure include non-compliance (missing doses), vomiting, and diarrhea; gastrointestinal disorders or interpatient variability in oral contraceptive absorption can also affect ethinylestradiol serum levels. Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended.
In cases where antibiotics have been suggested to affect the efficiency of birth control pills, such as for the broad-spectrum antibiotic rifampicin, these cases may be due to an increase in the activity of hepatic enzymes causing increased breakdown of the pill's active ingredients. Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives. More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, prior to dismissing the need for backup contraception.
Alcohol
Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered.
Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone, cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath. In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption. Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound.
Pharmacodynamics
The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, and antibacterials have also been shown to eliminate bacterial infection in clinical settings. Since the activity of antibacterials depends frequently on their concentration, in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial.
To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy.
Combination therapy
In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome as the combined effect of both antibiotics is better than their individual effect. Fosfomycin has the highest number of synergistic combinations among antibiotics and is almost always used as a partner drug. Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin. Antibiotics used in combination may also be antagonistic and the combined effects of the two antibiotics may be less than if one of the antibiotics was given as a monotherapy. For example, chloramphenicol and tetracyclines are antagonists to penicillins. However, this can vary depending on the species of bacteria. In general, combinations of a bacteriostatic antibiotic and bactericidal antibiotic are antagonistic.
In addition to combining one antibiotic with another, antibiotics are sometimes co-administered with resistance-modifying agents. For example, β-lactam antibiotics may be used in combination with β-lactamase inhibitors, such as clavulanic acid or sulbactam, when a patient is infected with a β-lactamase-producing strain of bacteria.
Classes
Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities, killing the bacteria. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic, inhibiting further growth (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering classes of antibacterial compounds, four new classes of antibiotics were introduced to clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin).
Production
With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds. These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons.
Since the first pioneering efforts of Howard Florey and Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions.
Resistance
Antimicrobial resistance (AMR or AR) is a naturally occurring process. AMR is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The emergence of antibiotic-resistant bacteria is a common phenomenon mainly caused by the overuse/misuse. It represents a threat to health globally. Each year, nearly 5 million deaths are associated with AMR globally.
Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment. Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains.
Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces.
The survival of bacteria often results from an inheritable resistance, but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use.
Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria.
Paleontological data show that both antibiotics and antibiotic resistance are ancient compounds and mechanisms. Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability.
Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound.
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were, for a while, well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. For example, NDM-1 is a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials. The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections." On 26 May 2016, an E. coli "superbug" was identified in the United States resistant to colistin, "the last line of defence" antibiotic.
In recent years, even anaerobic bacteria, historically considered less concerning in terms of resistance, have demonstrated high rates of antibiotic resistance, particularly Bacteroides, for which resistance rates to penicillin have been reported to exceed 90%.
Misuse
Per The ICU Book, "The first rule of antibiotics is to try not to use them, and the second rule is try not to use too many of them." Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. However, potential harm from antibiotics extends beyond selection of antimicrobial resistance and their overuse is associated with adverse effects for patients themselves, seen most clearly in critically ill patients in Intensive care units. Self-prescribing of antibiotics is an example of misuse. Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics.
Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. Inappropriate antibiotic treatment, for example, is their prescription to treat viral infections such as the common cold. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics. The lack of rapid point of care diagnostic tests, particularly in resource-limited settings is considered one of the drivers of antibiotic misuse.
Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics. The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies. A non-governmental organization campaign group is Keep Antibiotics Working. In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children.
The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production. However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association.
Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year.
There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations.
Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse.
Other forms of antibiotic-associated harm include anaphylaxis, drug toxicity most notably kidney and liver damage, and super-infections with resistant organisms. Antibiotics are also known to affect mitochondrial function, and this may contribute to the bioenergetic failure of immune cells seen in sepsis. They also alter the microbiome of the gut, lungs, and skin, which may be associated with adverse effects such as Clostridioides difficile associated diarrhoea. Whilst antibiotics can clearly be lifesaving in patients with bacterial infections, their overuse, especially in patients where infections are hard to diagnose, can lead to harm via multiple mechanisms.
History
Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago. Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections. Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source.
The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes. Various essential oils have also been shown to have antimicrobial properties, and the plants from which these oils are derived can be used as niche antimicrobial agents.
Synthetic antibiotics derived from dyes
Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would colour human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan, now called arsphenamine.
This heralded the era of antibacterial treatment, which began with the discovery of a series of arsenic-derived synthetic antibiotics by Alfred Bertheim and Ehrlich in 1907. Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910, Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden. The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine. The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology. Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913.
The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany, for which Domagk received the 1939 Nobel Prize in Physiology or Medicine. Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials.
Penicillin and other natural antibiotics
Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics".
In 1874, physician Sir William Roberts noted that cultures of the mould Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination.
In 1895, the Italian physician Vincenzo Tiberio published a paper on the antibacterial power of some extracts of mold.
In 1897, doctoral student Ernest Duchesne submitted a dissertation, "Contribution to the study of vital competition in micro-organisms: antagonism between moulds and microbes", the first known scholarly work to consider the therapeutic capabilities of moulds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and moulds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Duchesne's army service after getting his degree prevented him from doing any further research. Duchesne died of tuberculosis, a disease now treated by antibiotics.
In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain moulds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium rubens, in one of his culture plates. He observed that the presence of the mould killed or prevented the growth of the bacteria. Fleming postulated that the mould must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterised some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists.
Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942 and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides described above. The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming.
Florey credited René Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War.
Late 20th century
During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003.
Antibiotic pipeline
Both the WHO and the Infectious Diseases Society of America report that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. The Infectious Diseases Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli then in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli. According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1–3 clinical trials as of May 2017. Antibiotics targeting multidrug-resistant Gram-positive pathogens remain a high priority.
A few antibiotics have received marketing authorization in the last seven years. The cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin have been approved for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia. The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis. New cephalosporin–β-lactamase inhibitor combinations that have been approved include ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection.
Possible improvements include clarification of clinical trial regulations by the FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor. In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast-tracking the development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, the FDA can approve antibiotics and antifungals for life-threatening infections on the basis of smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."
Replenishing the antibiotic pipeline and developing other new therapies
Because antibiotic-resistant bacterial strains continue to emerge and spread, there is a constant need to develop new antibacterial treatments. Current strategies include traditional chemistry-based approaches such as natural product-based drug discovery, newer chemistry-based approaches such as drug design, traditional biology-based approaches such as immunoglobulin therapy, and experimental biology-based approaches such as phage therapy, fecal microbiota transplants, antisense RNA-based treatments, and CRISPR-Cas9-based treatments.
Natural product-based antibiotic discovery
Most of the antibiotics in current use are natural products or natural product derivatives, and bacterial, fungal, plant and animal extracts are being screened in the search for new antibiotics. Organisms may be selected for testing based on ecological, ethnomedical, genomic, or historical rationales. Medicinal plants, for example, are screened on the basis that they are used by traditional healers to prevent or cure infection and may therefore contain antibacterial compounds. Also, soil bacteria are screened on the basis that, historically, they have been a very rich source of antibiotics (with 70 to 80% of antibiotics in current use derived from the actinomycetes).
In addition to screening natural products for direct antibacterial activity, they are sometimes screened for the ability to suppress antibiotic resistance and antibiotic tolerance. For example, some secondary metabolites inhibit drug efflux pumps, thereby increasing the concentration of antibiotic able to reach its cellular target and decreasing bacterial resistance to the antibiotic. Natural products known to inhibit bacterial efflux pumps include the alkaloid lysergol, the carotenoids capsanthin and capsorubin, and the flavonoids rotenone and chrysin. Other natural products, this time primary metabolites rather than secondary metabolites, have been shown to eradicate antibiotic tolerance. For example, glucose, mannitol, and fructose reduce antibiotic tolerance in Escherichia coli and Staphylococcus aureus, rendering them more susceptible to killing by aminoglycoside antibiotics.
Natural products may be screened for the ability to suppress bacterial virulence factors too. Virulence factors are molecules, cellular structures and regulatory systems that enable bacteria to evade the body's immune defenses (e.g. urease, staphyloxanthin), move towards, attach to, and/or invade human cells (e.g. type IV pili, adhesins, internalins), coordinate the activation of virulence genes (e.g. quorum sensing), and cause disease (e.g. exotoxins). Examples of natural products with antivirulence activity include the flavonoid epigallocatechin gallate (which inhibits listeriolysin O), the quinone tetrangomycin (which inhibits staphyloxanthin), and the sesquiterpene zerumbone (which inhibits Acinetobacter baumannii motility).
Immunoglobulin therapy
Antibodies (anti-tetanus immunoglobulin) have been used in the treatment and prevention of tetanus since the 1910s, and this approach continues to be a useful way of controlling bacterial diseases. The monoclonal antibody bezlotoxumab, for example, has been approved by the US FDA and EMA for recurrent Clostridioides difficile infection, and other monoclonal antibodies are in development (e.g. AR-301 for the adjunctive treatment of S. aureus ventilator-associated pneumonia). Antibody treatments act by binding to and neutralizing bacterial exotoxins and other virulence factors.
Phage therapy
Phage therapy is under investigation as a method of treating antibiotic-resistant strains of bacteria. Phage therapy involves infecting bacterial pathogens with viruses. Bacteriophages have extremely narrow host ranges and are highly specific for certain bacteria; thus, unlike antibiotics, they do not disturb the host organism's intestinal microbiota. Bacteriophages, also known as phages, infect and kill bacteria primarily during lytic cycles. Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell lyses, releasing new phage that are able to infect and destroy further bacteria of the same strain. The high specificity of phages protects "good" bacteria from destruction.
Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails.
There are considerable regulatory hurdles that must be cleared for such therapies. Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics remains an attractive option.
Fecal microbiota transplants
Fecal microbiota transplants involve transferring the full intestinal microbiota from a healthy human donor (in the form of stool) to patients with C. difficile infection. Although this procedure has not been officially approved by the US FDA, its use is permitted under some conditions in patients with antibiotic-resistant C. difficile infection. Cure rates are around 90%, and work is underway to develop stool banks, standardized products, and methods of oral delivery. Fecal microbiota transplantation has also been used more recently for inflammatory bowel diseases.
Antisense RNA-based treatments
Antisense RNA-based treatment (also known as gene silencing therapy) involves (a) identifying bacterial genes that encode essential proteins (e.g. the Pseudomonas aeruginosa genes acpP, lpxC, and rpsJ), (b) synthesizing single-stranded RNA that is complementary to the mRNA encoding these essential proteins, and (c) delivering the single-stranded RNA to the infection site using cell-penetrating peptides or liposomes. The antisense RNA then hybridizes with the bacterial mRNA and blocks its translation into the essential protein. Antisense RNA-based treatment has been shown to be effective in in vivo models of P. aeruginosa pneumonia.
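A minimal sketch of step (b), assuming a purely hypothetical target sequence (not taken from any of the genes named above): the antisense strand is simply the reverse complement of the target mRNA, which a few lines of Python can illustrate.

```python
# Minimal illustration of step (b): deriving an antisense RNA as the
# reverse complement of a target mRNA fragment. The sequence below is
# hypothetical and is not drawn from acpP, lpxC, rpsJ, or mecA.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense RNA (reverse complement, written 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna.upper()))

if __name__ == "__main__":
    target = "AUGGCUUUCGGA"      # hypothetical mRNA fragment, 5'->3'
    print(antisense(target))     # prints UCCGAAAGCCAU
```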
In addition to silencing essential bacterial genes, antisense RNA can be used to silence bacterial genes responsible for antibiotic resistance. For example, antisense RNA has been developed that silences the S. aureus mecA gene (the gene that encodes modified penicillin-binding protein 2a and renders S. aureus strains methicillin-resistant). Antisense RNA targeting mecA mRNA has been shown to restore the susceptibility of methicillin-resistant staphylococci to oxacillin in both in vitro and in vivo studies.
CRISPR-Cas9-based treatments
In the early 2000s, a system was discovered that enables bacteria to defend themselves against invading viruses. The system, known as CRISPR-Cas9, consists of (a) an enzyme that destroys DNA (the nuclease Cas9) and (b) the DNA sequences of previously encountered viral invaders (CRISPR). These viral DNA sequences enable the nuclease to target foreign (viral) rather than self (bacterial) DNA.
Although the function of CRISPR-Cas9 in nature is to protect bacteria, the DNA sequences in the CRISPR component of the system can be modified so that the Cas9 nuclease targets bacterial resistance genes or bacterial virulence genes instead of viral genes. The modified CRISPR-Cas9 system can then be administered to bacterial pathogens using plasmids or bacteriophages. This approach has successfully been used to silence antibiotic resistance and reduce the virulence of enterohemorrhagic E. coli in an in vivo model of infection.
Reducing the selection pressure for antibiotic resistance
In addition to developing new antibacterial treatments, it is important to reduce the selection pressure for the emergence and spread of antimicrobial resistance (AMR), such as antibiotic resistance. Strategies to accomplish this include well-established infection control measures such as infrastructure improvement (e.g. less crowded housing), better sanitation (e.g. safe drinking water and food), better use of vaccines and vaccine development, other approaches such as antibiotic stewardship, and experimental approaches such as the use of prebiotics and probiotics to prevent infection. Antibiotic cycling, in which clinicians alternate antibiotics to treat microbial diseases, has been proposed, but recent studies have found such strategies to be ineffective against antibiotic resistance.
Vaccines
Vaccines are an essential part of the response to reduce AMR as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases.
See also
References
Further reading
External links
Anti-infective agents
. | Antibiotic | [
"Chemistry",
"Biology"
] | 9,794 | [
"Biotechnology products",
"Anti-infective agents",
"Antibiotics",
"Bactericides",
"Chemicals in medicine",
"Biocides"
] |
1,839 | https://en.wikipedia.org/wiki/Allotropy | Allotropy or allotropism () is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of the elements. Allotropes are different structural modifications of an element: the atoms of the element are bonded together in different manners.
For example, the allotropes of carbon include diamond (the carbon atoms are bonded together to form a cubic lattice of tetrahedra), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations).
The term allotropy is used for elements only, not for compounds. The more general term, used for any compound, is polymorphism, although its use is usually restricted to solid materials such as crystals. Allotropy refers only to different forms of an element within the same physical phase (the state of matter, such as a solid, liquid or gas). The differences between these states of matter would not alone constitute examples of allotropy. Allotropes of chemical elements are frequently referred to as polymorphs or as phases of the element.
For some elements, allotropes have different molecular formulae or different crystalline structures, as well as a difference in physical phase; for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Other elements do not maintain distinct allotropes in different physical phases; for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state.
History
The concept of allotropy was originally proposed in 1840 by the Swedish scientist Baron Jöns Jakob Berzelius (1779–1848). The term is derived from the Greek allos ("other") and tropos ("manner, turn"). After the acceptance of Avogadro's hypothesis in 1860, it was understood that elements could exist as polyatomic molecules, and two allotropes of oxygen were recognized as O2 and O3. In the early 20th century, it was recognized that other cases such as carbon were due to differences in crystal structure.
By 1912, Ostwald noted that the allotropy of elements is just a special case of the phenomenon of polymorphism known for compounds, and proposed that the terms allotrope and allotropy be abandoned and replaced by polymorph and polymorphism. Although many other chemists have repeated this advice, IUPAC and most chemistry texts still favour the usage of allotrope and allotropy for elements only.
Differences in properties of an element's allotropes
Allotropes are different structural forms of the same element and can exhibit quite different physical properties and chemical behaviours. The change between allotropic forms is triggered by the same forces that affect other structures, i.e., pressure, light, and temperature. Therefore, the stability of the particular allotropes depends on particular conditions. For instance, iron changes from a body-centered cubic structure (ferrite) to a face-centered cubic structure (austenite) above 906 °C, and tin undergoes a modification known as tin pest from a metallic form to a semimetallic form below 13.2 °C (55.8 °F). As an example of allotropes having different chemical behaviour, ozone (O3) is a much stronger oxidizing agent than dioxygen (O2).
List of allotropes
Typically, elements capable of variable coordination number and/or oxidation states tend to exhibit greater numbers of allotropic forms. Another contributing factor is the ability of an element to catenate.
Examples of allotropes include:
Non-metals
Metalloids
Metals
Among the metallic elements that occur in nature in significant quantities (56 up to U, without Tc and Pm), almost half (27) are allotropic at ambient pressure: Li, Be, Na, Ca, Ti, Mn, Fe, Co, Sr, Y, Zr, Sn, La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Yb, Hf, Tl, Th, Pa and U. Some phase transitions between allotropic forms of technologically relevant metals are those of Ti at 882 °C, Fe at 912 °C and 1,394 °C, Co at 422 °C, Zr at 863 °C, Sn at 13 °C and U at 668 °C and 776 °C.
Lanthanides and actinides
Cerium, samarium, dysprosium and ytterbium have three allotropes.
Praseodymium, neodymium, gadolinium and terbium have two allotropes.
Plutonium has six distinct solid allotropes under "normal" pressures. Their densities vary within a ratio of some 4:3, which vastly complicates all kinds of work with the metal (particularly casting, machining, and storage). A seventh plutonium allotrope exists at very high pressures. The transuranium metals Np, Am, and Cm are also allotropic.
Promethium, americium, berkelium and californium have three allotropes each.
Nanoallotropes
In 2017, the concept of nanoallotropy was proposed. Nanoallotropes, or allotropes of nanomaterials, are nanoporous materials that have the same chemical composition (e.g., Au), but differ in their architecture at the nanoscale (that is, on a scale 10 to 100 times the dimensions of individual atoms). Such nanoallotropes may help create ultra-small electronic devices and find other industrial applications. The different nanoscale architectures translate into different properties, as was demonstrated for surface-enhanced Raman scattering performed on several different nanoallotropes of gold. A two-step method for generating nanoallotropes was also created.
See also
Isomer
Polymorphism (materials science)
Notes
References
External links
Allotropes – Chemistry Encyclopedia
Chemistry
Inorganic chemistry
Physical chemistry | Allotropy | [
"Physics",
"Chemistry"
] | 1,345 | [
"Periodic table",
"Applied and interdisciplinary physics",
"Properties of chemical elements",
"Allotropes",
"Materials",
"nan",
"Physical chemistry",
"Matter"
] |
1,842 | https://en.wikipedia.org/wiki/Augustin-Louis%20Cauchy | Baron Augustin-Louis Cauchy ( , , ; ; 21 August 1789 – 23 May 1857) was a French mathematician, engineer, and physicist. He was one of the first to rigorously state and prove the key theorems of calculus (thereby creating real analysis), pioneered the field complex analysis, and the study of permutation groups in abstract algebra. Cauchy also contributed to a number of topics in mathematical physics, notably continuum mechanics.
A profound mathematician, Cauchy had a great influence over his contemporaries and successors; Hans Freudenthal stated:
"More concepts and theorems have been named for Cauchy than for any other mathematician (in elasticity alone there are sixteen concepts and theorems named for Cauchy)."
Cauchy was a prolific worker; he wrote approximately eight hundred research articles and five complete textbooks on a variety of topics in the fields of mathematics and mathematical physics.
Biography
Youth and education
Cauchy was the son of Louis François Cauchy (1760–1848) and Marie-Madeleine Desestre. Cauchy had two brothers: Alexandre Laurent Cauchy (1792–1857), who became a president of a division of the court of appeal in 1847 and a judge of the court of cassation in 1849, and Eugene François Cauchy (1802–1877), a publicist who also wrote several mathematical works. From childhood, Cauchy showed a talent for mathematics.
Cauchy married Aloise de Bure in 1818. She was a close relative of the publisher who published most of Cauchy's works. They had two daughters, Marie Françoise Alicia (1819) and Marie Mathilde (1823).
Cauchy's father was a highly ranked official in the Parisian police of the Ancien Régime, but lost this position due to the French Revolution (14 July 1789), which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror during 1793–94 by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre in 1794, it was safe for the family to return to Paris. There, Louis-François Cauchy found a bureaucratic job in 1800, and quickly advanced his career. When Napoleon came to power in 1799, Louis-François Cauchy was further promoted, and became Secretary-General of the Senate, working directly under Laplace (who is now better known for his work on mathematical physics). The mathematician Lagrange was also a friend of the Cauchy family.
On Lagrange's advice, Augustin-Louis was enrolled in the École Centrale du Panthéon, the best secondary school of Paris at that time, in the fall of 1802. Most of the curriculum consisted of classical languages; the ambitious Cauchy, being a brilliant student, won many prizes in Latin and the humanities. In spite of these successes, Cauchy chose an engineering career, and prepared himself for the entrance examination to the École Polytechnique.
In 1805, he placed second of 293 applicants on this exam and was admitted. One of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which caused Cauchy some problems in adapting. Nevertheless, he completed the course in 1807, at age 18, and went on to the École des Ponts et Chaussées (School for Bridges and Roads). He graduated in civil engineering, with the highest honors.
Engineering days
After finishing school in 1810, Cauchy accepted a job as a junior engineer in Cherbourg, where Napoleon intended to build a naval base. Here Cauchy stayed for three years, and was assigned the Ourcq Canal project and the Saint-Cloud Bridge project, and worked at the Harbor of Cherbourg. Although he had an extremely busy managerial job, he still found time to prepare three mathematical manuscripts, which he submitted to the Première Classe (First Class) of the Institut de France. Cauchy's first two manuscripts (on polyhedra) were accepted; the third one (on directrices of conic sections) was rejected.
In September 1812, at 23 years old, Cauchy returned to Paris after becoming ill from overwork. Another reason for his return to the capital was that he was losing interest in his engineering job, being more and more attracted to the abstract beauty of mathematics; in Paris, he would have a much better chance to find a mathematics related position. When his health improved in 1813, Cauchy chose not to return to Cherbourg. Although he formally kept his engineering position, he was transferred from the payroll of the Ministry of the Marine to the Ministry of the Interior. The next three years Cauchy was mainly on unpaid sick leave; he spent his time fruitfully, working on mathematics (on the related topics of symmetric functions, the symmetric group and the theory of higher-order algebraic equations). He attempted admission to the First Class of the Institut de France but failed on three different occasions between 1813 and 1815. In 1815 Napoleon was defeated at Waterloo, and the newly installed king Louis XVIII took the restoration in hand. The Académie des Sciences was re-established in March 1816; Lazare Carnot and Gaspard Monge were removed from this academy for political reasons, and the king appointed Cauchy to take the place of one of them. The reaction of Cauchy's peers was harsh; they considered the acceptance of his membership in the academy an outrage, and Cauchy created many enemies in scientific circles.
Professor at École Polytechnique
In November 1815, Louis Poinsot, who was an associate professor at the École Polytechnique, asked to be exempted from his teaching duties for health reasons. Cauchy was by then a rising mathematical star. One of his great successes at that time was the proof of Fermat's polygonal number theorem. He quit his engineering job, and received a one-year contract for teaching mathematics to second-year students of the École Polytechnique. In 1816, this Bonapartist, non-religious school was reorganized, and several liberal professors were fired; Cauchy was promoted to full professor.
When Cauchy was 28 years old, he was still living with his parents. His father found it time for his son to marry; he found him a suitable bride, Aloïse de Bure, five years his junior. The de Bure family were printers and booksellers, and published most of Cauchy's works. Aloïse and Augustin were married on April 4, 1818, with great Roman Catholic ceremony, in the Church of Saint-Sulpice. In 1819 the couple's first daughter, Marie Françoise Alicia, was born, and in 1823 the second and last daughter, Marie Mathilde.
The conservative political climate that lasted until 1830 suited Cauchy perfectly. In 1824 Louis XVIII died, and was succeeded by his even more conservative brother Charles X. During these years Cauchy was highly productive, and published one important mathematical treatise after another. He received cross-appointments at the Collège de France and the Faculté des sciences de Paris.
In exile
In July 1830, the July Revolution occurred in France. Charles X fled the country, and was succeeded by Louis-Philippe. Riots, in which uniformed students of the École Polytechnique took an active part, raged close to Cauchy's home in Paris.
These events marked a turning point in Cauchy's life, and a break in his mathematical productivity. Shaken by the fall of the government and moved by a deep hatred of the liberals who were taking power, Cauchy left France to go abroad, leaving his family behind. He spent a short time at Fribourg in Switzerland, where he had to decide whether he would swear a required oath of allegiance to the new regime. He refused to do this, and consequently lost all his positions in Paris, except his membership of the academy, for which an oath was not required. In 1831 Cauchy went to the Italian city of Turin, and after some time there, he accepted an offer from the King of Sardinia (who ruled Turin and the surrounding Piedmont region) for a chair of theoretical physics, which was created especially for him. He taught in Turin during 1832–1833. In 1831, he was elected a foreign member of the Royal Swedish Academy of Sciences, and the following year a Foreign Honorary Member of the American Academy of Arts and Sciences.
In August 1833 Cauchy left Turin for Prague to become the science tutor of the thirteen-year-old Duke of Bordeaux, Henri d'Artois (1820–1883), the exiled Crown Prince and grandson of Charles X. As a professor of the École Polytechnique, Cauchy had been a notoriously bad lecturer, assuming levels of understanding that only a few of his best students could reach, and cramming his allotted time with too much material. Henri d'Artois had neither taste nor talent for either mathematics or science. Although Cauchy took his mission very seriously, he did this with great clumsiness, and with surprising lack of authority over Henri d'Artois. During his civil engineering days, Cauchy once had been briefly in charge of repairing a few of the Parisian sewers, and he made the mistake of mentioning this to his pupil; with great malice, Henri d'Artois went about saying Cauchy started his career in the sewers of Paris. Cauchy's role as tutor lasted until Henri d'Artois became eighteen years old, in September 1838. Cauchy did hardly any research during those five years, while Henri d'Artois acquired a lifelong dislike of mathematics. Cauchy was named a baron, a title by which Cauchy set great store.
In 1834, his wife and two daughters moved to Prague, and Cauchy was reunited with his family after four years in exile.
Last years
Cauchy returned to Paris and his position at the Academy of Sciences late in 1838. He could not regain his teaching positions, because he still refused to swear an oath of allegiance.
In August 1839 a vacancy appeared in the Bureau des Longitudes. This Bureau bore some resemblance to the academy; for instance, it had the right to co-opt its members. Further, it was believed that members of the Bureau could "forget about" the oath of allegiance, although formally, unlike the Academicians, they were obliged to take it. The Bureau des Longitudes was an organization founded in 1795 to solve the problem of determining position at sea — mainly the longitudinal coordinate, since latitude is easily determined from the position of the sun. Since it was thought that position at sea was best determined by astronomical observations, the Bureau had developed into an organization resembling an academy of astronomical sciences.
In November 1839 Cauchy was elected to the Bureau, and discovered that the matter of the oath was not so easily dispensed with. Without his oath, the king refused to approve his election. For four years Cauchy was in the position of being elected but not approved; accordingly, he was not a formal member of the Bureau, did not receive payment, could not participate in meetings, and could not submit papers. Still Cauchy refused to take any oaths; however, he did feel loyal enough to direct his research to celestial mechanics. In 1840, he presented a dozen papers on this topic to the academy. He described and illustrated the signed-digit representation of numbers, an innovation presented in England in 1727 by John Colson. The confounded membership of the Bureau lasted until the end of 1843, when Cauchy was replaced by Poinsot.
Throughout the nineteenth century the French educational system struggled over the separation of church and state. After losing control of the public education system, the Catholic Church sought to establish its own branch of education and found in Cauchy a staunch and illustrious ally. He lent his prestige and knowledge to the École Normale Écclésiastique, a school in Paris run by Jesuits, for training teachers for their colleges. He took part in the founding of the Institut Catholique. The purpose of this institute was to counter the effects of the absence of Catholic university education in France. These activities did not make Cauchy popular with his colleagues, who, on the whole, supported the Enlightenment ideals of the French Revolution. When a chair of mathematics became vacant at the Collège de France in 1843, Cauchy applied for it, but received just three of 45 votes.
In 1848 King Louis-Philippe fled to England. The oath of allegiance was abolished, and the road to an academic appointment was clear for Cauchy. On March 1, 1849, he was reinstated at the Faculté de Sciences, as a professor of mathematical astronomy. After political turmoil all through 1848, France chose to become a Republic, under the Presidency of Louis-Napoléon Bonaparte. In early 1852, the President made himself Emperor of France and took the name Napoleon III.
The idea came up in bureaucratic circles that it would be useful to again require a loyalty oath from all state functionaries, including university professors. This time a cabinet minister was able to convince the Emperor to exempt Cauchy from the oath. In 1853, Cauchy was elected an International Member of the American Philosophical Society. Cauchy remained a professor at the university until his death at the age of 67. He received the Last Rites and died of a bronchial condition at 4 a.m. on 23 May 1857.
His name is one of the 72 names inscribed on the Eiffel Tower.
Work
Early work
The genius of Cauchy was illustrated in his simple solution of the problem of Apollonius—describing a circle touching three given circles—which he discovered in 1805, his generalization of Euler's formula on polyhedra in 1811, and in several other elegant problems. More important is his memoir on wave propagation, which obtained the Grand Prix of the French Academy of Sciences in 1816. Cauchy's writings covered notable topics. In the theory of series he developed the notion of convergence and discovered many of the basic formulas for q-series. In the theory of numbers and complex quantities, he was the first to define complex numbers as pairs of real numbers. He also wrote on the theory of groups and substitutions, the theory of functions, differential equations and determinants.
Wave theory, mechanics, elasticity
In the theory of light he worked on Fresnel's wave theory and on the dispersion and polarization of light. He also contributed research in mechanics, substituting the notion of the continuity of geometrical displacements for the principle of the continuity of matter. He wrote on the equilibrium of rods and elastic membranes and on waves in elastic media. He introduced a 3 × 3 symmetric matrix of numbers that is now known as the Cauchy stress tensor. In elasticity, he originated the theory of stress, and his results are nearly as valuable as those of Siméon Poisson.
Number theory
Other significant contributions include being the first to prove the Fermat polygonal number theorem.
Complex functions
Cauchy is most famous for his single-handed development of complex function theory. The first pivotal theorem proved by Cauchy, now known as Cauchy's integral theorem, was the following:
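$$\oint_C f(z)\, \mathrm{d}z = 0,$$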
where f(z) is a complex-valued function holomorphic on and within the non-self-intersecting closed curve C (contour) lying in the complex plane. The contour integral is taken along the contour C. The rudiments of this theorem can already be found in a paper that the 24-year-old Cauchy presented to the Académie des Sciences (then still called "First Class of the Institute") on August 11, 1814. In full form the theorem was given in 1825.
In 1826 Cauchy gave a formal definition of a residue of a function. This concept concerns functions that have poles—isolated singularities, i.e., points where a function goes to positive or negative infinity. If the complex-valued function f(z) can be expanded in the neighborhood of a singularity a as
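$$f(z) = \phi(z) + \frac{B_1}{z-a} + \frac{B_2}{(z-a)^2} + \cdots + \frac{B_n}{(z-a)^n},$$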
where φ(z) is analytic (i.e., well-behaved without singularities), then f is said to have a pole of order n in the point a. If n = 1, the pole is called simple.
The coefficient B1 is called by Cauchy the residue of function f at a. If f is non-singular at a then the residue of f is zero at a. In the case of a simple pole, the residue is clearly equal to
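$$\underset{z=a}{\operatorname{Res}} f(z) = \lim_{z \to a} (z-a)\, f(z),$$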
where we replaced B1 by the modern notation of the residue.
In 1831, while in Turin, Cauchy submitted two papers to the Academy of Sciences of Turin. In the first he proposed the formula now known as Cauchy's integral formula,
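$$f(a) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z-a}\, \mathrm{d}z,$$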
where f(z) is analytic on C and within the region bounded by the contour C and the complex number a is somewhere in this region. The contour integral is taken counter-clockwise. Clearly, the integrand has a simple pole at z = a. In the second paper he presented the residue theorem,
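$$\frac{1}{2\pi i} \oint_C f(z)\, \mathrm{d}z = \sum_{k=1}^{n} \underset{z=a_k}{\operatorname{Res}} f(z),$$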
where the sum is over all the n poles of f(z) on and within the contour C. These results of Cauchy's still form the core of complex function theory as it is taught today to physicists and electrical engineers. For quite some time, contemporaries of Cauchy ignored his theory, believing it to be too complicated. Only in the 1840s did the theory begin to attract attention, with Pierre Alphonse Laurent being the first mathematician besides Cauchy to make a substantial contribution (his work on what are now known as Laurent series, published in 1843).
Cours d'Analyse
In his book Cours d'Analyse Cauchy stressed the importance of rigor in analysis. Rigor in this case meant the rejection of the principle of the generality of algebra (of earlier authors such as Euler and Lagrange) and its replacement by geometry and infinitesimals. Judith Grabiner wrote that Cauchy was "the man who taught rigorous analysis to all of Europe". The book is frequently noted as being the first place that inequalities and ε–δ arguments were introduced into calculus. Here Cauchy defined continuity as follows: The function f(x) is continuous with respect to x between the given limits if, between these limits, an infinitely small increment in the variable always produces an infinitely small increment in the function itself.
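In modern ε–δ language, this definition is usually rendered roughly as: f is continuous at x if for every ε > 0 there exists δ > 0 such that
$$|h| < \delta \implies |f(x+h) - f(x)| < \varepsilon.$$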
M. Barany claims that the École mandated the inclusion of infinitesimal methods against Cauchy's better judgement. Gilain notes that when the portion of the curriculum devoted to Analyse Algébrique was reduced in 1825, Cauchy insisted on placing the topic of continuous functions (and therefore also infinitesimals) at the beginning of the Differential Calculus. Laugwitz (1989) and Benis-Sinaceur (1973) point out that Cauchy continued to use infinitesimals in his own research as late as 1853.
Cauchy gave an explicit definition of an infinitesimal in terms of a sequence tending to zero. There has been a vast body of literature written about Cauchy's notion of "infinitesimally small quantities", arguing that it leads to everything from the usual "epsilontic" definitions to the notions of non-standard analysis. The consensus is that Cauchy omitted or left implicit the important ideas needed to make clear the precise meaning of the infinitely small quantities he used.
Taylor's theorem
He was the first to prove Taylor's theorem rigorously, establishing his well-known form of the remainder. He wrote a textbook for his students at the École Polytechnique in which he developed the basic theorems of mathematical analysis as rigorously as possible. In this book he gave the necessary and sufficient condition for the existence of a limit in the form that is still taught. Also Cauchy's well-known test for absolute convergence stems from this book: the Cauchy condensation test. In 1829 he defined for the first time a complex function of a complex variable in another textbook. In spite of this, Cauchy's own research papers often used intuitive, not rigorous, methods; thus one of his theorems was exposed to a "counter-example" by Abel, later fixed by the introduction of the notion of uniform continuity.
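In modern notation, the Cauchy form of the remainder for the Taylor expansion of f about a is
$$R_n(x) = \frac{f^{(n+1)}(\xi)}{n!}\,(x-\xi)^n\,(x-a)$$
for some ξ between a and x.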
Argument principle, stability
In a paper published in 1855, two years before Cauchy's death, he discussed some theorems, one of which is similar to the "Principle of the argument" in many modern textbooks on complex analysis. In modern control theory textbooks, the Cauchy argument principle is quite frequently used to derive the Nyquist stability criterion, which can be used to predict the stability of negative feedback amplifier and negative feedback control systems. Thus Cauchy's work has a strong impact on both pure mathematics and practical engineering.
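In its modern form, the argument principle states that if f is meromorphic inside and on a closed contour C, with no zeros or poles on C, then
$$\frac{1}{2\pi i} \oint_C \frac{f'(z)}{f(z)}\, \mathrm{d}z = Z - P,$$
where Z and P are the numbers of zeros and poles of f inside C, counted with multiplicity.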
Published works
Cauchy was very productive, in number of papers second only to Leonhard Euler. It took almost a century to collect all his writings into 27 large volumes:
(Paris : Gauthier-Villars et fils, 1882–1974)
His greatest contributions to mathematical science are enveloped in the rigorous methods which he introduced; these are mainly embodied in his three great treatises:
Cours d'analyse de l'École royale polytechnique (1821)
Le Calcul infinitésimal (1823)
Leçons sur les applications de calcul infinitésimal; La géométrie (1826–1828)
His other works include:
Exercices d'analyse et de physique mathematique (Volume 1)
Exercices d'analyse et de physique mathematique (Volume 2)
Exercices d'analyse et de physique mathematique (Volume 3)
Exercices d'analyse et de physique mathematique (Volume 4) (Paris: Bachelier, 1840–1847)
Analyse algèbrique (Imprimerie Royale, 1821)
Nouveaux exercices de mathématiques (Paris : Gauthier-Villars, 1895)
Courses of mechanics (for the École Polytechnique)
Higher algebra (for the Faculté des sciences)
Mathematical physics (for the Collège de France).
Mémoire sur l'emploi des équations symboliques dans le calcul infinitésimal et dans le calcul aux différences finies, C.R. Acad. Sci. Paris, t. XVII, 449–458 (1843); credited as originating the operational calculus.
Politics and religious beliefs
Augustin-Louis Cauchy grew up in the house of a staunch royalist. This made his father flee with the family to Arcueil during the French Revolution. Their life there during that time was apparently hard; in an undated letter to his mother in Rouen, Augustin-Louis's father, Louis François, spoke of living on rice, bread, and crackers during the period.
In any event, he inherited his father's staunch royalism and hence refused to take oaths to any government after the overthrow of Charles X.
He was an equally staunch Catholic and a member of the Society of Saint Vincent de Paul. He also had links to the Society of Jesus and defended them at the academy when it was politically unwise to do so. His zeal for his faith may have led to his caring for Charles Hermite during his illness and leading Hermite to become a faithful Catholic. It also inspired Cauchy to plead on behalf of the Irish during the Great Famine of Ireland.
His royalism and religious zeal made him contentious, which caused difficulties with his colleagues. He felt that he was mistreated for his beliefs, but his opponents felt he intentionally provoked people by berating them over religious matters or by defending the Jesuits after they had been suppressed. Niels Henrik Abel called him a "bigoted Catholic" and added he was "mad and there is nothing that can be done about him", but at the same time praised him as a mathematician. Cauchy's views were widely unpopular among mathematicians, and when Guglielmo Libri Carucci dalla Sommaja was made chair in mathematics ahead of him, Cauchy and many others felt that his views were the cause. When Libri was accused of stealing books he was replaced by Joseph Liouville rather than Cauchy, which caused a rift between Liouville and Cauchy. Another dispute with political overtones concerned Jean-Marie Constant Duhamel and a claim on inelastic shocks. Cauchy was later shown, by Jean-Victor Poncelet, to be wrong.
See also
List of topics named after Augustin-Louis Cauchy
Cauchy–Binet formula
Cauchy boundary condition
Cauchy's convergence test
Cauchy (crater)
Cauchy determinant
Cauchy distribution
Cauchy's equation
Cauchy–Euler equation
Cauchy's functional equation
Cauchy horizon
Cauchy formula for repeated integration
Cauchy–Frobenius lemma
Cauchy–Hadamard theorem
Cauchy–Kovalevskaya theorem
Cauchy momentum equation
Cauchy–Peano theorem
Cauchy principal value
Cauchy problem
Cauchy product
Cauchy's radical test
Cauchy–Rassias stability
Cauchy–Riemann equations
Cauchy–Schwarz inequality
Cauchy sequence
Cauchy surface
Cauchy's theorem (geometry)
Cauchy's theorem (group theory)
Maclaurin–Cauchy test
References
Notes
Citations
Sources
Further reading
Boyer, C.: The concepts of the calculus. Hafner Publishing Company, 1949.
External links
Augustin-Louis Cauchy – Œuvres complètes (in 2 series) Gallica-Math
Augustin-Louis Cauchy – Cauchy's Life by Robin Hartshorne
1789 births
1857 deaths
19th-century French mathematicians
Corps des ponts
École des Ponts ParisTech alumni
École Polytechnique alumni
Fellows of the American Academy of Arts and Sciences
Foreign members of the Royal Society
French Roman Catholics
French geometers
History of calculus
French mathematical analysts
Linear algebraists
Members of the French Academy of Sciences
Members of the Royal Swedish Academy of Sciences
Recipients of the Pour le Mérite (civil class)
French textbook writers
Academic staff of the University of Turin
Members of the American Philosophical Society | Augustin-Louis Cauchy | [
"Mathematics"
] | 5,458 | [
"Mathematics of infinitesimals",
"History of calculus",
"Calculus"
] |
1,844 | https://en.wikipedia.org/wiki/Archimedes | Archimedes of Syracuse ( ; ) was an Ancient Greek mathematician, physicist, engineer, astronomer, and inventor from the ancient city of Syracuse in Sicily. Although few details of his life are known, he is considered one of the leading scientists in classical antiquity. Regarded as the greatest mathematician of ancient history, and one of the greatest of all time, Archimedes anticipated modern calculus and analysis by applying the concept of the infinitely small and the method of exhaustion to derive and rigorously prove a range of geometrical theorems. These include the area of a circle, the surface area and volume of a sphere, the area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.
Archimedes' other mathematical achievements include deriving an approximation of pi (π), defining and investigating the Archimedean spiral, and devising a system using exponentiation for expressing very large numbers. He was also one of the first to apply mathematics to physical phenomena, working on statics and hydrostatics. Archimedes' achievements in this area include a proof of the law of the lever, the widespread use of the concept of center of gravity, and the enunciation of the law of buoyancy known as Archimedes' principle. He is also credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion.
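In Measurement of a Circle, for example, Archimedes used inscribed and circumscribed 96-sided polygons to bound the ratio of a circle's circumference to its diameter: $3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}$.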
Archimedes died during the siege of Syracuse, when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting Archimedes' tomb, which was surmounted by a sphere and a cylinder that Archimedes requested be placed there to represent his most valued mathematical discovery.
Unlike his inventions, Archimedes' mathematical writings were little known in antiquity. Alexandrian mathematicians read and quoted him, but the first comprehensive compilation was not made until the sixth century AD, by Isidore of Miletus in Byzantine Constantinople, while Eutocius' commentaries on Archimedes' works in the same century opened them to wider readership for the first time. The relatively few copies of Archimedes' written work that survived through the Middle Ages were an influential source of ideas for scientists during the Renaissance and again in the 17th century, while the discovery in 1906 of previously lost works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results.
Biography
Early life
Archimedes was born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia. The date of birth is based on a statement by the Byzantine Greek scholar John Tzetzes that Archimedes lived for 75 years before his death in 212 BC. Plutarch wrote in his Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse, although Cicero suggests he was of humble origin. In the Sand-Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing else is known. A biography of Archimedes was written by his friend Heracleides, but this work has been lost, leaving the details of his life obscure. It is unknown, for instance, whether he ever married or had children, or if he ever visited Alexandria, Egypt, during his youth. From his surviving written works, it is clear that he maintained collegial relations with scholars based there, including his friend Conon of Samos and the head librarian Eratosthenes of Cyrene.
Career
The standard versions of Archimedes' life were written long after his death by Greek and Roman historians. The earliest reference to Archimedes occurs in the Histories by Polybius ( 200–118 BC), written about 70 years after his death. It sheds little light on Archimedes as a person, and focuses on the war machines that he is said to have built in order to defend the city from the Romans. Polybius remarks how, during the Second Punic War, Syracuse switched allegiances from Rome to Carthage, resulting in a military campaign under the command of Marcus Claudius Marcellus and Appius Claudius Pulcher, who besieged the city from 213 to 212 BC. He notes that the Romans underestimated Syracuse's defenses, and mentions several machines Archimedes designed, including improved catapults, crane-like machines that could be swung around in an arc, and other stone-throwers. Although the Romans ultimately captured the city, they suffered considerable losses due to Archimedes' inventiveness.
Cicero (106–43 BC) mentions Archimedes in some of his works. While serving as a quaestor in Sicily, Cicero found what was presumed to be Archimedes' tomb near the Agrigentine gate in Syracuse, in a neglected condition and overgrown with bushes. Cicero had the tomb cleaned up and was able to see the carving and read some of the verses that had been added as an inscription. The tomb carried a sculpture illustrating Archimedes' favorite mathematical proof, that the volume and surface area of the sphere are two-thirds that of an enclosing cylinder including its bases. He also mentions that Marcellus brought to Rome two planetariums Archimedes built. The Roman historian Livy (59 BC–17 AD) retells Polybius' story of the capture of Syracuse and Archimedes' role in it.
Death
Plutarch (45–119 AD) provides at least two accounts on how Archimedes died after Syracuse was taken. According to the most popular account, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet Marcellus, but he declined, saying that he had to finish working on the problem. This enraged the soldier, who killed Archimedes with his sword. Another story has Archimedes carrying mathematical instruments before being killed because a soldier thought they were valuable items. Marcellus was reportedly angered by Archimedes' death, as he considered him a valuable scientific asset (he called Archimedes "a geometrical Briareus") and had ordered that he should not be harmed.
The last words attributed to Archimedes are "Do not disturb my circles", a reference to the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. There is no reliable evidence that Archimedes uttered these words and they do not appear in Plutarch's account. A similar quotation is found in the work of Valerius Maximus (fl. 30 AD), who wrote in Memorable Doings and Sayings: "... but protecting the dust with his hands, said 'I beg of you, do not disturb this'".
Discoveries and inventions
Archimedes' principle
The most widely known anecdote about Archimedes tells of how he invented a method for determining the volume of an object with an irregular shape. According to Vitruvius, a crown for a temple had been made for King Hiero II of Syracuse, who supplied the pure gold to be used. The crown was likely made in the shape of a votive wreath. Archimedes was asked to determine whether some silver had been substituted by the goldsmith without damaging the crown, so he could not melt it down into a regularly shaped body in order to calculate its density.
In this account, Archimedes noticed while taking a bath that the level of the water in the tub rose as he got in, and realized that this effect could be used to determine the golden crown's volume. Archimedes was so excited by this discovery that he took to the streets naked, having forgotten to dress, crying "Eureka!" (heúrēka!, "I have found it!"). For practical purposes water is incompressible, so the submerged crown would displace an amount of water equal to its own volume. By dividing the mass of the crown by the volume of water displaced, its density could be obtained; if cheaper and less dense metals had been added, the density would be lower than that of gold. Archimedes found that this is what had happened, proving that silver had been mixed in.
The story of the golden crown does not appear anywhere in Archimedes' known works. The practicality of the method described has been called into question due to the extreme accuracy that would be required to measure water displacement. Archimedes may have instead sought a solution that applied the hydrostatics principle known as Archimedes' principle, found in his treatise On Floating Bodies: a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces. Using this principle, it would have been possible to compare the density of the crown to that of pure gold by balancing it on a scale with a pure gold reference sample of the same weight, then immersing the apparatus in water. The difference in density between the two samples would cause the scale to tip accordingly. Galileo Galilei, who invented a hydrostatic balance in 1586 inspired by Archimedes' work, considered it "probable that this method is the same that Archimedes followed, since, besides being very accurate, it is based on demonstrations found by Archimedes himself."
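The arithmetic in the anecdote is elementary. A minimal sketch in Python of the density comparison, with invented measurement values (the densities are rounded modern reference figures, not data from any ancient source):

RHO_GOLD = 19.3    # g/cm^3, rounded modern reference value
RHO_SILVER = 10.5  # g/cm^3, rounded modern reference value

def is_adulterated(mass_g, displaced_water_cm3, tolerance=0.05):
    # Vitruvius' version: density = mass / volume of displaced water;
    # the crown is suspect if its density falls measurably below pure gold's.
    rho = mass_g / displaced_water_cm3
    return rho < RHO_GOLD * (1 - tolerance), rho

# Hypothetical crown of 1000 g made from a 70/30 gold-silver mix by mass.
mix_volume = 700 / RHO_GOLD + 300 / RHO_SILVER   # cm^3 it would displace
suspect, rho = is_adulterated(1000.0, mix_volume)
print(f"measured density {rho:.1f} g/cm^3 -> adulterated: {suspect}")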
Law of the lever
While Archimedes did not invent the lever, he gave a mathematical proof of the principle involved in his work On the Equilibrium of Planes. Earlier descriptions of the principle of the lever are found in a work by Euclid and in the Mechanical Problems, belonging to the Peripatetic school of the followers of Aristotle, the authorship of which has been attributed by some to Archytas.
There are several, often conflicting, reports regarding Archimedes' feats using the lever to lift very heavy objects. Plutarch describes how Archimedes designed block-and-tackle pulley systems, allowing sailors to use the principle of leverage to lift objects that would otherwise have been too heavy to move. According to Pappus of Alexandria, Archimedes' work on levers and his understanding of mechanical advantage caused him to remark: "Give me a place to stand on, and I will move the Earth". Olympiodorus later attributed the same boast to Archimedes' invention of the baroulkos, a kind of windlass, rather than the lever.
Archimedes' screw
A large part of Archimedes' work in engineering probably arose from fulfilling the needs of his home city of Syracuse. Athenaeus of Naucratis quotes a certain Moschion in a description on how King Hiero II commissioned the design of a huge ship, the Syracusia, which could be used for luxury travel, carrying supplies, and as a display of naval power. The Syracusia is said to have been the largest ship built in classical antiquity and, according to Moschion's account, it was launched by Archimedes. The ship presumably was capable of carrying 600 people and included garden decorations, a gymnasium, and a temple dedicated to the goddess Aphrodite among its facilities. The account also mentions that, in order to remove any potential water leaking through the hull, a device with a revolving screw-shaped blade inside a cylinder was designed by Archimedes.
Archimedes' screw was turned by hand, and could also be used to transfer water from a body of water into irrigation canals. The screw is still in use today for pumping liquids and granulated solids such as coal and grain. Described by Vitruvius, Archimedes' device may have been an improvement on a screw pump that was used to irrigate the Hanging Gardens of Babylon. The world's first seagoing steamship with a screw propeller was the SS Archimedes, which was launched in 1839 and named in honor of Archimedes and his work on the screw.
Archimedes' claw
Archimedes is said to have designed a claw as a weapon to defend the city of Syracuse. Also known as "the ship shaker", the claw consisted of a crane-like arm from which a large metal grappling hook was suspended. When the claw was dropped onto an attacking ship the arm would swing upwards, lifting the ship out of the water and possibly sinking it. There have been modern experiments to test the feasibility of the claw, and in 2005 a television documentary entitled Superweapons of the Ancient World built a version of the claw and concluded that it was a workable device.
Archimedes has also been credited with improving the power and accuracy of the catapult, and with inventing the odometer during the First Punic War. The odometer was described as a cart with a gear mechanism that dropped a ball into a container after each mile traveled.
Heat ray
As legend has it, Archimedes arranged mirrors as a parabolic reflector to burn ships attacking Syracuse using focused sunlight. While there is no extant contemporary evidence of this feat and modern scholars believe it did not happen, Archimedes may have written a work on mirrors entitled Catoptrica, and Lucian and Galen, writing in the second century AD, mentioned that during the siege of Syracuse Archimedes had burned enemy ships. Nearly four hundred years later, Anthemius, despite skepticism, tried to reconstruct Archimedes' hypothetical reflector geometry.
The purported device, sometimes called "Archimedes' heat ray", has been the subject of an ongoing debate about its credibility since the Renaissance. René Descartes rejected it as false, while modern researchers have attempted to recreate the effect using only the means that would have been available to Archimedes, mostly with negative results. It has been suggested that a large array of highly polished bronze or copper shields acting as mirrors could have been employed to focus sunlight onto a ship, but the overall effect would have been to blind, dazzle, or distract the crew of the ship rather than to set it on fire. Using modern materials and larger scale, sunlight-concentrating solar furnaces can reach very high temperatures, and are sometimes used for generating electricity.
Astronomical instruments
Archimedes discusses astronomical measurements of the Earth, Sun, and Moon, as well as Aristarchus' heliocentric model of the universe, in the Sand-Reckoner. Without the use of either trigonometry or a table of chords, Archimedes determines the Sun's apparent diameter by first describing the procedure and instrument used to make observations (a straight rod with pegs or grooves), applying correction factors to these measurements, and finally giving the result in the form of upper and lower bounds to account for observational error. Ptolemy, quoting Hipparchus, also references Archimedes' solstice observations in the Almagest. This would make Archimedes the first known Greek to have recorded multiple solstice dates and times in successive years.
Cicero's De re publica portrays a fictional conversation taking place in 129 BC. After the capture of Syracuse in the Second Punic War, Marcellus is said to have taken back to Rome two mechanisms which were constructed by Archimedes and which showed the motion of the Sun, Moon and five planets. Cicero also mentions similar mechanisms designed by Thales of Miletus and Eudoxus of Cnidus. The dialogue says that Marcellus kept one of the devices as his only personal loot from Syracuse, and donated the other to the Temple of Virtue in Rome. Marcellus's mechanism was demonstrated, according to Cicero, by Gaius Sulpicius Gallus to Lucius Furius Philus, who described it thus:
This is a description of a small planetarium. Pappus of Alexandria reports on a now lost treatise by Archimedes dealing with the construction of these mechanisms entitled On Sphere-Making. Modern research in this area has been focused on the Antikythera mechanism, another device built around the late 2nd or early 1st century BC and probably designed with a similar purpose. Constructing mechanisms of this kind would have required a sophisticated knowledge of differential gearing. This was once thought to have been beyond the range of the technology available in ancient times, but the discovery of the Antikythera mechanism in 1902 has confirmed that devices of this kind were known to the ancient Greeks.
Mathematics
While he is often regarded as a designer of mechanical devices, Archimedes also made contributions to the field of mathematics. Plutarch wrote that Archimedes "placed his whole affection and ambition in those purer speculations where there can be no reference to the vulgar needs of life", though some scholars believe this may be a mischaracterization.
Method of exhaustion
Archimedes was able to use indivisibles (a precursor to infinitesimals) in a way that is similar to modern integral calculus. Through proof by contradiction (reductio ad absurdum), he could give answers to problems to an arbitrary degree of accuracy, while specifying the limits within which the answer lay. This technique is known as the method of exhaustion, and he employed it to approximate the areas of figures and the value of π.
In Measurement of a Circle, he did this by drawing a larger regular hexagon outside a circle, then a smaller regular hexagon inside the circle, and progressively doubling the number of sides of each regular polygon, calculating the length of a side of each polygon at each step. As the number of sides increases, it becomes a more accurate approximation of a circle. After four such steps, when the polygons had 96 sides each, he was able to determine that the value of π lay between 3 1/7 (approx. 3.1429) and 3 10/71 (approx. 3.1408), consistent with its actual value of approximately 3.1416. He also proved that the area of a circle is equal to π multiplied by the square of the radius of the circle (πr²).
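The doubling procedure can be reproduced with the standard perimeter recurrences for inscribed and circumscribed regular polygons. A sketch in Python (seeded with trigonometric values for the starting hexagons, whereas Archimedes worked entirely with rational bounds):

import math

def archimedes_pi(doublings=4):
    # Half-perimeters, for a unit circle, of circumscribed and inscribed
    # regular polygons, starting from hexagons (6 sides).
    sides = 6
    outer = sides * math.tan(math.pi / sides)   # circumscribed half-perimeter
    inner = sides * math.sin(math.pi / sides)   # inscribed half-perimeter
    for _ in range(doublings):
        sides *= 2
        outer = 2 * outer * inner / (outer + inner)  # harmonic mean
        inner = math.sqrt(outer * inner)             # geometric mean
    return sides, inner, outer

sides, low, high = archimedes_pi(4)
print(sides, low, high)   # 96-gons: pi lies between about 3.1410 and 3.1427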
Archimedean property
In On the Sphere and Cylinder, Archimedes postulates that any magnitude when added to itself enough times will exceed any given magnitude. Today this is known as the Archimedean property of real numbers.
Archimedes gives the value of the square root of 3 as lying between 265/153 (approximately 1.7320261) and 1351/780 (approximately 1.7320512) in Measurement of a Circle. The actual value is approximately 1.7320508, making this a very accurate estimate. He introduced this result without offering any explanation of how he had obtained it. This aspect of the work of Archimedes caused John Wallis to remark that he was: "as it were of set purpose to have covered up the traces of his investigation as if he had grudged posterity the secret of his method of inquiry while he wished to extort from them assent to his results." It is possible that he used an iterative procedure to calculate these values.
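Whatever procedure he used, the quoted bounds are easy to verify with exact rational arithmetic; a quick check in Python:

from fractions import Fraction

lower = Fraction(265, 153)    # Archimedes' lower bound
upper = Fraction(1351, 780)   # Archimedes' upper bound

# Squaring avoids any floating-point error: x < sqrt(3) < y  iff  x^2 < 3 < y^2.
assert lower ** 2 < 3 < upper ** 2
print(float(lower), float(upper))   # 1.7320261..., 1.7320512...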
The infinite series
In Quadrature of the Parabola, Archimedes proved that the area enclosed by a parabola and a straight line is 4/3 times the area of a corresponding inscribed triangle as shown in the figure at right. He expressed the solution to the problem as an infinite geometric series with the common ratio 1/4:
If the first term in this series is the area of the triangle, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines, and whose third vertex is where the line that is parallel to the parabola's axis and that passes through the midpoint of the base intersects the parabola, and so on. This proof uses a variation of the series 1/4 + 1/16 + 1/64 + ..., which sums to 1/3.
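Numerically, the series behaves as stated; a short sketch (Archimedes, of course, argued the sum geometrically rather than by passing to a limit):

from fractions import Fraction

total = Fraction(0)
term = Fraction(1)            # area of the first inscribed triangle, taken as 1
for _ in range(20):
    total += term
    term /= 4                 # each generation of triangles adds 1/4 of the previous area
print(total, float(total))    # approaches 4/3, the ratio proved in Quadrature of the Parabola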
Myriad of myriads
In The Sand Reckoner, Archimedes set out to calculate a number that was greater than the grains of sand needed to fill the universe. In doing so, he challenged the notion that the number of grains of sand was too large to be counted. He wrote: "There are some, King Gelo, who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited." To solve the problem, Archimedes devised a system of counting based on the myriad. The word itself derives from the Greek murias, for the number 10,000. He proposed a number system using powers of a myriad of myriads (100 million, i.e., 10,000 × 10,000) and concluded that the number of grains of sand required to fill the universe would be 8 vigintillion, or 8×10^63.
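The myriad of myriads (10^8) acts like a base for naming very large numbers. A small sketch that expresses an integer in powers of 10^8, with the 8×10^63 grain count as the example (the function name and layout are illustrative, not Archimedes' terminology):

MYRIAD_MYRIAD = 10 ** 8   # a myriad of myriads

def octads(n):
    """Decompose n as a sum of digit * (10^8)^k, mimicking Archimedes' 'orders'."""
    digits = []
    while n:
        digits.append(n % MYRIAD_MYRIAD)
        n //= MYRIAD_MYRIAD
    return digits   # least-significant octad first

grains = 8 * 10 ** 63
print(octads(grains))   # [0, 0, 0, 0, 0, 0, 0, 80000000] -> 8*10^7 * (10^8)^7 = 8*10^63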
Writings
The works of Archimedes were written in Doric Greek, the dialect of ancient Syracuse. Many written works by Archimedes have not survived or are only extant in heavily edited fragments; at least seven of his treatises are known to have existed due to references made by other authors. Pappus of Alexandria mentions On Sphere-Making and another work on polyhedra, while Theon of Alexandria quotes a remark about refraction from the Catoptrica.
Archimedes made his work known through correspondence with mathematicians in Alexandria. The writings of Archimedes were first collected by the Byzantine Greek architect Isidore of Miletus (), while commentaries on the works of Archimedes written by Eutocius in the same century helped bring his work to a wider audience. Archimedes' work was translated into Arabic by Thābit ibn Qurra (836–901 AD), and into Latin via Arabic by Gerard of Cremona (c. 1114–1187). Direct Greek to Latin translations were later done by William of Moerbeke (c. 1215–1286) and Iacobus Cremonensis (c. 1400–1453).
During the Renaissance, the Editio princeps (First Edition) was published in Basel in 1544 by Johann Herwagen with the works of Archimedes in Greek and Latin.
Surviving works
The following are ordered chronologically based on new terminological and historical criteria set by Knorr (1978) and Sato (1986).
Measurement of a Circle
This is a short work consisting of three propositions. It is written in the form of a correspondence with Dositheus of Pelusium, who was a student of Conon of Samos. In Proposition II, Archimedes gives an approximation of the value of pi (π), showing that it is greater than 223/71 (3.1408...) and less than 22/7 (3.1428...).
The Sand Reckoner
In this treatise, also known as Psammites, Archimedes finds a number that is greater than the grains of sand needed to fill the universe. This book mentions the heliocentric theory of the solar system proposed by Aristarchus of Samos, as well as contemporary ideas about the size of the Earth and the distance between various celestial bodies. By using a system of numbers based on powers of the myriad, Archimedes concludes that the number of grains of sand required to fill the universe is 8×10^63 in modern notation. The introductory letter states that Archimedes' father was an astronomer named Phidias. The Sand Reckoner is the only surviving work in which Archimedes discusses his views on astronomy.
On the Equilibrium of Planes
There are two books to On the Equilibrium of Planes: the first contains seven postulates and fifteen propositions, while the second book contains ten propositions. In the first book, Archimedes proves the law of the lever, which states that: "Magnitudes are in equilibrium at distances reciprocally proportional to their weights."
Archimedes uses the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, parallelograms and parabolas.
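Computationally, the law of the lever says that two weights balance about a fulcrum exactly when their moments (weight times distance) are equal; a tiny sketch with made-up numbers:

def balances(w1, d1, w2, d2, tol=1e-9):
    # Law of the lever: equilibrium when w1*d1 == w2*d2,
    # with distances measured from the fulcrum.
    return abs(w1 * d1 - w2 * d2) <= tol

print(balances(6.0, 2.0, 4.0, 3.0))   # True: 6*2 == 4*3
print(balances(6.0, 2.0, 4.0, 2.0))   # False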
Quadrature of the Parabola
In this work of 24 propositions addressed to Dositheus, Archimedes proves by two methods that the area enclosed by a parabola and a straight line is 4/3 the area of a triangle with equal base and height. He achieves this in one of his proofs by calculating the value of an infinite geometric series with the ratio 1/4.
On the Sphere and Cylinder
In this two-volume treatise addressed to Dositheus, Archimedes obtains the result of which he was most proud, namely the relationship between a sphere and a circumscribed cylinder of the same height and diameter. The volume is 4/3·πr³ for the sphere, and 2πr³ for the cylinder. The surface area is 4πr² for the sphere, and 6πr² for the cylinder (including its two bases), where r is the radius of the sphere and cylinder.
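The 2:3 ratios are quick to confirm numerically; a sketch (the radius is arbitrary):

from math import pi

r = 2.5                       # any radius; the ratios are independent of it
v_sphere = 4 / 3 * pi * r**3
v_cyl = 2 * pi * r**3         # circumscribed cylinder: height 2r, base radius r
a_sphere = 4 * pi * r**2
a_cyl = 6 * pi * r**2         # lateral area 4*pi*r^2 plus two bases of pi*r^2 each

print(v_sphere / v_cyl, a_sphere / a_cyl)   # both print 0.666..., i.e. 2/3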
On Spirals
This work of 28 propositions is also addressed to Dositheus. The treatise defines what is now called the Archimedean spiral. It is the locus of points corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity. Equivalently, in modern polar coordinates (r, θ), it can be described by the equation r = a + bθ with real numbers a and b.
This is an early example of a mechanical curve (a curve traced by a moving point) considered by a Greek mathematician.
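Sampling the modern polar equation reproduces the familiar coils; a sketch in which the constants a and b are arbitrary choices, not values from the treatise:

import math

def archimedean_spiral(a=0.0, b=1.0, turns=3, samples_per_turn=100):
    pts = []
    for k in range(turns * samples_per_turn + 1):
        theta = 2 * math.pi * k / samples_per_turn
        r = a + b * theta                      # r grows linearly with the angle
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

print(archimedean_spiral()[:3])   # first few Cartesian points of the spiral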
On Conoids and Spheroids
This is a work in 32 propositions addressed to Dositheus. In this treatise Archimedes calculates the areas and volumes of sections of cones, spheres, and paraboloids.
On Floating Bodies
There are two books of On Floating Bodies. In the first book, Archimedes spells out the law of equilibrium of fluids and proves that water will adopt a spherical form around a center of gravity. This may have been an attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the existence of a point towards which all things fall in order to derive the spherical shape. Archimedes' principle of buoyancy is given in this work, stated as follows:
Any body wholly or partially immersed in fluid experiences an upthrust equal to, but opposite in direction to, the weight of the fluid displaced.
In the second part, he calculates the equilibrium positions of sections of paraboloids. This was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base under water and the summit above water, similar to the way that icebergs float.
Ostomachion
Also known as Loculus of Archimedes or Archimedes' Box, this is a dissection puzzle similar to a Tangram, and the treatise describing it was found in more complete form in the Archimedes Palimpsest. Archimedes calculates the areas of the 14 pieces which can be assembled to form a square. Reviel Netz of Stanford University argued in 2003 that Archimedes was attempting to determine how many ways the pieces could be assembled into the shape of a square. Netz calculates that the pieces can be made into a square 17,152 ways. The number of arrangements is 536 when solutions that are equivalent by rotation and reflection are excluded. The puzzle represents an example of an early problem in combinatorics.
The origin of the puzzle's name is unclear, and it has been suggested that it is taken from the Ancient Greek word for "throat" or "gullet", stomachos. Ausonius calls the puzzle ostomachion, a Greek compound word formed from the roots of osteon ("bone") and machē ("fight").
The cattle problem
Gotthold Ephraim Lessing discovered this work in a Greek manuscript consisting of a 44-line poem in the Herzog August Library in Wolfenbüttel, Germany in 1773. It is addressed to Eratosthenes and the mathematicians in Alexandria. Archimedes challenges them to count the numbers of cattle in the Herd of the Sun by solving a number of simultaneous Diophantine equations. There is a more difficult version of the problem in which some of the answers are required to be square numbers. A. Amthor first solved this version of the problem in 1880, and the answer is a very large number, approximately 7.760271 × 10^206544.
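Amthor's 1880 solution proceeds via a Pell equation of the form x² − D·y² = 1. The standard continued-fraction method for the smallest solution fits in a few lines; the sketch below runs it on D = 61, a classically difficult small case, rather than on the cattle problem's enormous coefficient:

from math import isqrt

def pell(D):
    """Smallest positive solution of x^2 - D*y^2 = 1 via the continued fraction of sqrt(D)."""
    a0 = isqrt(D)
    if a0 * a0 == D:
        raise ValueError("D must not be a perfect square")
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # convergent numerators
    q_prev, q = 0, 1           # convergent denominators
    while p * p - D * q * q != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return p, q

print(pell(61))   # (1766319049, 226153980), the famously large minimal solution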
The Method of Mechanical Theorems
This treatise was thought lost until the discovery of the Archimedes Palimpsest in 1906. In this work Archimedes uses indivisibles, and shows how breaking up a figure into an infinite number of infinitely small parts can be used to determine its area or volume. He may have considered this method lacking in formal rigor, so he also used the method of exhaustion to derive the results. As with The Cattle Problem, The Method of Mechanical Theorems was written in the form of a letter to Eratosthenes in Alexandria.
Apocryphal works
Archimedes' Book of Lemmas or Liber Assumptorum is a treatise with 15 propositions on the nature of circles. The earliest known copy of the text is in Arabic. T. L. Heath and Marshall Clagett argued that it cannot have been written by Archimedes in its current form, since it quotes Archimedes, suggesting modification by another author. The Lemmas may be based on an earlier work by Archimedes that is now lost.
It has also been claimed that the formula for calculating the area of a triangle from the length of its sides was known to Archimedes, though its first appearance is in the work of Heron of Alexandria in the 1st century AD. Other questionable attributions to Archimedes' work include the Latin poem Carmen de ponderibus et mensuris (4th or 5th century), which describes the use of a hydrostatic balance to solve the problem of the crown, and the 12th-century text Mappae clavicula, which contains instructions on how to perform assaying of metals by calculating their specific gravities.
Archimedes Palimpsest
The foremost document containing Archimedes' work is the Archimedes Palimpsest. In 1906, the Danish professor Johan Ludvig Heiberg visited Constantinople to examine a 174-page goatskin parchment of prayers, written in the 13th century, after reading a short transcription published seven years earlier by Papadopoulos-Kerameus. He confirmed that it was indeed a palimpsest, a document with text that had been written over an erased older work. Palimpsests were created by scraping the ink from existing works and reusing them, a common practice in the Middle Ages, as vellum was expensive. The older works in the palimpsest were identified by scholars as 10th-century copies of previously lost treatises by Archimedes. The parchment spent hundreds of years in a monastery library in Constantinople before being sold to a private collector in the 1920s. On 29 October 1998, it was sold at auction to an anonymous buyer for a total of $2.2 million.
The palimpsest holds seven treatises, including the only surviving copy of On Floating Bodies in the original Greek. It is the only known source of The Method of Mechanical Theorems, referred to by Suidas and thought to have been lost forever. Stomachion was also discovered in the palimpsest, with a more complete analysis of the puzzle than had been found in previous texts. The palimpsest was stored at the Walters Art Museum in Baltimore, Maryland, where it was subjected to a range of modern tests including the use of ultraviolet and X-ray light to read the overwritten text. It has since returned to its anonymous owner.
The treatises in the Archimedes Palimpsest include:
On the Equilibrium of Planes
On Spirals
Measurement of a Circle
On the Sphere and Cylinder
On Floating Bodies
The Method of Mechanical Theorems
Stomachion
Speeches by the 4th century BC politician Hypereides
A commentary on Aristotle's Categories
Other works
Legacy
Sometimes called the father of mathematics and mathematical physics, Archimedes had a wide influence on mathematics and science.
Mathematics and physics
Historians of science and mathematics almost universally agree that Archimedes was the finest mathematician from antiquity. Eric Temple Bell, for instance, wrote:
Likewise, Alfred North Whitehead and George F. Simmons said of Archimedes:
Reviel Netz, Suppes Professor in Greek Mathematics and Astronomy at Stanford University and an expert in Archimedes notes:
Leonardo da Vinci repeatedly expressed admiration for Archimedes, and attributed his invention Architonnerre to Archimedes. Galileo called him "superhuman" and "my master", while Huygens said, "I think Archimedes is comparable to no one", consciously emulating him in his early work. Leibniz said, "He who understands Archimedes and Apollonius will admire less the achievements of the foremost men of later times". Gauss's heroes were Archimedes and Newton, and Moritz Cantor, who studied under Gauss in the University of Göttingen, reported that he once remarked in conversation that "there had been only three epoch-making mathematicians: Archimedes, Newton, and Eisenstein".
The inventor Nikola Tesla praised him, saying:
Honors and commemorations
According to the Italian numismatist and archaeologist Filippo Paruta (1552-1629) and Leonardo Agostini, a scholar from Siena, there was a bronze coin in Sicily with the portrait of Archimedes and a cylinder and sphere as well as his monogram ARMD in Roman script on the reverse. Ivo Schneider described the reverse as “a sphere resting on a base - probably a rough image of one of the planetaria created by Archimedes”. He cites Marcellus as a possible motif for “such an unusual coinage”, who “according to ancient reports, brought two spheres of Archimedes with him to Rome”.
There is a crater on the Moon named Archimedes in his honor, as well as a lunar mountain range, the Montes Archimedes.
The Fields Medal for outstanding achievement in mathematics carries a portrait of Archimedes, along with a carving illustrating his proof on the sphere and the cylinder. The inscription around the head of Archimedes is a quote attributed to 1st century AD poet Manilius, which reads in Latin: Transire suum pectus mundoque potiri ("Rise above oneself and grasp the world").
Archimedes has appeared on postage stamps issued by East Germany (1973), Greece (1983), Italy (1983), Nicaragua (1971), San Marino (1982), and Spain (1963).
The exclamation of Eureka! attributed to Archimedes is the state motto of California. In this instance, the word refers to the discovery of gold near Sutter's Mill in 1848 which sparked the California gold rush.
See also
Concepts
Arbelos
Archimedean point
Archimedes' axiom
Archimedes number
Archimedes paradox
Archimedean solid
Archimedes' twin circles
Methods of computing square roots
Salinon
Steam cannon
People
Diocles
Pseudo-Archimedes
Zhang Heng
References
Notes
Citations
Further reading
Boyer, Carl Benjamin. 1991. A History of Mathematics. New York: Wiley. .
Clagett, Marshall. 1964–1984. Archimedes in the Middle Ages 1–5. Madison, WI: University of Wisconsin Press.
Clagett, Marshall. 1970. "Archimedes". In Charles Coulston Gillispie, ed. Dictionary of Scientific Biography. Vol. 1 (Abailard–Berg). New York: Charles Scribner's Sons. .
Dijksterhuis, Eduard J. 1956. Archimedes. Translated by C. Dikshoorn. Copenhagen: Ejnar Munksgaard. Chapters 1–5 were translated from Archimedes (in Dutch). Groningen: Noordhoff. 1938. Later chapters appeared in Euclides Vols. 15–17, 20. 1938–1944. Reprinted 1987 by Princeton University Press.
Gow, Mary. 2005. Archimedes: Mathematical Genius of the Ancient World. Enslow Publishing. .
Hasan, Heather. 2005. Archimedes: The Father of Mathematics. Rosen Central. .
Heath, Thomas L. 1897. Works of Archimedes. Dover Publications. . Complete works of Archimedes in English.
Netz, Reviel. 2004–2017. The Works of Archimedes: Translation and Commentary. 1–2. Cambridge University Press. Vol. 1: "The Two Books on the Sphere and the Cylinder". . Vol. 2: "On Spirals". .
Netz, Reviel, and William Noel. 2007. The Archimedes Codex. Orion Publishing Group. .
Pickover, Clifford A. 2008. Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press. .
Simms, Dennis L. 1995. Archimedes the Engineer. Continuum International Publishing Group. .
Stein, Sherman. 1999. Archimedes: What Did He Do Besides Cry Eureka?. Mathematical Association of America. .
External links
Heiberg's Edition of Archimedes. Texts in Classical Greek, with some in English.
The Archimedes Palimpsest project at The Walters Art Museum in Baltimore, Maryland
Testing the Archimedes steam cannon
3rd-century BC Greek people
3rd-century BC writers
People from Syracuse, Sicily
Ancient Greek engineers
Ancient Greek inventors
Ancient Greek geometers
Ancient Greek physicists
Hellenistic-era philosophers
Doric Greek writers
Sicilian Greeks
Mathematicians from Sicily
Scientists from Sicily
Ancient Greek murder victims
Ancient Syracusans
Fluid dynamicists
Buoyancy
280s BC births
210s BC deaths
Year of birth uncertain
Year of death uncertain
3rd-century BC mathematicians
3rd-century BC Syracusans | Archimedes | [
"Chemistry"
] | 7,689 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
1,851 | https://en.wikipedia.org/wiki/Antiprism | In geometry, an antiprism or n-gonal antiprism is a polyhedron composed of two parallel direct copies (not mirror images) of an n-sided polygon, connected by an alternating band of 2n triangles. They are represented by the Conway notation An.
Antiprisms are a subclass of prismatoids, and are a (degenerate) type of snub polyhedron.
Antiprisms are similar to prisms, except that the bases are twisted relative to each other, and that the side faces (connecting the bases) are triangles, rather than quadrilaterals.
The dual polyhedron of an n-gonal antiprism is an n-gonal trapezohedron.
History
In his 1619 book Harmonices Mundi, Johannes Kepler observed the existence of the infinite family of antiprisms. This has conventionally been thought of as the first discovery of these shapes, but they may have been known earlier: an unsigned printing block for the net of a hexagonal antiprism has been attributed to Hieronymus Andreae, who died in 1556.
The German form of the word "antiprism" was used for these shapes in the 19th century; Karl Heinze credits its introduction to . Although the English "anti-prism" had been used earlier for an optical prism used to cancel the effects of a primary optical element, the first use of "antiprism" in English in its geometric sense appears to be in the early 20th century in the works of H. S. M. Coxeter.
Special cases
Right antiprism
For an antiprism with regular n-gon bases, one usually considers the case where these two copies are twisted by an angle of 180/n degrees.
The axis of a regular polygon is the line perpendicular to the polygon plane and passing through the polygon centre.
For an antiprism with congruent regular n-gon bases, twisted by an angle of 180/n degrees, more regularity is obtained if the bases have the same axis, i.e. are coaxial; for non-coplanar bases this means that the line connecting the base centers is perpendicular to the base planes. Then the antiprism is called a right antiprism, and its side faces are isosceles triangles.
Uniform antiprism
A uniform n-antiprism has two congruent regular n-gons as base faces, and 2n equilateral triangles as side faces.
Uniform antiprisms form an infinite class of vertex-transitive polyhedra, as do uniform prisms. For n = 2, we have the digonal antiprism (degenerate antiprism), which is visually identical to the regular tetrahedron; for n = 3, the regular octahedron as a triangular antiprism (non-degenerate antiprism).
The Schlegel diagrams of these semiregular antiprisms are as follows:
Cartesian coordinates
Cartesian coordinates for the vertices of a right n-antiprism (i.e. with regular n-gon bases and isosceles triangle side faces, circumradius of the bases equal to 1) are:
(cos(kπ/n), sin(kπ/n), (−1)^k h)
where 0 ≤ k ≤ 2n − 1;
if the n-antiprism is uniform (i.e. if the triangles are equilateral), then:
2h² = cos(π/n) − cos(2π/n).
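A short sketch that generates these vertices for a uniform n-antiprism with circumradius 1, using the half-height h from the relation above:

import math

def uniform_antiprism_vertices(n):
    # 2h^2 = cos(pi/n) - cos(2*pi/n), bases of circumradius 1
    h = math.sqrt((math.cos(math.pi / n) - math.cos(2 * math.pi / n)) / 2)
    return [(math.cos(k * math.pi / n),
             math.sin(k * math.pi / n),
             (-1) ** k * h) for k in range(2 * n)]

for v in uniform_antiprism_vertices(4):   # square antiprism
    print(tuple(round(c, 4) for c in v))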
Volume and surface area
Let a be the edge-length of a uniform n-gonal antiprism; then the volume is:
V = [n √(4cos²(π/(2n)) − 1) sin(3π/(2n)) / (12 sin²(π/n))] · a³
and the surface area is:
A = (n/2) (cot(π/n) + √3) · a².
Furthermore, the volume of a regular right n-gonal antiprism with side length ℓ of its bases and height h is given by:
V = (n h ℓ² / 12) (csc(π/n) + 2 cot(π/n)).
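A quick numerical check of these formulas; for n = 3 the values should match the regular octahedron (V = √2/3·a³ and A = 2√3·a²):

import math

def antiprism_volume(n, a=1.0):
    num = n * math.sqrt(4 * math.cos(math.pi / (2 * n)) ** 2 - 1) * math.sin(3 * math.pi / (2 * n))
    return num / (12 * math.sin(math.pi / n) ** 2) * a ** 3

def antiprism_area(n, a=1.0):
    return n / 2 * (1 / math.tan(math.pi / n) + math.sqrt(3)) * a ** 2

print(antiprism_volume(3), math.sqrt(2) / 3)    # both ~0.4714
print(antiprism_area(3), 2 * math.sqrt(3))      # both ~3.4641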
Derivation
Let the bases have side length ℓ and let the height of the antiprism be h. The circumradius of the horizontal circumcircle of the regular n-gon at the base is
R = ℓ / (2 sin(π/n)).
The vertices at the base are at
(R cos(2kπ/n), R sin(2kπ/n), 0), for k = 0, …, n − 1;
the vertices at the top are at
(R cos((2k + 1)π/n), R sin((2k + 1)π/n), h).
Via linear interpolation, points on the outer triangular edges of the antiprism that connect vertices at the bottom with vertices at the top
are at
(1 − y/h) · (bottom vertex) + (y/h) · (adjacent top vertex)
for altitudes y with 0 ≤ y ≤ h.
By building the sums of the squares of the x and y coordinates of such a point, and writing t = y/h,
the squared circumradius of this section at altitude y is
r(y)² = R² [(1 − t)² + t² + 2t(1 − t) cos(π/n)].
The horizontal section at altitude y above the base is a 2n-gon (a truncated n-gon)
with sides of two alternating lengths, all of its vertices lying on the circle of radius r(y).
(The side lengths are derived from the lengths of the differences of the previous two vectors.)
It can be dissected into 2n isosceles triangles with a common apex on the axis, n of them subtending the central angle of one kind of side and n subtending the central angle of the other.
Summing their areas, the area of the section is
A(y) = (n/2) R² [((1 − t)² + t²) sin(2π/n) + 4t(1 − t) sin(π/n)],
and the volume is
V = ∫ A(y) dy (from y = 0 to y = h) = (n h R² / 3) (sin(2π/n) + sin(π/n)) = (n h ℓ² / 12) (csc(π/n) + 2 cot(π/n)).
Note that the volume of a right n-gonal prism with the same ℓ and h is:
V_prism = (n h ℓ² / 4) cot(π/n),
which is smaller than that of an antiprism.
Symmetry
The symmetry group of a right n-antiprism (i.e. with regular bases and isosceles side faces) is D_nd of order 4n, except in the cases of:
n = 2: the regular tetrahedron, which has the larger symmetry group T_d of order 24, which has three versions of D_2d as subgroups;
n = 3: the regular octahedron, which has the larger symmetry group O_h of order 48, which has four versions of D_3d as subgroups.
The symmetry group contains inversion if and only if n is odd.
The rotation group is D_n of order 2n, except in the cases of:
n = 2: the regular tetrahedron, which has the larger rotation group T of order 12, which has three versions of D_2 as subgroups;
n = 3: the regular octahedron, which has the larger rotation group O of order 24, which has four versions of D_3 as subgroups.
Note: The right n-antiprisms have congruent regular n-gon bases and congruent isosceles triangle side faces, thus have the same (dihedral) symmetry group as the uniform n-antiprism, for n ≥ 4.
Generalizations
In higher dimensions
Four-dimensional antiprisms can be defined as having two dual polyhedra as parallel opposite faces, so that each three-dimensional face between them comes from two dual parts of the polyhedra: a vertex and a dual polygon, or two dual edges. Every three-dimensional convex polyhedron is combinatorially equivalent to one of the two opposite faces of a four-dimensional antiprism, constructed from its canonical polyhedron and its polar dual. However, there exist four-dimensional polychora that cannot be combined with their duals to form five-dimensional antiprisms.
Self-crossing polyhedra
Uniform star antiprisms are named by their star polygon bases, and exist in prograde and in retrograde (crossed) solutions. Crossed forms have intersecting vertex figures, and are denoted by "inverted" fractions: p/(p − q) instead of p/q; example: 5/3 instead of 5/2.
A right star antiprism has two congruent coaxial regular convex or star polygon base faces, and isosceles triangle side faces.
Any star antiprism with regular convex or star polygon bases can be made a right star antiprism (by translating and/or twisting one of its bases, if necessary).
In the retrograde forms, but not in the prograde forms, the triangles joining the convex or star bases intersect the axis of rotational symmetry. Thus:
Retrograde star antiprisms with regular convex polygon bases cannot have all equal edge lengths, and so cannot be uniform. "Exception": a retrograde star antiprism with equilateral triangle bases (vertex configuration: 3.3/2.3.3) can be uniform; but then, it has the appearance of an equilateral triangle: it is a degenerate star polyhedron.
Similarly, some retrograde star antiprisms with regular star polygon bases cannot have all equal edge lengths, and so cannot be uniform. Example: a retrograde star antiprism with regular star 7/5-gon bases (vertex configuration: 3.3.3.7/5) cannot be uniform.
Also, star antiprism compounds with regular star p/q-gon bases can be constructed if p and q have common factors. Example: a star 10/4-antiprism is the compound of two star 5/2-antiprisms.
See also
Grand antiprism, a four-dimensional polytope
Skew polygon, a three-dimensional polygon whose convex hull is an antiprism
References
Further reading
Chapter 2: Archimedean polyhedra, prisms and antiprisms
External links
Nonconvex Prisms and Antiprisms
Paper models of prisms and antiprisms
Uniform polyhedra
Prismatoid polyhedra | Antiprism | [
"Physics"
] | 1,741 | [
"Uniform polytopes",
"Uniform polyhedra",
"Symmetry"
] |
1,884 | https://en.wikipedia.org/wiki/ASCII%20art | ASCII art is a graphic design technique that uses computers for presentation and consists of pictures pieced together from the 95 printable (from a total of 128) characters defined by the ASCII Standard from 1963 and ASCII compliant character sets with proprietary extended characters (beyond the 128 characters of standard 7-bit ASCII). The term is also loosely used to refer to text-based visual art in general. ASCII art can be created with any text editor, and is often used with free-form languages. Most examples of ASCII art require a fixed-width font (non-proportional fonts, as on a traditional typewriter) such as Courier or Consolas for presentation.
Among the oldest known examples of ASCII art are the creations by computer-art pioneer Kenneth Knowlton from around 1966, who was working for Bell Labs at the time. "Studies in Perception I" by Knowlton and Leon Harmon from 1966 shows some examples of their early ASCII art.
ASCII art was invented, in large part, because early printers often lacked graphics ability and thus, characters were used in place of graphic marks. Also, to mark divisions between different print jobs from different users, bulk printers often used ASCII art to print large banner pages, making the division easier to spot so that the results could be more easily separated by a computer operator or clerk. ASCII art was also used in early e-mail when images could not be embedded.
History
Typewriter art
Since 1867, typewriters have been used for creating visual art. Typists could find guides in books or magazines with instructions on how to type portraits or other depictions.
TTY and RTTY
TTY stands for "TeleTYpe" or "TeleTYpewriter", and is also known as Teleprinter or Teletype.
RTTY stands for Radioteletype; character sets such as Baudot code, which predated ASCII, were used. According to a chapter in the "RTTY Handbook", text images have been sent via teletypewriter as early as 1923. However, none of the "old" RTTY art has been discovered yet. What is known is that text images appeared frequently on radioteletype in the 1960s and the 1970s.
Line-printer art
In the 1960s, Andries van Dam published a representation of an electronic circuit produced on an IBM 1403 line printer. At the same time, Kenneth Knowlton was producing realistic images, also on line printers, by overprinting several characters on top of one another.
Note that it was not ASCII art in the sense that the 1403 was driven by an EBCDIC-coded platform and the character sets and trains available on the 1403 were derived from EBCDIC rather than ASCII, despite some glyph commonalities.
ASCII art
The widespread usage of ASCII art can be traced to the computer bulletin board systems of the late 1970s and early 1980s. The limitations of computers of that time period necessitated the use of text characters to represent images. Along with ASCII's use in communication, however, it also began to appear in the underground online art groups of the period.
An ASCII comic is a form of webcomic which uses ASCII text to create images. In place of images in a regular comic, ASCII art is used, with the text or dialog usually placed underneath.
During the 1990s, graphical browsing and variable-width fonts became increasingly popular, leading to a decline in ASCII art. Despite this, ASCII art continued to survive through online MUDs, an acronym for "Multi-User Dungeon", (which are textual multiplayer role-playing video games), Internet Relay Chat, Email, message boards, and other forms of online communication which commonly employ the needed fixed-width.
ASCII art is seen to this day on the CLI app Neofetch, which displays the logo of the OS on which it is invoked.
ANSI
ASCII and, more importantly, ANSI were staples of the early technological era; terminal systems relied on coherent presentation using color and control signals standard in the terminal protocols.
Over the years, warez groups began to enter the ASCII art scene. Warez groups usually release .nfo files with their software, cracks or other general software reverse-engineering releases. The ASCII art will usually include the warez group's name and maybe some ASCII borders on the outsides of the release notes, etc.
BBS systems were based on ASCII and ANSI art, as were most DOS and similar console applications, and the precursor to AOL.
Uses
ASCII art is used wherever text can be more readily printed or transmitted than graphics, or in some cases, where the transmission of pictures is not possible. This includes typewriters, teleprinters, non-graphic computer terminals, printer separators, in early computer networking (e.g., BBSes), email, and Usenet news messages. ASCII art is also used within the source code of computer programs for representation of company or product logos, and flow control or other diagrams. In some cases, the entire source code of a program is a piece of ASCII art – for instance, an entry to one of the earlier International Obfuscated C Code Contest is a program that adds numbers, but visually looks like a binary adder drawn in logic ports.
Some electronic schematic archives represent the circuits using ASCII art.
Examples of ASCII-style art predating the modern computer era can be found in the June 1939, July 1948 and October 1948 editions of Popular Mechanics.
Early computer games played on terminals frequently used ASCII art to simulate graphics, most notably the roguelike genre using ASCII art to visually represent dungeons and monsters within them. "0verkill" is a 2D platform multiplayer shooter game designed entirely in color ASCII art. MPlayer and VLC media player can display videos as ASCII art through the AAlib library. ASCII art is used in the making of DOS-based ZZT games.
Many game walkthrough guides come as part of a basic .txt file; this file often contains the name of the game in ASCII art. In such word art, backslashes and other ASCII characters are used to create the illusion of 3D.
Types and styles
Different techniques could be used in ASCII art to obtain different artistic effects.
"Typewriter-style" lettering, made from individual letter characters:
Line art, for creating shapes:
.--. /\
'--' /__\ (^._.^)~ <(o.o )>
Solid art, for creating filled objects:
.g@8g. db
'Y8@P' d88b
Shading, using symbols with various intensities for creating gradients or contrasts:
:$#$: "4b. ':.
:$#$: "4b. ':.
Combinations of the above, often used as signatures, for example, at the end of an email:
|\_/| **************************** (\_/)
/ @ @ \ * "Purrrfectly pleasant" * (='.'=)
( > º < ) * Poppy Prinz * (")_(")
`>>x<<´ * (pprinz@example.com) *
/ O \ ****************************
As-pixel characters use combinations of ░ , █ , ▄, ▀ (Block Elements), and/or ⣿, ⣴, ⢁, etc. (Braille Patterns) to make pictures:
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠿⠿⠿⢿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⢁⣴⣾⣿⣷⣦⣌⠙⢿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⢁⣴⣿⣿⣿⣿⣿⣿⣿⣷⡈⢻⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⢁⣴⣿⣿⠟⠋⣉⠙⢻⣿⣿⣿⣷⠀⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⢁⣴⣿⣿⠟⢁⣴⣿⣿⡷⢀⣿⣿⣿⡿⠀⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⢁⣴⣿⣿⠟⢁⣴⣿⣿⡿⠋⣠⣾⣿⣿⠟⢁⣼⣿⣿
⣿⣿⣿⣿⣿⣿⣿⠟⢁⣴⣿⣿⠟⢁⣴⣿⣿⡿⠋⣠⣾⣿⣿⠟⢁⣴⣿⣿⣿⣿
⣿⣿⣿⣿⣿⠟⢁⣴⣿⣿⠟⢁⣴⣿⣿⡿⠋⣠⣾⣿⣿⠟⢁⣴⣿⣿⣿⣿⣿⣿
⣿⣿⣿⠟⢁⣴⣿⣿⣿⣿⣶⣿⣿⡿⠋⣠⣾⣿⣿⠟⢁⣴⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⠁⣴⣿⣿⣿⣿⣿⣿⣿⡿⠋⣠⣾⣿⣿⠟⢁⣴⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⠀⢿⣿⣿⣿⣿⣿⡿⠋⣠⣾⣿⣿⠟⢁⣴⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣧⡈⠻⢿⣿⡿⠋⣠⣾⣿⣿⡟⢁⣴⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣷⣶⣶⣶⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
Emoticons
The simplest forms of ASCII art are combinations of two or three characters for expressing emotion in text. They are commonly referred to as 'emoticon', 'smilie', or 'smiley'. There is another type of one-line ASCII art that does not require the mental rotation of pictures, which is widely known in Japan as kaomoji (literally "face characters").
More complex examples use several lines of text to draw large symbols or more complex figures. Hundreds of different text smileys have developed over time, but only a few are generally accepted, used and understood.
ASCII comic
An ASCII comic is a form of webcomic.
The Adventures of Nerd Boy
The Adventures of Nerd Boy, or just Nerd Boy, was an ASCII comic, published by Joaquim Gândara between 5 August 2001 and 17 July 2007, and consisting of 600 strips. They were posted to ASCII art newsgroup alt.ascii-art and on the website. Some strips have been translated to Polish and French.
Styles of the computer underground text art scene
Atari 400/800 ATASCII
The Atari 400/800, which were released in 1979, did not follow the ASCII standard and had their own character set, called ATASCII. The emergence of ATASCII art coincided with the growing popularity of BBS Systems caused by availability of the acoustic couplers that were compatible with the 8-bit home computers. ATASCII text animations are also referred to as "break animations" by the Atari sceners.
C-64 PETSCII
The Commodore 64, which was released in 1982, also did not follow the ASCII standard. The C-64 character set is called PETSCII, an extended form of ASCII-1963. As with the Atari's ATASCII art, C-64 fans developed a similar scene that used PETSCII for their creations.
"Block ASCII" / "High ASCII" style ASCII art on the IBM PC
So-called "block ASCII" or "high ASCII" uses the extended characters of the 8-bit code page 437, which is a proprietary standard introduced by IBM in 1979 (ANSI Standard x3.16) for the IBM PC DOS and MS-DOS operating systems. "Block ASCIIs" were widely used on the PC during the 1990s until the Internet replaced BBSes as the main communication platform. Until then, "block ASCIIs" dominated the PC Text Art Scene.
The first art scene group that focused on the extended character set of the PC in their artwork was called "Aces of ANSI Art", or AAA. Some members left in 1990 and formed a group called "ANSI Creators in Demand", or ACiD. In that same year the second major underground art scene group, "Insane Creators Enterprise", or ICE, was founded.
There is some debate between ASCII and block ASCII artists, with "Hardcore" ASCII artists maintaining that block ASCII art is in fact not ASCII art, because it does not use the 128 characters of the original ASCII standard. On the other hand, block ASCII artists argue that if their art uses only characters of the computer's character set, then it is to be called ASCII, regardless if the character set is proprietary or not.
Microsoft Windows does not support the ANSI Standard x3.16. One can view block ASCIIs with a text editor using the font "Terminal", but it will not look exactly as it was intended by the artist. With a special ASCII/ANSI viewer, such as ACiDView for Windows, one can see block ASCII and ANSI files properly. An example that illustrates the difference in appearance is part of this article. Alternatively, one could look at the file using the TYPE command in the command prompt.
"Amiga"/"Oldskool" style ASCII art
In the art scene one popular ASCII style that used the 7-bit standard ASCII character set was the so-called "Oldskool" style. It is also called "Amiga style", due to its origin and widespread use on Commodore Amiga computers. The style uses primarily the characters _/\-+=.()<>: and looks more like the outlined drawings of shapes than real pictures. The accompanying image is an example of "Amiga style" (also referred to as "old school" or "oldskool" style) scene ASCII art.
The Amiga ASCII scene surfaced in 1992, seven years after the introduction of the Commodore Amiga 1000. The Commodore 64 PETSCII scene did not make the transition to the Commodore Amiga as the C64 demo and warez scenes did. Among the first Amiga ASCII art groups were ART, Epsilon Design, Upper Class, Unreal (later known as "DeZign"). This means that the text art scene on the Amiga was actually younger than the text art scene on the PC. The Amiga artists also did not call their ASCII art style "Oldskool". That term was introduced on the PC; when and by whom is unknown and lost to history.
The Amiga style ASCII artwork was most often released in the form of a single text file, which included all the artwork (usually requested), with some design parts in between, as opposed to the PC art scene where the art work was released as a ZIP archive with separate text files for each piece. Furthermore, the releases were usually called "ASCII collections" and not "art packs" like on the IBM PC.
In text editors
_ ___ _ _
| ___|_ _/ ___| | ___| |_
| |_ | | | _| |/ _ \ __|
| _| | | |_| | | __/ |_
|_| |___\|_|\___|\__|
This kind of ASCII art is handmade in a text editor. Popular editors used to make this kind of ASCII art include Microsoft Notepad, CygnusEditor also known as CED (Amiga), and EditPlus2 (PC).
The accompanying image shows an Oldskool font example done with the ASCII editor FIGlet on a PC.
Newskool style ASCII art
"Newskool" is a popular form of ASCII art which capitalizes on character strings like "$#Xxo". In spite of its name, the style is not "new"; on the contrary, it was very old but fell out of favor and was replaced by "Oldskool" and "Block" style ASCII art. It was dubbed "Newskool" upon its comeback and renewed popularity at the end of the 1990s.
Newskool changed significantly as the result of the introduction of extended proprietary characters. The classic 7-bit standard ASCII characters remain predominant, but the extended characters are often used for "fine tuning" and "tweaking". The style developed further after the introduction and adaptation of Unicode.
Methods for generating ASCII art
While some prefer to use a simple text editor to produce ASCII art, specialized programs, such as JavE, have been developed that often simulate the features and tools in bitmap image editors. For Block ASCII art and ANSI art the artist almost always uses a special text editor, because to generate the required characters on a standard keyboard, one needs to know the Alt code for each character. For example, Alt+178 will produce ▓, Alt+177 will produce ▒, and Alt+8 will produce ◘.
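The mapping from those 8-bit codes to on-screen glyphs is simply code page 437; Python ships a 'cp437' codec, so the correspondence for the shading and block characters can be checked directly (a sketch):

# Code page 437 byte values for the shading and block characters.
for byte, name in [(176, "light shade"), (177, "medium shade"), (178, "dark shade"), (219, "full block")]:
    print(byte, name, bytes([byte]).decode("cp437"))
# 176 light shade ░ / 177 medium shade ▒ / 178 dark shade ▓ / 219 full block █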
The special text editors have sets of special characters assigned to existing keys on the keyboard. Popular DOS-based editors, such as TheDraw and ACiDDraw had multiple sets of different special characters mapped to the function keys to make the use of those characters easier for the artist who can switch between individual sets of characters via basic keyboard shortcuts. PabloDraw is one of the very few special ASCII/ANSI art editors that was developed for Windows.
Image to text conversion
Other programs allow one to automatically convert an image to text characters, which is a special case of vector quantization. A method is to sample the image down to grayscale with less than 8-bit precision, and then assign a character for each value. Such ASCII art generators often allow users to choose the intensity and contrast of the generated image.
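A minimal version of this grayscale-quantization approach, assuming the Pillow imaging library is available (the character ramp, output width and file name are arbitrary choices):

from PIL import Image

RAMP = " .:-=+*#%@"   # darkest characters last; any monotone ramp works

def image_to_ascii(path, width=80):
    img = Image.open(path).convert("L")                 # 8-bit grayscale
    # Halve the height to compensate for character cells being taller than wide.
    height = max(1, int(img.height * width / img.width / 2))
    img = img.resize((width, height))
    lines = []
    for y in range(height):
        row = ""
        for x in range(width):
            level = img.getpixel((x, y))                # 0 (black) .. 255 (white)
            row += RAMP[(255 - level) * (len(RAMP) - 1) // 255]
        lines.append(row)
    return "\n".join(lines)

# print(image_to_ascii("photo.jpg"))   # hypothetical input file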
Three factors limit the fidelity of the conversion, especially of photographs:
depth (solutions: reduced line spacing; bold style; block elements; colored background; good shading);
sharpness (solutions: a longer text, with a smaller font; a greater set of characters; variable width fonts);
ratio (solutions with compatibility issues: font with a square grid; stylized without extra line spacing).
Examples of converted images are given below.
This is one of the earliest forms of ASCII art, dating back to the early days of the 1960s minicomputers and teletypes. During the 1970s, it was popular in US malls to get a t-shirt with a photograph printed in ASCII art on it from an automated kiosk containing a computer, and London's Science Museum had a similar service to produce printed portraits. With the advent of the web, HTML and CSS, many ASCII conversion programs will now quantize to a full RGB colorspace, enabling colorized ASCII images.
Still images or movies can also be converted to ASCII on various UNIX and UNIX-like systems using the AAlib (black and white) or libcaca (colour) graphics device driver, or the VLC media player or mpv under Windows, Linux or macOS; all of which render the screen using ASCII symbols instead of pixels.
There are also a number of smartphone applications, such as ASCII cam for Android, that generate ASCII art in real-time using input from the phone's camera. These applications typically allow the ASCII art to be saved as either a text file or as an image made up of ASCII text.
Non fixed-width ASCII
Most ASCII art is created using a monospaced font, such as Courier, where all characters are identical in width. Early computers in use when ASCII art came into vogue had monospaced fonts for screen and printer displays. Today, most of the more commonly used fonts in word processors, web browsers and other programs are proportional fonts, such as Helvetica or Times Roman, where different widths are used for different characters. ASCII art drawn for a fixed width font will usually appear distorted, or even unrecognizable when displayed in a proportional font.
Some ASCII artists have produced art for display in proportional fonts. These ASCIIs, rather than using a purely shade-based correspondence, use characters for slopes and borders and use block shading. These ASCIIs generally offer greater precision and attention to detail than fixed-width ASCIIs for a lower character count, although they are not as universally accessible since they are usually relatively font-specific.
Animated ASCII art
Animated ASCII art started in 1970 from so-called VT100 animations produced on VT100 terminals. These animations were simply text with cursor movement instructions, deleting and erasing the characters necessary to appear animated. Usually, they represented a long hand-crafted process undertaken by a single person to tell a story.
Contemporary web browsers revitalized animated ASCII art. It became possible to display animated ASCII art via JavaScript or Java applets. Static ASCII art pictures are loaded and displayed one after another, creating the animation, very similar to how movie projectors unreel film and project the individual pictures on the big screen at movie theaters. A new term was born: "ASCIImation" – another name for animated ASCII art. A seminal work in this arena is the Star Wars ASCIImation. More complicated routines in JavaScript generate more elaborate ASCIImations showing effects like morphing, star field emulations, fading effects and calculated images, such as Mandelbrot fractal animations.
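On a VT100 or any ANSI-capable terminal, the underlying trick is simply to clear the screen or re-home the cursor between textual frames; a minimal sketch:

import sys
import time

FRAMES = [r"( o)>", r"( o)v", r"( o)<", r"( o)^"]   # trivial example frames

def asciimate(cycles=3, delay=0.2):
    for _ in range(cycles):
        for frame in FRAMES:
            sys.stdout.write("\x1b[2J\x1b[H")   # ANSI: clear screen, cursor to home
            sys.stdout.write(frame + "\n")
            sys.stdout.flush()
            time.sleep(delay)

if __name__ == "__main__":
    asciimate()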
There are now many tools and programs that can transform raster images into text symbols; some of these tools can operate on streaming video. For example, the music video for American singer Beck's song "Black Tambourine" is made up entirely of ASCII characters that approximate the original footage. VLC, a media player software, can render any video in colored ASCII through the libcaca module.
Other text-based visual art
There are a variety of other types of art using text symbols from character sets other than ASCII and/or some form of color coding. Despite not being pure ASCII, these are still often referred to as "ASCII art". The character set portion designed specifically for drawing is known as the line drawing characters or pseudo-graphics.
ANSI art
The IBM PC graphics hardware in text mode uses 16 bits per character. It supports a variety of configurations, but in its default mode under DOS they are used to give 256 glyphs from one of the IBM PC code pages (Code page 437 by default), 16 foreground colors, eight background colors, and a flash option. Such art can be loaded into screen memory directly. ANSI.SYS, if loaded, also allows such art to be placed on screen by outputting escape sequences that indicate movements of the screen cursor and color/flash changes. If this method is used then the art becomes known as ANSI art. The IBM PC code pages also include characters intended for simple drawing which often made this art appear much cleaner than that made with more traditional character sets. Plain text files are also seen with these characters, though they have become far less common since Windows GUI text editors (using the Windows ANSI code page) have largely replaced DOS-based ones.
Shift_JIS and Japan
In Japan, ASCII art (AA) is mainly known as Shift_JIS art. Shift JIS offers a larger selection of characters than plain ASCII (including characters from Japanese scripts and fullwidth forms of ASCII characters), and may be used for text-based art on Japanese websites.
Often, such artwork is designed to be viewed with the default Japanese font on a platform, such as the proportional MS P Gothic.
Kaomoji
Users on ASCII-NET, in which the word ASCII refers to the ASCII Corporation rather than the American Standard Code for Information Interchange, popularised a style of emoticon (kaomoji) in which the face appears upright rather than rotated.
Unicode
Unicode would seem to offer the ultimate flexibility in producing text-based art with its huge variety of characters. However, finding a suitable fixed-width font is likely to be difficult if a significant subset of Unicode is desired. (Modern UNIX-style operating systems do provide complete fixed-width Unicode fonts, e.g. for xterm. Windows has the Courier New font, which includes characters like ♥☺.) Also, the common practice of rendering Unicode with a mixture of variable-width fonts is likely to make predictable display hard if more than a tiny subset of Unicode is used, although a simple kaomoji such as a cat's face can still be adequately represented even in a font with varying character widths.
Control and combining characters
The combining characters mechanism of Unicode provides considerable scope for customizing the style of text, and even for obfuscating it (e.g. via an online generator such as Obfuscator, which applies such filters). Glitcher, initiated in 2012, is one example of Unicode art: symbols that intrude above and below the line are produced by stacking large numbers of combining diacritical marks, and a number of artists use the Internet or specific social networks as their canvas in this way. Such creations display well in modern web browsers (whose support for combining marks keeps improving) and are popular as stylized usernames on social networks. Various online tools, such as the [Facebook symbols] generators, showcase different types of Unicode art, mainly for aesthetic purposes (Ɯıḳĭƥḙȡḯả Wîkipêȡıẚ Ẉǐḳîṗȅḍȉā Ẃįḵįṗẻḑìẵ Ẉĭḵɪṕḗdïą Ẇïƙỉpểɗĭà Ẅȉḱïṕȩđĩẵ etc.). The creations can also be hand-crafted by programming, or pasted from mobile applications (e.g. the category of 'fancy text' tools on Android). The underlying technique dates back to older systems that used control characters: the German composite ö, for example, could be imitated on the ZX Spectrum by printing o, then a backspace, then a quotation mark over it.
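A minimal sketch of the stacking technique behind such "glitch" text: combining diacritical marks are appended to each base character so that they pile up above and below the line. The particular mark ranges and counts are illustrative, not those used by Glitcher or any specific generator.

```python
import random

# A few Unicode combining marks that stack above and below the base letter;
# the selection here is illustrative, real generators draw from larger sets.
ABOVE = [chr(c) for c in range(0x0300, 0x0315)]
BELOW = [chr(c) for c in range(0x0316, 0x0330)]

def glitch(text, intensity=4):
    """Append random combining diacritics to each character so the marks
    'intrude' above and below the line of text."""
    out = []
    for ch in text:
        out.append(ch)
        for _ in range(random.randint(1, intensity)):
            out.append(random.choice(ABOVE + BELOW))
    return "".join(out)

print(glitch("ASCII art"))
```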
Overprinting (surprint)
In the 1970s and early 1980s it was popular to produce a kind of text art that relied on overprinting. This could be produced either on a screen or on a printer by typing a character, backing up, and then typing another character, just as on a typewriter. This developed into sophisticated graphics in some cases, such as the PLATO system (circa 1973), where superscript and subscript allowed a wide variety of graphic effects. A common use was for emoticons, with WOBTAX and VICTORY both producing convincing smiley faces. Overprinting had previously been used on typewriters, but the low-resolution pixelation of characters on video terminals meant that overprinting here produced seamless pixel graphics, rather than visibly overstruck combinations of letters on paper.
Beyond pixel graphics, this was also used for printing photographs, as the overall darkness of a particular character space depended on how many characters, as well as which characters, were printed in a particular place. Thanks to the increased granularity of tone, photographs were often converted to this type of printout; even manual typewriters or daisy-wheel printers could be used. The technique has fallen from popularity since even cheap printers can easily print photographs, and a normal text file (or an e-mail message or Usenet posting) cannot represent overprinted text. However, something similar has emerged to replace it: shaded or colored ASCII art, using ANSI video terminal markup or color codes (such as those found in HTML, IRC, and many internet message boards) to add a bit more tone variation. In this way, it is possible to create ASCII art where the characters differ only in color.
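A toy sketch of the idea that tone comes from both the choice of characters and the number of strikes in the same cell; the ink-coverage values and combinations are purely illustrative.

```python
# Rough ink coverage of a few characters (illustrative values, not measured),
# and overstrike combinations built from them.
INK = {" ": 0.0, ".": 0.1, "o": 0.3, "X": 0.5, "M": 0.6}

def overstrike(chars):
    """Model a cell printed several times: coverage adds up but saturates at 1."""
    return min(1.0, sum(INK[c] for c in chars))

def pick_overstrike(darkness,
                    combos=((" ",), (".",), ("o",), ("X",), ("X", "o"), ("M", "X", "o"))):
    """Choose the combination whose combined coverage best matches a target tone."""
    return min(combos, key=lambda c: abs(overstrike(c) - darkness))

print(pick_overstrike(0.75))  # -> a multi-character overstrike such as ('X', 'o')
```

On a printer, the chosen combination would be rendered by typing one character, backspacing, and typing the next in the same position.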
See also
Micrography
Types and styles: Alt code, ASCII stereogram, box-drawing characters, emoticon, FILE_ID.DIZ, .nfo (release info file)
Pre-ASCII history: Calligram, Concrete poetry, Typewriter, Typewriter mystery game, Teleprinter, Radioteletype
Related art: ANSI art, ASCII porn, ATASCII, Fax art, PETSCII, Shift JIS art, Text semigraphics
Related context: Bulletin board system (BBS), Computer art scene, :Category:Artscene groups
Software: AAlib, cowsay
Unicode: Homoglyph, Duplicate characters in Unicode
References
Further reading
External links
media4u.ch - ASCII Art (ASCII Art Movie. The Matrix in ASCII Art)
TexArt.io ASCII Art collection
Textfiles.com archive
Sixteen Colors ANSI Art and ASCII Art Archive
Defacto2.net Scene NFO Files Archive
Chris.com ASCII art collection
"As-Pixel Characters" ASCII art collection
ASCII Art Animation of Star Wars, "ASCIIMATION"
ASCII Keyboard Art Collection
Animasci
Video to ASCII Demonstration in 4 stages
Computer art
Digital art
New media art
Internet art
Multimedia
Wikipedia articles with ASCII art | ASCII art | [
"Technology"
] | 5,865 | [
"Multimedia"
] |
1,908 | https://en.wikipedia.org/wiki/Abzyme | An abzyme (from antibody and enzyme), also called catmab (from catalytic monoclonal antibody), and most often called catalytic antibody or sometimes catab, is a monoclonal antibody with catalytic activity. Abzymes are usually raised in lab animals immunized against synthetic haptens, but some natural abzymes can be found in normal humans (anti-vasoactive intestinal peptide autoantibodies) and in patients with autoimmune diseases such as systemic lupus erythematosus, where they can bind to and hydrolyze DNA. To date abzymes display only weak, modest catalytic activity and have not proved to be of any practical use. They are, however, subjects of considerable academic interest. Studying them has yielded important insights into reaction mechanisms, enzyme structure and function, catalysis, and the immune system itself.
Enzymes function by lowering the activation energy of the transition state of a chemical reaction, thereby enabling the formation of an otherwise less-favorable molecular intermediate between the reactant(s) and the product(s). If an antibody is developed to bind to a molecule that is structurally and electronically similar to the transition state of a given chemical reaction, the developed antibody will bind to, and stabilize, the transition state, just like a natural enzyme, lowering the activation energy of the reaction, and thus catalyzing the reaction. By raising an antibody to bind to a stable transition-state analog, a new and unique type of enzyme is produced.
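As a rough quantitative sketch (using the generic Arrhenius relation, not a result specific to abzymes), stabilizing the transition state by an energy ΔE_a multiplies the rate constant by:

```latex
k = A\,e^{-E_a/RT}
\qquad\Longrightarrow\qquad
\frac{k_{\text{cat}}}{k_{\text{uncat}}} = \exp\!\left(\frac{\Delta E_a}{RT}\right)
```

At room temperature RT is roughly 2.5 kJ/mol, so each ~6 kJ/mol of transition-state stabilization corresponds to about a tenfold rate increase, which gives a sense of how much stabilization is needed for enzyme-like rate enhancements.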
So far, all catalytic antibodies produced have displayed only modest, weak catalytic activity. The reasons for the low catalytic activity of these molecules have been widely discussed; proposed explanations suggest that factors beyond the binding site may play an important role, in particular protein dynamics. Some abzymes have been engineered to use metal ions and other cofactors to improve their catalytic activity.
History
The possibility of catalyzing a reaction by means of an antibody which binds the transition state was first suggested by William P. Jencks in 1969. In 1994 Peter G. Schultz and Richard A. Lerner received the prestigious Wolf Prize in Chemistry for developing catalytic antibodies for many reactions and popularizing their study into a significant sub-field of enzymology.
Abzymes in healthy human breast milk
There are a broad range of abzymes in healthy human breast milk with DNAse, RNAse, and protease activity.
Potential HIV treatment
In a June 2008 issue of the journal Autoimmunity Reviews, researchers S. Planque, Sudhir Paul, Ph.D., and Yasuhiro Nishiyama, Ph.D. of the University of Texas Medical School at Houston announced that they had engineered an abzyme that degrades the superantigenic region of the gp120 CD4 binding site. This is the one part of the HIV outer coating that does not change, because it is the attachment point to T lymphocytes, the key cell in cell-mediated immunity. Once infected by HIV, patients produce antibodies to the more changeable parts of the viral coat; these antibodies are ineffective because of the virus's ability to change its coat rapidly. Because the gp120 protein is necessary for HIV to attach, it does not change across different strains and is a point of vulnerability across the entire range of the HIV variant population.
The abzyme does more than bind to the site: it catalytically destroys the site, rendering the virus inert, and then can attack other HIV viruses. A single abzyme molecule can destroy thousands of HIV viruses.
References
Monoclonal antibodies
Immune system
Enzymes | Abzyme | [
"Biology"
] | 735 | [
"Immune system",
"Organ systems"
] |
1,909 | https://en.wikipedia.org/wiki/Adaptive%20radiation | In evolutionary biology, adaptive radiation is a process in which organisms diversify rapidly from an ancestral species into a multitude of new forms, particularly when a change in the environment makes new resources available, alters biotic interactions or opens new environmental niches. Starting with a single ancestor, this process results in the speciation and phenotypic adaptation of an array of species exhibiting different morphological and physiological traits. The prototypical example of adaptive radiation is finch speciation on the Galapagos ("Darwin's finches"), but examples are known from around the world.
Characteristics
Four features can be used to identify an adaptive radiation:
A common ancestry of component species: specifically a recent ancestry. Note that this is not the same as a monophyly in which all descendants of a common ancestor are included.
A phenotype-environment correlation: a significant association between environments and the morphological and physiological traits used to exploit those environments.
Trait utility: the performance or fitness advantages of trait values in their corresponding environments.
Rapid speciation: presence of one or more bursts in the emergence of new species around the time that ecological and phenotypic divergence is underway.
Conditions
Adaptive radiations are thought to be triggered by an ecological opportunity or a new adaptive zone. Sources of ecological opportunity can be the loss of antagonists (competitors or predators), the evolution of a key innovation, or dispersal to a new environment. Any one of these ecological opportunities has the potential to result in an increase in population size and relaxed stabilizing (constraining) selection. As genetic diversity is positively correlated with population size the expanded population will have more genetic diversity compared to the ancestral population. With reduced stabilizing selection phenotypic diversity can also increase. In addition, intraspecific competition will increase, promoting divergent selection to use a wider range of resources. This ecological release provides the potential for ecological speciation and thus adaptive radiation.
Occupying a new environment might take place under the following conditions:
A new habitat has opened up: a volcano, for example, can create new ground in the middle of the ocean. This is the case in places like Hawaii and the Galapagos. For aquatic species, the formation of a large new lake habitat could serve the same purpose; the tectonic movement that formed the East African Rift, ultimately leading to the creation of the Rift Valley Lakes, is an example of this. An extinction event could effectively achieve this same result, opening up niches that were previously occupied by species that no longer exist.
This new habitat is relatively isolated. When a volcano erupts on the mainland and destroys an adjacent forest, it is likely that the terrestrial plant and animal species that used to live in the destroyed region will recolonize without evolving greatly. However, if a newly formed habitat is isolated, the species that colonize it will likely be somewhat random and uncommon arrivals.
The new habitat has a wide availability of niche space. The rare colonist can only adaptively radiate into as many forms as there are niches.
Relationship between mass-extinctions and mass adaptive radiations
A 2020 study found there to be no direct causal relationship between the proportionally most comparable mass radiations and extinctions in terms of "co-occurrence of species", substantially challenging the hypothesis of "creative mass extinctions".
Examples
Darwin's finches
Darwin's finches on the Galapagos Islands are a model system for the study of adaptive radiation. Today represented by approximately 15 species, Darwin's finches are Galapagos endemics famously adapted for a specialized feeding behavior (although one species, the Cocos finch (Pinaroloxias inornata), is not found in the Galapagos but on the island of Cocos south of Costa Rica). Darwin's finches are not actually finches in the true sense, but are members of the tanager family Thraupidae, and are derived from a single ancestor that arrived in the Galapagos from mainland South America perhaps just 3 million years ago. Excluding the Cocos finch, each species of Darwin's finch is generally widely distributed in the Galapagos and fills the same niche on each island. For the ground finches, this niche is a diet of seeds, and they have thick bills to facilitate the consumption of these hard materials. The ground finches are further specialized to eat seeds of a particular size: the large ground finch (Geospiza magnirostris) is the largest species of Darwin's finch and has the thickest beak for breaking open the toughest seeds, the small ground finch (Geospiza fuliginosa) has a smaller beak for eating smaller seeds, and the medium ground finch (Geospiza fortis) has a beak of intermediate size for optimal consumption of intermediately sized seeds (relative to G. magnirostris and G. fuliginosa). There is some overlap: for example, the most robust medium ground finches could have beaks larger than those of the smallest large ground finches. Because of this overlap, it can be difficult to tell the species apart by eye, though their songs differ. These three species often occur sympatrically, and during the rainy season in the Galapagos when food is plentiful, they specialize little and eat the same, easily accessible foods. It was not well-understood why their beaks were so adapted until Peter and Rosemary Grant studied their feeding behavior in the long dry season, and discovered that when food is scarce, the ground finches use their specialized beaks to eat the seeds that they are best suited to eat and thus avoid starvation.
The other finches in the Galapagos are similarly uniquely adapted for their particular niche. The cactus finches (Geospiza sp.) have somewhat longer beaks than the ground finches that serve the dual purpose of allowing them to feed on Opuntia cactus nectar and pollen while these plants are flowering, but on seeds during the rest of the year. The warbler-finches (Certhidea sp.) have short, pointed beaks for eating insects. The woodpecker finch (Camarhynchus pallidus) has a slender beak which it uses to pick at wood in search of insects; it also uses small sticks to reach insect prey inside the wood, making it one of the few animals that use tools.
The mechanism by which the finches initially diversified is still an area of active research. One proposition is that the finches were able to have a non-adaptive, allopatric speciation event on separate islands in the archipelago, such that when they reconverged on some islands, they were able to maintain reproductive isolation. Once they occurred in sympatry, niche specialization was favored so that the different species competed less directly for resources. This second, sympatric event was adaptive radiation.
Cichlids of the African Great Lakes
The haplochromine cichlid fishes in the Great Lakes of the East African Rift (particularly in Lake Tanganyika, Lake Malawi, and Lake Victoria) form the most speciose modern example of adaptive radiation. These lakes are believed to be home to about 2,000 different species of cichlid, spanning a wide range of ecological roles and morphological characteristics. Cichlids in these lakes fill nearly all of the roles typically filled by many fish families, including those of predators, scavengers, and herbivores, with varying dentitions and head shapes to match their dietary habits. In each case, the radiation events are only a few million years old, making the high level of speciation particularly remarkable. Several factors could be responsible for this diversity: the availability of a multitude of niches probably favored specialization, as few other fish taxa are present in the lakes (meaning that sympatric speciation was the most probable mechanism for initial specialization). Also, continual changes in the water level of the lakes during the Pleistocene (which often turned the largest lakes into several smaller ones) could have created the conditions for secondary allopatric speciation.
Tanganyika cichlids
Lake Tanganyika is the site from which nearly all the cichlid lineages of East Africa (including both riverine and lake species) originated. Thus, the species in the lake constitute a single adaptive radiation event but do not form a single monophyletic clade. Lake Tanganyika is also the least speciose of the three largest African Great Lakes, with only around 200 species of cichlid; however, these cichlids are more morphologically divergent and ecologically distinct than their counterparts in lakes Malawi and Victoria, an artifact of Lake Tanganyika's older cichlid fauna. Lake Tanganyika itself is believed to have formed 9–12 million years ago, putting a recent cap on the age of the lake's cichlid fauna. Many of Tanganyika's cichlids live very specialized lifestyles. The giant or emperor cichlid (Boulengerochromis microlepis) is a piscivore often ranked the largest of all cichlids (though it competes for this title with South America's Cichla temensis, the speckled peacock bass). It is thought that giant cichlids spawn only a single time, breeding in their third year and defending their young until they reach a large size, before dying of starvation some time thereafter. The three species of Altolamprologus are also piscivores, but with laterally compressed bodies and thick scales enabling them to chase prey into thin cracks in rocks without damaging their skin. Plecodus straeleni has evolved large, strangely curved teeth that are designed to scrape scales off of the sides of other fish, scales being its main source of food. Gnathochromis permaxillaris possesses a large mouth with a protruding upper lip, and feeds by opening this mouth downward onto the sandy lake bottom, sucking in small invertebrates. A number of Tanganyika's cichlids are shell-brooders, meaning that mating pairs lay and fertilize their eggs inside of empty shells on the lake bottom. Lamprologus callipterus is a unique egg-brooding species, with 15 cm-long males amassing collections of shells and guarding them in the hopes of attracting females (about 6 cm in length) to lay eggs in these shells. These dominant males must defend their territories from three types of rival: (1) other dominant males looking to steal shells; (2) younger, "sneaker" males looking to fertilize eggs in a dominant male's territory; and (3) tiny, 2–4 cm "parasitic dwarf" males that also attempt to rush in and fertilize eggs in the dominant male's territory. These parasitic dwarf males never grow to the size of dominant males, and the male offspring of dominant and parasitic dwarf males grow with 100% fidelity into the form of their fathers. A number of other highly specialized Tanganyika cichlids exist aside from these examples, including those adapted for life in open lake water up to 200m deep.
Malawi cichlids
The cichlids of Lake Malawi constitute a "species flock" of up to 1000 endemic species. Only seven cichlid species in Lake Malawi are not a part of the species flock: the Eastern happy (Astatotilapia calliptera), the sungwa (Serranochromis robustus), and five tilapia species (genera Oreochromis and Coptodon). All of the other cichlid species in the lake are descendants of a single original colonist species, which itself was descended from Tanganyikan ancestors. The common ancestor of Malawi's species flock is believed to have reached the lake 3.4 million years ago at the earliest, making Malawi cichlids' diversification into their present numbers particularly rapid. Malawi's cichlids span a similar range of feeding behaviors to those of Tanganyika, but also show signs of a much more recent origin. For example, all members of the Malawi species flock are mouth-brooders, meaning the female keeps her eggs in her mouth until they hatch; in almost all species, the eggs are also fertilized in the female's mouth, and in a few species, the females continue to guard their fry in their mouth after they hatch. Males of most species display predominantly blue coloration when mating. However, a number of particularly divergent species are known from Malawi, including the piscivorous Nimbochromis livingstonii, which lies on its side in the substrate until small cichlids, perhaps drawn to its broken white patterning, come to inspect the predator, at which point they are swiftly eaten.
Victoria's cichlids
Lake Victoria's cichlids are also a species flock, once composed of some 500 or more species. The deliberate introduction of the Nile Perch (Lates niloticus) in the 1950s proved disastrous for Victoria cichlids, and the collective biomass of the Victoria cichlid species flock has decreased substantially and an unknown number of species have become extinct. However, the original range of morphological and behavioral diversity seen in the lake's cichlid fauna is still mostly present today, if endangered. These again include cichlids specialized for niches across the trophic spectrum, as in Tanganyika and Malawi, but again, there are standouts. Victoria is famously home to many piscivorous cichlid species, some of which feed by sucking the contents out of mouthbrooding females' mouths. Victoria's cichlids constitute a far younger radiation than even that of Lake Malawi, with estimates of the age of the flock ranging from 200,000 years to as little as 14,000.
Adaptive radiation in Hawaii
Hawaii has served as the site of a number of adaptive radiation events, owing to its isolation, recent origin, and large land area. The three most famous examples of these radiations are presented below, though insects like the Hawaiian drosophilid flies and Hyposmocoma moths have also undergone adaptive radiation.
Hawaiian honeycreepers
The Hawaiian honeycreepers form a large, highly morphologically diverse species group of birds that began radiating in the early days of the Hawaiian archipelago. While today only 17 species are known to persist in Hawaii (3 more may or may not be extinct), there were more than 50 species prior to Polynesian colonization of the archipelago (between 18 and 21 species have gone extinct since the discovery of the islands by westerners). The Hawaiian honeycreepers are known for their beaks, which are specialized to satisfy a wide range of dietary needs: for example, the beak of the ʻakiapōlāʻau (Hemignathus wilsoni) is characterized by a short, sharp lower mandible for scraping bark off of trees, and the much longer, curved upper mandible is used to probe the wood underneath for insects. Meanwhile, the ʻiʻiwi (Drepanis coccinea) has a very long curved beak for reaching nectar deep in Lobelia flowers. An entire clade of Hawaiian honeycreepers, the tribe Psittirostrini, is composed of thick-billed, mostly seed-eating birds, like the Laysan finch (Telespiza cantans). In at least some cases, similar morphologies and behaviors appear to have evolved convergently among the Hawaiian honeycreepers; for example, the short, pointed beaks of Loxops and Oreomystis evolved separately despite once forming the justification for lumping the two genera together. The Hawaiian honeycreepers are believed to have descended from a single common ancestor some 15 to 20 million years ago, though estimates range as low as 3.5 million years.
Hawaiian silverswords
Adaptive radiation is not a strictly vertebrate phenomenon, and examples are also known from among plants. The most famous example of adaptive radiation in plants is quite possibly the Hawaiian silverswords, named for alpine desert-dwelling Argyroxiphium species with long, silvery leaves that live for up to 20 years before growing a single flowering stalk and then dying. The Hawaiian silversword alliance consists of twenty-eight species of Hawaiian plants which, aside from the namesake silverswords, includes trees, shrubs, vines, cushion plants, and more. The silversword alliance is believed to have originated in Hawaii no more than 6 million years ago, making this one of Hawaii's youngest adaptive radiation events. This means that the silverswords evolved on Hawaii's modern high islands, and descended from a single common ancestor that arrived on Kauai from western North America. The closest modern relatives of the silverswords today are California tarweeds of the family Asteraceae.
Hawaiian lobelioids
Hawaii is also the site of a separate major floral adaptive radiation event: the Hawaiian lobelioids. The Hawaiian lobelioids are significantly more speciose than the silverswords, perhaps because they have been present in Hawaii for so much longer: they descended from a single common ancestor that arrived in the archipelago up to 15 million years ago. Today the Hawaiian lobelioids form a clade of over 125 species, including succulents, trees, shrubs, epiphytes, etc. Many species have been lost to extinction and many of the surviving species are endangered.
Caribbean anoles
Anole lizards are distributed broadly in the New World, from the Southeastern US to South America. With over 400 species currently recognized, often placed in a single genus (Anolis), they constitute one of the largest radiation events among all lizards. Anole radiation on the mainland has largely been a process of speciation, and is not adaptive to any great degree, but anoles on each of the Greater Antilles (Cuba, Hispaniola, Puerto Rico, and Jamaica) have adaptively radiated in separate, convergent ways. On each of these islands, anoles have evolved with such a consistent set of morphological adaptations that each species can be assigned to one of six "ecomorphs": trunk–ground, trunk–crown, grass–bush, crown–giant, twig, and trunk. Take for example crown–giants from each of these islands: the Cuban Anolis luteogularis, Hispaniola's Anolis ricordii, Puerto Rico's Anolis cuvieri, and Jamaica's Anolis garmani (Cuba and Hispaniola are both home to more than one species of crown–giant). These anoles are all large, canopy-dwelling species with large heads and large lamellae (scales on the undersides of the fingers and toes that are important for traction in climbing), and yet none of these species are particularly closely related and appear to have evolved these similar traits independently. The same can be said of the other five ecomorphs across the Caribbean's four largest islands. Much like in the case of the cichlids of the three largest African Great Lakes, each of these islands is home to its own convergent Anolis adaptive radiation event.
Other examples
Presented above are the most well-documented examples of modern adaptive radiation, but other examples are known. Populations of three-spined sticklebacks have repeatedly diverged and evolved into distinct ecotypes. On Madagascar, birds of the family Vangidae are marked by very distinct beak shapes to suit their ecological roles. Madagascan mantellid frogs have radiated into forms that mirror other tropical frog faunas, with the brightly colored mantellas (Mantella) having evolved convergently with the Neotropical poison dart frogs of Dendrobatidae, while the arboreal Boophis species are the Madagascan equivalent of tree frogs and glass frogs. The pseudoxyrhophiine snakes of Madagascar have evolved into fossorial, arboreal, terrestrial, and semi-aquatic forms that converge with the colubroid faunas in the rest of the world. These Madagascan examples are significantly older than most of the other examples presented here: Madagascar's fauna has been evolving in isolation since the island split from India some 88 million years ago, and the Mantellidae originated around 50 mya. Older examples are known: the K-Pg extinction event, which caused the disappearance of the dinosaurs and most other reptilian megafauna 65 million years ago, is seen as having triggered a global adaptive radiation event that created the mammal diversity that exists today. The Cambrian explosion is another example, in which vacant niches left by the extinction of the Ediacaran biota during the end-Ediacaran mass extinction were filled by the emergence of new phyla.
See also
Cambrian explosion—the most notable evolutionary radiation event
Evolutionary radiation—a more general term to describe any radiation
List of adaptive radiated Hawaiian honeycreepers by form
List of adaptive radiated marsupials by form
Nonadaptive radiation
References
Further reading
Wilson, E.; Eisner, T.; Briggs, W.; Dickerson, R.; Metzenberg, R.; O'Brien, R.; Susman, M.; Boggs, W. (1974). Life on Earth. Sinauer Associates, Inc., Publishers, Stamford, Connecticut. Chapters "The Multiplication of Species" and "Biogeography", pp. 824–877, with numerous graphs, tables, and species photographs. Covers the Galápagos Islands, Hawaii, and the Australian subcontinent (plus St. Helena Island, etc.).
Leakey, Richard. The Origin of Humankind—on adaptive radiation in biology and human evolution, pp. 28–32, 1994, Orion Publishing.
Grant, P.R. 1999. The ecology and evolution of Darwin's Finches. Princeton University Press, Princeton, NJ.
Mayr, Ernst. 2001. What evolution is. Basic Books, New York, NY.
Gavrilets, S. and A. Vose. 2009. Dynamic patterns of adaptive radiation: evolution of mating preferences. In Butlin, R.K., J. Bridle, and D. Schluter (eds) Speciation and Patterns of Diversity, Cambridge University Press, page. 102–126.
Pinto, Gabriel, Luke Mahler, Luke J. Harmon, and Jonathan B. Losos. "Testing the Island Effect in Adaptive Radiation: Rates and Patterns of Morphological Diversification in Caribbean and Mainland Anolis Lizards." NCBI (2008): n. pag. Web. 28 Oct. 2014.
Schluter, Dolph. The ecology of adaptive radiation. Oxford University Press, 2000.
Speciation
Evolutionary biology terminology | Adaptive radiation | [
"Biology"
] | 4,679 | [
"Evolutionary processes",
"Speciation",
"Evolutionary biology terminology"
] |
1,910 | https://en.wikipedia.org/wiki/Agarose%20gel%20electrophoresis | Agarose gel electrophoresis is a method of gel electrophoresis used in biochemistry, molecular biology, genetics, and clinical chemistry to separate a mixed population of macromolecules such as DNA or proteins in a matrix of agarose, one of the two main components of agar. The proteins may be separated by charge and/or size (isoelectric focusing agarose electrophoresis is essentially size independent), and the DNA and RNA fragments by length. Biomolecules are separated by applying an electric field to move the charged molecules through an agarose matrix, and the biomolecules are separated by size in the agarose gel matrix.
Agarose gel is easy to cast, has relatively few charged groups, and is particularly suitable for separating DNA of the size range most often encountered in laboratories, which accounts for the popularity of its use. The separated DNA may be viewed with stain, most commonly under UV light, and the DNA fragments can be extracted from the gel with relative ease. Most agarose gels used are between 0.7–2% agarose dissolved in a suitable electrophoresis buffer.
Properties of agarose gel
Agarose gel is a three-dimensional matrix formed of helical agarose molecules in supercoiled bundles that are aggregated into three-dimensional structures with channels and pores through which biomolecules can pass. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. The melting temperature is different from the gelling temperature; both depend on the source of the agarose. Low-melting and low-gelling agaroses made through chemical modifications are also available.
Agarose gel has large pore size and good gel strength, making it suitable as an anticonvection medium for the electrophoresis of DNA and large protein molecules. The pore size of a 1% gel has been estimated from 100 nm to 200–500 nm, and its gel strength allows gels as dilute as 0.15% to form a slab for gel electrophoresis. Low-concentration gels (0.1–0.2%) however are fragile and therefore hard to handle. Agarose gel has lower resolving power than polyacrylamide gel for DNA but has a greater range of separation, and is therefore used for DNA fragments of usually 50–20,000 bp in size. The limit of resolution for standard agarose gel electrophoresis is around 750 kb, but resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large proteins, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5–10 nm. A 0.9% agarose gel has pores large enough for the entry of bacteriophage T4.
The agarose polymer contains charged groups, in particular pyruvate and sulfate. These negatively charged groups create a flow of water in the opposite direction to the movement of DNA in a process called electroendosmosis (EEO), and can therefore retard the movement of DNA and cause blurring of bands. Higher concentration gels would have higher electroendosmotic flow. Low EEO agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids, but high EEO agarose may be used for other purposes. The lower sulfate content of low EEO agarose, particularly low-melting point (LMP) agarose, is also beneficial in cases where the DNA extracted from gel is to be used for further manipulation as the presence of contaminating sulfates may affect some subsequent procedures, such as ligation and PCR. Zero EEO agaroses however are undesirable for some applications as they may be made by adding positively charged groups and such groups can affect subsequent enzyme reactions. Electroendosmosis is a reason agarose is used in preference to agar as the agaropectin component in agar contains a significant amount of negatively charged sulfate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications such as the electrophoresis of serum proteins, a high EEO may be desirable, and agaropectin may be added in the gel used.
Migration of nucleic acids in agarose gel
Factors affecting migration of nucleic acid in gel
A number of factors can affect the migration of nucleic acids: the dimension of the gel pores (gel concentration), size of DNA being electrophoresed, the voltage used, the ionic strength of the buffer, and the concentration of intercalating dye such as ethidium bromide if used during electrophoresis.
Smaller molecules travel faster than larger molecules in gel, and double-stranded DNA moves at a rate that is inversely proportional to the logarithm of the number of base pairs. This relationship however breaks down with very large DNA fragments, and separation of very large DNA fragments requires the use of pulsed field gel electrophoresis (PFGE), which applies alternating current from different directions and the large DNA fragments are separated as they reorient themselves with the changing field.
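That approximately log-linear relationship is what makes size estimation from a marker ladder possible: plot migration distance against log(size) for the ladder, fit a line, and invert it for unknown bands. A minimal sketch with made-up ladder values (the sizes, distances, and function name are illustrative):

```python
import numpy as np

# Illustrative marker ladder: fragment sizes (bp) and migration distances (cm).
ladder_bp = np.array([10000, 5000, 2000, 1000, 500])
ladder_cm = np.array([1.0, 1.8, 2.9, 3.7, 4.6])

# Migration distance is roughly linear in log10(size), so fit a straight line.
slope, intercept = np.polyfit(np.log10(ladder_bp), ladder_cm, 1)

def estimate_size(distance_cm):
    """Invert the fit to estimate a fragment's size from its migration distance."""
    return 10 ** ((distance_cm - intercept) / slope)

print(round(estimate_size(3.2)))  # an unknown band between the 2 kb and 1 kb markers
```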
For standard agarose gel electrophoresis, larger molecules are resolved better using a low concentration gel while smaller molecules separate better at high concentration gel. Higher concentration gels, however, require longer run times (sometimes days).
The movement of the DNA may be affected by the conformation of the DNA molecule, for example, supercoiled DNA usually moves faster than relaxed DNA because it is tightly coiled and hence more compact. In a normal plasmid DNA preparation, multiple forms of DNA may be present. Gel electrophoresis of the plasmids would normally show the negatively supercoiled form as the main band, while nicked DNA (open circular form) and the relaxed closed circular form appears as minor bands. The rate at which the various forms move however can change using different electrophoresis conditions, and the mobility of larger circular DNA may be more strongly affected than linear DNA by the pore size of the gel.
Ethidium bromide which intercalates into circular DNA can change the charge, length, as well as the superhelicity of the DNA molecule, therefore its presence in gel during electrophoresis can affect its movement. For example, the positive charge of ethidium bromide can reduce the DNA movement by 15%. Agarose gel electrophoresis can be used to resolve circular DNA with different supercoiling topology.
DNA damage due to increased cross-linking will also reduce electrophoretic DNA migration in a dose-dependent way.
The rate of migration of the DNA is proportional to the voltage applied, i.e. the higher the voltage, the faster the DNA moves. The resolution of large DNA fragments however is lower at high voltage. The mobility of DNA may also change in an unsteady field – in a field that is periodically reversed, the mobility of DNA of a particular size may drop significantly at a particular cycling frequency. This phenomenon can result in band inversion in field inversion gel electrophoresis (FIGE), whereby larger DNA fragments move faster than smaller ones.
Migration anomalies
"Smiley" gels - this edge effect is caused when the voltage applied is too high for the gel concentration used.
Overloading of DNA - overloading of DNA slows down the migration of DNA fragments.
Contamination - presence of impurities, such as salts or proteins can affect the movement of the DNA.
Mechanism of migration and separation
The negative charge of its phosphate backbone moves the DNA towards the positively charged anode during electrophoresis. However, the migration of DNA molecules in solution, in the absence of a gel matrix, is independent of molecular weight during electrophoresis. The gel matrix is therefore responsible for the separation of DNA by size during electrophoresis, and a number of models exist to explain the mechanism of separation of biomolecules in gel matrix. A widely accepted one is the Ogston model which treats the polymer matrix as a sieve. A globular protein or a random coil DNA moves through the interconnected pores, and the movement of larger molecules is more likely to be impeded and slowed down by collisions with the gel matrix, and the molecules of different sizes can therefore be separated in this sieving process.
The Ogston model however breaks down for large molecules whereby the pores are significantly smaller than the size of the molecule. For DNA molecules of size greater than 1 kb, a reptation model (or its variants) is most commonly used. This model assumes that the DNA can crawl in a "snake-like" fashion (hence "reptation") through the pores as an elongated molecule. A biased reptation model applies at higher electric field strength, whereby the leading end of the molecule becomes strongly biased in the forward direction and pulls the rest of the molecule along. Real-time fluorescence microscopy of stained molecules, however, has shown more subtle dynamics during electrophoresis, with the DNA showing considerable elasticity as it alternately stretches in the direction of the applied field and then contracts into a ball, or becomes hooked into a U-shape when it gets caught on the polymer fibres.
General procedure
The details of an agarose gel electrophoresis experiment may vary depending on methods, but most follow a general procedure.
Casting of gel
The gel is prepared by dissolving the agarose powder in an appropriate buffer, such as TAE or TBE, to be used in electrophoresis. The agarose is dispersed in the buffer before being heated to near-boiling point, but should not be allowed to boil. The melted agarose is allowed to cool sufficiently before pouring the solution into a cast, as the cast may warp or crack if the agarose solution is too hot. A comb is placed in the cast to create wells for loading the sample, and the gel should be completely set before use.
The concentration of gel affects the resolution of DNA separation. The agarose gel is composed of microscopic pores through which the molecules travel, and there is an inverse relationship between the pore size of the agarose gel and the concentration – pore size decreases as the density of agarose fibers increases. High gel concentration improves separation of smaller DNA molecules, while lowering gel concentration permits large DNA molecules to be separated. The process allows fragments ranging from 50 base pairs to several mega bases to be separated depending on the gel concentration used. The concentration is measured in weight of agarose over volume of buffer used (g/ml). For a standard agarose gel electrophoresis, a 0.8% gel gives good separation or resolution of large 5–10 kb DNA fragments, while a 2% gel gives good resolution for small 0.2–1 kb fragments. A 1% gel is often used for a standard electrophoresis. High-percentage gels are often brittle and may not set evenly, while low-percentage gels (0.1–0.2%) are fragile and not easy to handle. Low-melting-point (LMP) agarose gels are also more fragile than normal agarose gel. Low-melting-point agarose may be used on its own or simultaneously with standard agarose for the separation and isolation of DNA. PFGE and FIGE are often done with high-percentage agarose gels.
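The percentage convention above is weight per volume, so the arithmetic for casting a gel is simple; a tiny illustrative helper (function name and volumes are examples only):

```python
def agarose_mass(percent_w_v, buffer_ml):
    """Grams of agarose needed for a gel of the given %(w/v) in a given buffer volume."""
    return percent_w_v * buffer_ml / 100.0

# A standard 1% gel cast in 50 ml of buffer needs 0.5 g of agarose;
# a 2% gel for small fragments in the same volume needs 1.0 g.
print(agarose_mass(1.0, 50))  # 0.5
print(agarose_mass(2.0, 50))  # 1.0
```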
Loading of samples
Once the gel has set, the comb is removed, leaving wells where DNA samples can be loaded. Loading buffer is mixed with the DNA sample before the mixture is loaded into the wells. The loading buffer contains a dense compound, which may be glycerol, sucrose, or Ficoll, that raises the density of the sample so that the DNA sample may sink to the bottom of the well. If the DNA sample contains residual ethanol after its preparation, it may float out of the well. The loading buffer also includes colored dyes such as xylene cyanol and bromophenol blue used to monitor the progress of the electrophoresis. The DNA samples are loaded using a pipette.
Electrophoresis
Agarose gel electrophoresis is most commonly done horizontally in a subaqueous mode, whereby the slab gel is completely submerged in buffer during electrophoresis. It is also possible, but less common, to perform the electrophoresis vertically, as well as horizontally with the gel raised on agarose legs using an appropriate apparatus. The buffer used in the gel is the same as the running buffer in the electrophoresis tank, which is why electrophoresis in the subaqueous mode is possible with agarose gel.
For optimal resolution of DNA greater than 2kb in size in standard gel electrophoresis, 5 to 8 V/cm is recommended (the distance in cm refers to the distance between electrodes, therefore this recommended voltage would be 5 to 8 multiplied by the distance between the electrodes in cm). Voltage may also be limited by the fact that it heats the gel and may cause the gel to melt if it is run at high voltage for a prolonged period, especially if the gel used is LMP agarose gel. Too high a voltage may also reduce resolution, as well as causing band streaking for large DNA molecules. Too low a voltage may lead to broadening of band for small DNA fragments due to dispersion and diffusion.
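A small illustrative helper for the V/cm guideline above (the 20 cm electrode spacing is a made-up example):

```python
def recommended_voltage(electrode_distance_cm, v_per_cm=(5, 8)):
    """Translate the V/cm guideline into a voltage range for a given gel box."""
    return tuple(v * electrode_distance_cm for v in v_per_cm)

# For a gel box with electrodes 20 cm apart, run at roughly 100-160 V.
print(recommended_voltage(20))  # (100, 160)
```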
Since DNA is not visible in natural light, the progress of the electrophoresis is monitored using colored dyes. Xylene cyanol (light blue color) comigrates with large DNA fragments, while bromophenol blue (dark blue) comigrates with the smaller fragments. Less commonly used dyes include Cresol Red and Orange G, which migrate ahead of bromophenol blue. A DNA marker is also run together for the estimation of the molecular weight of the DNA fragments. Note however that the size of a circular DNA like plasmids cannot be accurately gauged using standard markers unless it has been linearized by restriction digest; alternatively, a supercoiled DNA marker may be used.
Staining and visualization
DNA as well as RNA are normally visualized by staining with ethidium bromide, which intercalates between the base pairs of the DNA and fluoresces under UV light. The intercalation depends on the concentration of DNA, and thus a band with high intensity will indicate a higher amount of DNA compared to a band of less intensity. The ethidium bromide may be added to the agarose solution before it gels, or the DNA gel may be stained later after electrophoresis. Destaining of the gel is not necessary but may produce better images. Other methods of staining are available; examples are MIDORI Green, SYBR Green, GelRed, methylene blue, brilliant cresyl blue, Nile blue sulfate, and crystal violet. SYBR Green, GelRed and other similar commercial products are sold as safer alternatives to ethidium bromide, as the latter has been shown to be mutagenic in the Ames test, although the carcinogenicity of ethidium bromide has not actually been established. SYBR Green requires the use of a blue-light transilluminator. DNA stained with crystal violet can be viewed under natural light without the use of a UV transilluminator, which is an advantage, but it may not produce strong bands.
When stained with ethidium bromide, the gel is viewed with an ultraviolet (UV) transilluminator. The UV light excites the electrons within the aromatic ring of ethidium bromide, and once they return to the ground state, light is released, making the DNA and ethidium bromide complex fluoresce. Standard transilluminators use wavelengths of 302/312-nm (UV-B), however exposure of DNA to UV radiation for as little as 45 seconds can produce damage to DNA and affect subsequent procedures, for example reducing the efficiency of transformation, in vitro transcription, and PCR. Exposure of DNA to UV radiation therefore should be limited. Using a higher wavelength of 365 nm (UV-A range) causes less damage to the DNA but also produces much weaker fluorescence with ethidium bromide. Where multiple wavelengths can be selected in the transilluminator, shorter wavelength can be used to capture images, while longer wavelength should be used if it is necessary to work on the gel for any extended period of time.
The transilluminator apparatus may also contain image capture devices, such as a digital or polaroid camera, that allow an image of the gel to be taken or printed.
For gel electrophoresis of protein, the bands may be visualised with Coomassie or silver stains.
Downstream procedures
The separated DNA bands are often used for further procedures, and a DNA band may be cut out of the gel as a slice, dissolved and purified. Contaminants however may affect some downstream procedures such as PCR, and low melting point agarose may be preferred in some cases as it contains fewer of the sulfates that can affect some enzymatic reactions. The gels may also be used for blotting techniques.
Buffers
In general, the ideal buffer should have good conductivity, produce less heat and have a long life. There are a number of buffers used for agarose electrophoresis; common ones for nucleic acids include tris/acetate/EDTA (TAE) and tris/borate/EDTA (TBE). The buffers used contain EDTA to inactivate many nucleases which require divalent cation for their function. The borate in TBE buffer can be problematic as borate can polymerize, and/or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity, but it provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product.
Many other buffers have been proposed, e.g. lithium borate (LB), isoelectric histidine, pKa-matched Good's buffers, etc.; in most cases the purported rationale is lower current (less heat) and/or matched ion mobilities, which leads to longer buffer life. Tris-phosphate buffer has high buffering capacity but cannot be used if the DNA extracted is to be used in a phosphate-sensitive reaction. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage could be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. As low as one base pair size difference could be resolved in a 3% agarose gel with an extremely low conductivity medium (1 mM lithium borate).
Other buffering systems may be used in specific applications, for example barbituric acid-sodium barbiturate or tris-barbiturate buffers for the agarose gel electrophoresis of proteins, such as in the detection of abnormal protein distributions.
Applications
Estimation of the size of DNA molecules following digestion with restriction enzymes, e.g., in restriction mapping of cloned DNA.
Estimation of the DNA concentration by comparing the intensity of the nucleic acid band with the corresponding band of the size marker.
Analysis of products of a polymerase chain reaction (PCR), e.g., in molecular genetic diagnosis or genetic fingerprinting
Separation of DNA fragments for extraction and purification.
Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer.
Separation of proteins, for example, screening of protein abnormalities in clinical chemistry.
Agarose gels are easily cast and handled compared to other matrices and nucleic acids are not chemically altered during electrophoresis. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator.
Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons.
See also
Gel electrophoresis
Immunodiffusion, Immunoelectrophoresis
SDD-AGE
Northern blot
SDS-polyacrylamide gel electrophoresis
Southern blot
References
External links
How to run a DNA or RNA gel
Animation of gel analysis of DNA restriction fragments
Video and article of agarose gel electrophoresis
Step by step photos of running a gel and extracting DNA
Drinking straw electrophoresis!
A typical method from wikiversity
Building a gel electrophoresis chamber
Biological techniques and tools
Molecular biology
Electrophoresis
Polymerase chain reaction
Articles containing video clips | Agarose gel electrophoresis | [
"Chemistry",
"Biology"
] | 4,409 | [
"Biochemistry methods",
"Genetics techniques",
"Polymerase chain reaction",
"Instrumental analysis",
"Biochemical separation processes",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry",
"Electrophoresis"
] |
1,912 | https://en.wikipedia.org/wiki/Ampicillin | Ampicillin is an antibiotic belonging to the aminopenicillin class of the penicillin family. The drug is used to prevent and treat several bacterial infections, such as respiratory tract infections, urinary tract infections, meningitis, salmonellosis, and endocarditis. It may also be used to prevent group B streptococcal infection in newborns. It is used by mouth, by injection into a muscle, or intravenously.
Common side effects include rash, nausea, and diarrhea. It should not be used in people who are allergic to penicillin. Serious side effects may include Clostridioides difficile colitis or anaphylaxis. While usable in those with kidney problems, the dose may need to be decreased. Its use during pregnancy and breastfeeding appears to be generally safe.
Ampicillin was discovered in 1958 and came into commercial use in 1961. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ampicillin as critically important for human medicine. It is available as a generic medication.
Medical uses
Diseases
Bacterial meningitis; an aminoglycoside can be added to increase efficacy against gram-negative meningitis bacteria
Endocarditis by enterococcal strains (off-label use); often given with an aminoglycoside
Gastrointestinal infections caused by contaminated water or food (for example, by Salmonella)
Genito-urinary tract infections
Healthcare-associated infections that are related to infections from using urinary catheters and that are unresponsive to other medications
Otitis media (middle ear infection)
Prophylaxis (i.e. to prevent infection) in those who previously had rheumatic heart disease or are undergoing dental procedures, vaginal hysterectomies, or C-sections. It is also used in pregnant women who are carriers of group B streptococci to prevent early-onset neonatal infections.
Respiratory infections, including bronchitis, pharyngitis
Sinusitis
Sepsis
Whooping cough, to prevent and treat secondary infections
Ampicillin was formerly also used to treat gonorrhea, but there are now too many strains resistant to penicillins.
Bacteria
Ampicillin is used to treat infections by many gram-positive and gram-negative bacteria. It was the first "broad spectrum" penicillin with activity against gram-positive bacteria, including Streptococcus pneumoniae, Streptococcus pyogenes, some isolates of Staphylococcus aureus (but not penicillin-resistant or methicillin-resistant strains), Trueperella, and some Enterococcus. It is one of the few antibiotics that works against multidrug resistant Enterococcus faecalis and E. faecium. Activity against gram-negative bacteria includes Neisseria meningitidis, some Haemophilus influenzae, and some of the Enterobacteriaceae (though most Enterobacteriaceae and Pseudomonas are resistant). Its spectrum of activity is enhanced by co-administration of sulbactam, a drug that inhibits beta lactamase, an enzyme produced by bacteria to inactivate ampicillin and related antibiotics. It is sometimes used in combination with other antibiotics that have different mechanisms of action, like vancomycin, linezolid, daptomycin, and tigecycline.
Available forms
Ampicillin can be administered by mouth, an intramuscular injection (shot) or by intravenous infusion. The oral form, available as capsules or oral suspensions, is not given as an initial treatment for severe infections, but rather as a follow-up to an IM or IV injection. For IV and IM injections, ampicillin is kept as a powder that must be reconstituted.
IV injections must be given slowly, as rapid IV injections can lead to convulsive seizures.
Specific populations
Ampicillin is one of the most used drugs in pregnancy, and has been found to be generally harmless both by the Food and Drug Administration in the U.S. (which classified it as category B) and the Therapeutic Goods Administration in Australia (which classified it as category A). It is the drug of choice for treating Listeria monocytogenes in pregnant women, either alone or combined with an aminoglycoside. Pregnancy increases the clearance of ampicillin by up to 50%, and a higher dose is thus needed to reach therapeutic levels.
Ampicillin crosses the placenta and remains in the amniotic fluid at 50–100% of the concentration in maternal plasma; this can lead to high concentrations of ampicillin in the newborn.
While lactating mothers secrete some ampicillin into their breast milk, the amount is minimal.
In newborns, ampicillin has a longer half-life and lower plasma protein binding. The clearance by the kidneys is lower, as kidney function has not fully developed.
Contraindications
Ampicillin is contraindicated in those with a hypersensitivity to penicillins, as they can cause fatal anaphylactic reactions. Hypersensitivity reactions can include frequent skin rashes and hives, exfoliative dermatitis, erythema multiforme, and a temporary decrease in both red and white blood cells.
Ampicillin is not recommended in people with concurrent mononucleosis, as over 40% of patients develop a skin rash.
Side effects
Ampicillin is comparatively less toxic than other antibiotics, and side effects are more likely in those who are sensitive to penicillins and those with a history of asthma or allergies. In very rare cases, it causes severe side effects such as angioedema, anaphylaxis, and C. difficile infection (which can range from mild diarrhea to serious pseudomembranous colitis). Some develop black "furry" tongue. Serious adverse effects also include seizures and serum sickness. The most common side effects, experienced by about 10% of users, are diarrhea and rash. Less common side effects can be nausea, vomiting, itching, and blood dyscrasias. The gastrointestinal effects, such as hairy tongue, nausea, vomiting, diarrhea, and colitis, are more common with the oral form of penicillin. Other conditions may develop up to several weeks after treatment.
Overdose
Ampicillin overdose can cause behavioral changes, confusion, blackouts, and convulsions, as well as neuromuscular hypersensitivity, electrolyte imbalance, and kidney failure.
Interactions
Ampicillin reacts with probenecid and methotrexate to decrease renal excretion. Large doses of ampicillin can increase the risk of bleeding with concurrent use of warfarin and other oral anticoagulants, possibly by inhibiting platelet aggregation. Ampicillin has been said to make oral contraceptives less effective, but this has been disputed. It can be made less effective by other antibiotics, such as chloramphenicol, erythromycin, cephalosporins, and tetracyclines. For example, tetracyclines inhibit protein synthesis in bacteria, reducing the target against which ampicillin acts. If given at the same time as aminoglycosides, ampicillin can bind to them and inactivate them. When administered separately, aminoglycosides and ampicillin can potentiate each other instead.
Ampicillin causes skin rashes more often when given with allopurinol.
Both the live cholera vaccine and live typhoid vaccine can be made ineffective if given with ampicillin. Because ampicillin is normally used to treat cholera and typhoid fever, it can act against the bacteria in these vaccines and lower the immunological response that the body mounts.
Pharmacology
Mechanism of action
Ampicillin is in the penicillin group of beta-lactam antibiotics and is part of the aminopenicillin family. It is roughly equivalent to amoxicillin in terms of activity. Ampicillin is able to penetrate gram-positive and some gram-negative bacteria. It differs from penicillin G, or benzylpenicillin, only by the presence of an amino group. This amino group, present on both ampicillin and amoxicillin, helps these antibiotics pass through the pores of the outer membrane of gram-negative bacteria, such as Escherichia coli, Proteus mirabilis, Salmonella enterica, and Shigella.
Ampicillin acts as an irreversible inhibitor of the enzyme transpeptidase, which is needed by bacteria to make the cell wall. It inhibits the third and final stage of bacterial cell wall synthesis in binary fission, which ultimately leads to cell lysis; therefore, ampicillin is usually bacteriolytic.
Pharmacokinetics
Ampicillin is well-absorbed from the GI tract (though food reduces its absorption), and reaches peak concentrations in one to two hours. The bioavailability is around 62% for parenteral routes. Unlike other penicillins, which usually bind 60–90% to plasma proteins, ampicillin binds to only 15–20%.
Ampicillin is distributed through most tissues, though it is concentrated in the liver and kidneys. It can also be found in the cerebrospinal fluid when the meninges are inflamed (such as in meningitis). Some ampicillin is metabolized by hydrolyzing the beta-lactam ring to penicilloic acid, though most of it is excreted unchanged. In the kidneys, it is eliminated mostly by tubular secretion; some also undergoes glomerular filtration, and the rest is excreted in the feces and bile.
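The timing of the peak described above can be illustrated with a minimal one-compartment model with first-order absorption (the Bateman equation), where the peak time is ln(ka/ke)/(ka - ke). The parameter values below are assumptions chosen only to show the calculation, not measured ampicillin constants.

```python
import math

# One-compartment model with first-order absorption (Bateman function):
# C(t) = (F * D * ka) / (V * (ka - ke)) * (exp(-ke*t) - exp(-ka*t))
# All parameter values are illustrative assumptions.

F  = 0.62                 # assumed bioavailability fraction
D  = 500.0                # dose in mg (hypothetical)
V  = 20.0                 # volume of distribution in L (hypothetical)
ke = math.log(2) / 1.0    # elimination rate constant for a ~1 h half-life
ka = 1.2                  # assumed absorption rate constant (1/h)

def concentration(t_h):
    """Plasma concentration (mg/L) at time t_h hours after an oral dose."""
    return (F * D * ka) / (V * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

t_max = math.log(ka / ke) / (ka - ke)   # time of the peak concentration
print(round(t_max, 2), "h")             # ~1.08 h, within the 1-2 h window above
print(round(concentration(t_max), 2), "mg/L at the peak")
```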
Hetacillin and pivampicillin are ampicillin esters that have been developed to increase bioavailability.
History
Ampicillin has been used extensively to treat bacterial infections since 1961. Until the introduction of ampicillin by the British company Beecham, penicillin therapies had only been effective against gram-positive organisms such as staphylococci and streptococci. Ampicillin (originally branded as "Penbritin") also demonstrated activity against gram-negative organisms such as H. influenzae, coliforms, and Proteus spp.
Society and culture
Economics
Ampicillin is relatively inexpensive. In the United States, it is available as a generic medication.
Veterinary use
In veterinary medicine, ampicillin is used in cats, dogs, and farm animals to treat:
Anal gland infections
Cutaneous infections, such as abscesses, cellulitis, and pustular dermatitis
E. coli and Salmonella infections in cattle, sheep, and goats (oral form). Ampicillin use for this purpose has declined as bacterial resistance has increased.
Mastitis in sows
Mixed aerobic–anaerobic infections, such as from cat bites
Multidrug-resistant Enterococcus faecalis and E. faecium
Prophylactic use in poultry against Salmonella and sepsis from E. coli or Staphylococcus aureus
Respiratory tract infections, including tonsillitis, bovine respiratory disease, shipping fever, bronchopneumonia, and calf and bovine pneumonia
Urinary tract infections in dogs
Horses are generally not treated with oral ampicillin, as they have low bioavailability of beta-lactams.
The half-life in animals is around the same as that in humans (just over an hour). Oral absorption is less than 50% in cats and dogs, and less than 4% in horses.
Antimicrobial resistance

Antimicrobial resistance (AMR or AR) occurs when microbes evolve mechanisms that protect them from antimicrobials, which are drugs used to treat infections. This resistance affects all classes of microbes, including bacteria (antibiotic resistance), viruses (antiviral resistance), protozoa (antiprotozoal resistance), and fungi (antifungal resistance). Together, these adaptations fall under the AMR umbrella, posing significant challenges to healthcare worldwide. Misuse and improper management of antimicrobials are primary drivers of this resistance, though it can also occur naturally through genetic mutations and the spread of resistant genes.
Microbes resistant to multiple drugs are termed multidrug-resistant (MDR) and are sometimes called superbugs. Antibiotic resistance, a significant AMR subset, enables bacteria to survive antibiotic treatment, complicating infection management and treatment options. Resistance arises through spontaneous mutation, horizontal gene transfer, and increased selective pressure from antibiotic overuse, both in medicine and agriculture, which accelerates resistance development.
The burden of AMR is immense, with nearly 5 million annual deaths associated with resistant infections. Infections from AMR microbes are more challenging to treat and often require costly alternative therapies that may have more severe side effects. Preventive measures, such as using narrow-spectrum antibiotics and improving hygiene practices, aim to reduce the spread of resistance.
The WHO claims that AMR is one of the top global public health and development threats, estimating that bacterial AMR was directly responsible for 1.27 million global deaths in 2019 and contributed to 4.95 million deaths. Moreover, the WHO and other international bodies warn that AMR could lead to up to 10 million deaths annually by 2050 unless actions are taken. Global initiatives, such as calls for international AMR treaties, emphasize coordinated efforts to limit misuse, fund research, and provide access to necessary antimicrobials in developing nations. However, the COVID-19 pandemic redirected resources and scientific attention away from AMR, intensifying the challenge.
Definition
The WHO defines antimicrobial resistance as a microorganism's resistance to an antimicrobial drug that was once able to treat an infection by that microorganism. A person cannot become resistant to antibiotics. Resistance is a property of the microbe, not a person or other organism infected by a microbe. All types of microbes can develop drug resistance. Thus, there is antibiotic, antifungal, antiviral, and antiparasitic resistance.
Antibiotic resistance is a subset of antimicrobial resistance. This more specific resistance is linked to bacteria and is thus broken down into two further subsets, microbiological and clinical. Microbiological resistance is the most common and arises from genes, mutated or inherited, that allow the bacteria to resist the killing mechanism of certain antibiotics. Clinical resistance is shown through the failure of many therapeutic techniques, where bacteria that are normally susceptible to a treatment become resistant after surviving it. In both cases of acquired resistance, the bacteria can pass the genetic catalyst for resistance through horizontal gene transfer: conjugation, transduction, or transformation. This allows the resistance to spread across the same species of pathogen or even similar bacterial pathogens.
Overview
A WHO report released in April 2014 stated, "this serious threat is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country. Antibiotic resistance—when bacteria change so antibiotics no longer work in people who need them to treat infections—is now a major threat to public health."
Each year, nearly 5 million deaths are associated with AMR globally. Global deaths attributable to AMR numbered 1.27 million in 2019. That same year, AMR may have contributed to 5 million deaths, and one in five people who died due to AMR were children under five years old.
In 2018, WHO considered antibiotic resistance to be one of the biggest threats to global health, food security and development. Deaths attributable to AMR vary by area:
The European Centre for Disease Prevention and Control calculated that in 2015 there were 671,689 infections in the EU and European Economic Area caused by antibiotic-resistant bacteria, resulting in 33,110 deaths. Most were acquired in healthcare settings. In 2019 there were 133,000 deaths caused by AMR.
Causes
AMR is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. This leads to microbes either evolving a defense against drugs used to treat them, or certain strains of microbes that have a natural resistance to antimicrobials becoming much more prevalent than the ones that are easily defeated with medication. While antimicrobial resistance does occur naturally over time, the use of antimicrobial agents in a variety of settings, both within the healthcare industry and outside of it, has led to antimicrobial resistance becoming increasingly prevalent.
Although many microbes develop resistance to antibiotics over time through natural mutation, overprescribing and inappropriate prescription of antibiotics have accelerated the problem. It is possible that as many as 1 in 3 prescriptions written for antibiotics are unnecessary. Every year, approximately 154 million prescriptions for antibiotics are written. Of these, up to 46 million are unnecessary or inappropriate for the condition that the patient has. Microbes may naturally develop resistance through genetic mutations that occur during cell division, and although random mutations are rare, many microbes reproduce frequently and rapidly, increasing the chances of members of the population acquiring a mutation that increases resistance. Many individuals stop taking antibiotics when they begin to feel better. When this occurs, it is possible that the microbes that are less susceptible to treatment still remain in the body. If these microbes are able to continue to reproduce, this can lead to an infection by bacteria that are less susceptible or even resistant to an antibiotic.
Natural occurrence
AMR is a naturally occurring process. Antimicrobial resistance can evolve naturally due to continued exposure to antimicrobials. Natural selection means that organisms that are able to adapt to their environment, survive, and continue to produce offspring. As a result, the types of microorganisms that are able to survive over time with continued attack by certain antimicrobial agents will naturally become more prevalent in the environment, and those without this resistance will become obsolete.
Some contemporary antimicrobial resistances have also evolved naturally before the use of antimicrobials in human clinical practice. For instance, methicillin resistance evolved naturally in a bacterial pathogen of hedgehogs, possibly as a co-evolutionary adaptation of the pathogen to hedgehogs that are infected by a dermatophyte that naturally produces antibiotics. Also, many soil fungi and bacteria are natural competitors, and the original antibiotic penicillin discovered by Alexander Fleming rapidly lost clinical effectiveness in treating humans; furthermore, none of the other natural penicillins (F, K, N, X, O, U1 or U6) are currently in clinical use.
Antimicrobial resistance can be acquired from other microbes through swapping genes in a process termed horizontal gene transfer. This means that once a gene for resistance to an antibiotic appears in a microbial community, it can then spread to other microbes in the community, potentially moving from a non-disease causing microbe to a disease-causing microbe. This process is heavily driven by the natural selection processes that happen during antibiotic use or misuse.
Over time, most of the strains of bacteria and infections present will be the type resistant to the antimicrobial agent being used to treat them, making this agent now ineffective to defeat most microbes. With the increased use of antimicrobial agents, there is a speeding up of this natural process.
Self-medication
In the vast majority of countries, antibiotics can only be prescribed by a doctor and supplied by a pharmacy. Self-medication by consumers is defined as "the taking of medicines on one's own initiative or on another person's suggestion, who is not a certified medical professional", and it has been identified as one of the primary reasons for the evolution of antimicrobial resistance. Self-medication with antibiotics is an unsuitable way of using them but a common practice in resource-constrained countries. The practice exposes individuals to the risk of bacteria that have developed antimicrobial resistance. Many people resort to this out of necessity, when access to a physician is unavailable, or when patients have a limited amount of time or money to see a doctor. This increased access makes it extremely easy to obtain antimicrobials. An example is India, where in the state of Punjab 73% of the population resorted to treating their minor health issues and chronic illnesses through self-medication.
Self-medication is higher outside the hospital environment, and this is linked to higher use of antibiotics, with the majority of antibiotics being used in the community rather than hospitals. The prevalence of self-medication in low- and middle-income countries (LMICs) ranges from 8.1% to 93%. Accessibility, affordability, and conditions of health facilities, as well as the health-seeking behavior, are factors that influence self-medication in low- and middle-income countries. Two significant issues with self-medication are the lack of knowledge of the public on, firstly, the dangerous effects of certain antimicrobials (for example ciprofloxacin which can cause tendonitis, tendon rupture and aortic dissection) and, secondly, broad microbial resistance and when to seek medical care if the infection is not clearing. In order to determine the public's knowledge and preconceived notions on antibiotic resistance, a screening of 3,537 articles published in Europe, Asia, and North America was done. Of the 55,225 total people surveyed in the articles, 70% had heard of antibiotic resistance previously, but 88% of those people thought it referred to some type of physical change in the human body.
Clinical misuse
Clinical misuse by healthcare professionals is another contributor to increased antimicrobial resistance. Studies done in the US show that the indication for treatment of antibiotics, choice of the agent used, and the duration of therapy was incorrect in up to 50% of the cases studied. In 2010 and 2011 about a third of antibiotic prescriptions in outpatient settings in the United States were not necessary. Another study in an intensive care unit in a major hospital in France has shown that 30% to 60% of prescribed antibiotics were unnecessary. These inappropriate uses of antimicrobial agents promote the evolution of antimicrobial resistance by supporting the bacteria in developing genetic alterations that lead to resistance.
According to research conducted in the US that aimed to evaluate physicians' attitudes and knowledge on antimicrobial resistance in ambulatory settings, only 63% of those surveyed reported antibiotic resistance as a problem in their local practices, while 23% reported the aggressive prescription of antibiotics as necessary to avoid failing to provide adequate care. This demonstrates how a majority of doctors underestimate the impact that their own prescribing habits have on antimicrobial resistance as a whole. It also confirms that some physicians may be overly cautious and prescribe antibiotics for medical or legal reasons, even when clinical indications for use of these medications are not always confirmed. This can lead to unnecessary antimicrobial use, a pattern which may have worsened during the COVID-19 pandemic.
Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse.
The veterinary medical system is also important to the conversation about antibiotic use. Veterinary oversight is required by law for all medically important antibiotics. Veterinarians use a pharmacokinetic/pharmacodynamic (PK/PD) approach to ensure that the correct dose of the drug is delivered to the correct place at the correct time.
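As a sketch of the kind of calculation a PK/PD approach involves, the snippet below estimates the fraction of a dosing interval during which the drug concentration stays above a pathogen's minimum inhibitory concentration (%T>MIC), a common PK/PD index for time-dependent antibiotics, assuming simple first-order decline from a peak. Every number is hypothetical.

```python
import math

# %T>MIC under simple first-order elimination from a peak concentration:
# C(t) = Cmax * exp(-ke * t); the time above the MIC is where C(t) = MIC.
# All values are hypothetical and chosen only to illustrate the index.

def percent_time_above_mic(cmax, mic, half_life_h, tau_h):
    """Percent of the dosing interval tau_h with concentration above the MIC."""
    ke = math.log(2) / half_life_h
    if cmax <= mic:
        return 0.0
    t_above = math.log(cmax / mic) / ke      # hours until C falls to the MIC
    return 100.0 * min(t_above, tau_h) / tau_h

# Example: peak of 40 mg/L, MIC of 2 mg/L, 1 h half-life, dosing every 8 h.
print(round(percent_time_above_mic(40.0, 2.0, 1.0, 8.0), 1))  # ~54.1% of the interval
```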
Pandemics, disinfectants and healthcare systems
Increased antibiotic use during the early waves of the COVID-19 pandemic may exacerbate this global health challenge. Moreover, pandemic burdens on some healthcare systems may contribute to antibiotic-resistant infections. On the other hand, "increased hand hygiene, decreased international travel, and decreased elective hospital procedures may have reduced AMR pathogen selection and spread in the short term" during the COVID-19 pandemic. The use of disinfectants such as alcohol-based hand sanitizers, and antiseptic hand wash may also have the potential to increase antimicrobial resistance. Extensive use of disinfectants can lead to mutations that induce antimicrobial resistance.
A 2024 United Nations High-Level Meeting on AMR has pledged to reduce deaths associated with bacterial AMR by 10% over the next six years. In their first major declaration on the issue since 2016, global leaders also committed to raising $100 million to update and implement AMR action plans. However, the final draft of the declaration omitted an earlier target to reduce antibiotic use in animals by 30% by 2030, due to opposition from meat-producing countries and the farming industry. Critics argue this omission is a major weakness, as livestock accounts for around 73% of global sales of antimicrobial agents, including antibiotics, antivirals, and antiparasitics.
Environmental pollution
Considering the complex interactions between humans, animals and the environment, it is also important to consider the environmental aspects and contributors to antimicrobial resistance. Although there are still some knowledge gaps in understanding the mechanisms and transmission pathways, environmental pollution is considered a significant contributor to antimicrobial resistance. Important contributing factors include antibiotic residues, industrial effluents, agricultural runoff, heavy metals, biocides and pesticides, and sewage and wastewater, which create reservoirs of resistant genes and bacteria and facilitate their transfer to human pathogens. Unused or expired antibiotics, if not disposed of properly, can enter water systems and soil. Discharge from pharmaceutical manufacturing and other industrial companies can also introduce antibiotics and other chemicals into the environment. These factors create selective pressure for resistant bacteria. Antibiotics used in livestock and aquaculture can contaminate soil and water, which promotes resistance in environmental microbes. Heavy metals such as zinc, copper and mercury, as well as biocides and pesticides, can co-select for antibiotic resistance, accelerating its spread. Inadequate treatment of sewage and wastewater allows resistant bacteria and genes to spread through water systems.
Food production
Livestock
The antimicrobial resistance crisis also extends to the food industry, specifically with food producing animals. With an ever-increasing human population, there is constant pressure to intensify productivity in many agricultural sectors, including the production of meat as a source of protein. Antibiotics are fed to livestock to act as growth supplements, and a preventive measure to decrease the likelihood of infections.
Farmers typically use antibiotics in animal feed to improve growth rates and prevent infections. However, this practice has been criticized as illogical, since antibiotics are intended to treat infections rather than prevent them. 80% of antibiotic use in the U.S. is for agricultural purposes and about 70% of these are medically important. Overusing antibiotics gives the bacteria time to adapt, leaving higher doses or even stronger antibiotics needed to combat the infection. Though antibiotics for growth promotion were banned throughout the EU in 2006, 40 countries worldwide still use antibiotics to promote growth.
This can result in the transfer of resistant bacterial strains into the food that humans eat, causing potentially fatal transfer of disease. While the practice of using antibiotics as growth promoters does result in better yields and meat products, it is a major issue and needs to be decreased in order to prevent antimicrobial resistance. Though the evidence linking antimicrobial usage in livestock to antimicrobial resistance is limited, the World Health Organization Advisory Group on Integrated Surveillance of Antimicrobial Resistance strongly recommended the reduction of use of medically important antimicrobials in livestock. Additionally, the Advisory Group stated that such antimicrobials should be expressly prohibited for both growth promotion and disease prevention in food producing animals.
By mapping antimicrobial consumption in livestock globally, it was predicted that in 228 countries there would be a total 67% increase in consumption of antibiotics by livestock by 2030. In some countries such as Brazil, Russia, India, China, and South Africa it is predicted that a 99% increase will occur. Several countries have restricted the use of antibiotics in livestock, including Canada, China, Japan, and the US. These restrictions are sometimes associated with a reduction of the prevalence of antimicrobial resistance in humans.
In the United States, the Veterinary Feed Directive went into effect in 2017, dictating that all medically important antibiotics to be used in feed or water for food animal species require a veterinary feed directive (VFD) or a prescription.
Pesticides
Most pesticides protect crops against insects and weeds, but in some cases antimicrobial pesticides are used to protect against various microorganisms such as bacteria, viruses, fungi, algae, and protozoa. The overuse of many pesticides in an effort to have a higher yield of crops has resulted in many of these microbes evolving a tolerance against these antimicrobial agents. Currently there are over 4000 antimicrobial pesticides registered with the US Environmental Protection Agency (EPA) and sold to market, showing the widespread use of these agents. It is estimated that for every single meal a person consumes, 0.3 g of pesticides is used, as 90% of all pesticide use is in agriculture. A majority of these products are used to help defend against the spread of infectious diseases, and hopefully protect public health. But out of the large amount of pesticides used, it is also estimated that less than 0.1% of those antimicrobial agents actually reach their targets. That leaves over 99% of all pesticides used available to contaminate other resources. In soil, air, and water these antimicrobial agents are able to spread, coming in contact with more microorganisms and leading to these microbes evolving mechanisms to tolerate and further resist pesticides. The use of antifungal azole pesticides that drive environmental azole resistance has been linked to azole resistance cases in the clinical setting. The same issues confront the novel antifungal classes (e.g. orotomides), which are again being used in both the clinic and agriculture.
Wild birds
Wildlife, including wild and migratory birds, serves as a reservoir for zoonotic disease and antimicrobial-resistant organisms. Birds are a key link in the transmission of zoonotic diseases to human populations. By the same token, increased contact between wild birds and human populations (including domesticated animals) has increased antimicrobial resistance (AMR) in bird populations. The introduction of AMR to wild birds positively correlates with human pollution and increased human contact. Additionally, wild birds can participate in horizontal gene transfer with bacteria, leading to the transmission of antibiotic-resistant genes (ARG).
For simplicity, wild bird populations can be divided into two major categories, wild sedentary birds and wild migrating birds. Wild sedentary bird exposure to AMR is through increased contact with densely populated areas, human waste, domestic animals, and domestic animal/livestock waste. Wild migrating birds interact with sedentary birds in different environments along their migration route. This increases the rate and diversity of AMR across varying ecosystems.
Neglect of wildlife in the global discussions surrounding health security and AMR creates large barriers to true AMR surveillance. The surveillance of antimicrobial-resistant organisms in wild birds is a potential metric for the rate of AMR in the environment. This surveillance also allows for further investigation into the transmission routes between different ecosystems and human populations (including domesticated animals and livestock). Such information gathered from wild bird biomes can help identify patterns of disease transmission and better target interventions. These targeted interventions can inform the use of antimicrobial agents and reduce the persistence of multi-drug resistant organisms.
Gene transfer from ancient microorganisms
Permafrost is a term used to refer to any ground that has remained frozen for two years or more, with the oldest known examples continuously frozen for around 700,000 years. In recent decades, permafrost has been rapidly thawing due to climate change. The cold preserves any organic matter inside the permafrost, and it is possible for microorganisms to resume their life functions once it thaws. While some common pathogens such as influenza, smallpox or the bacteria associated with pneumonia have failed to survive intentional attempts to revive them, more cold-adapted microorganisms such as anthrax, or several ancient plant and amoeba viruses, have successfully survived prolonged thaw.
Some scientists have argued that the inability of known causative agents of contagious diseases to survive being frozen and thawed makes this threat unlikely. Instead, there have been suggestions that when modern pathogenic bacteria interact with the ancient ones, they may, through horizontal gene transfer, pick up genetic sequences which are associated with antimicrobial resistance, exacerbating an already difficult issue. Antibiotics to which permafrost bacteria have displayed at least some resistance include chloramphenicol, streptomycin, kanamycin, gentamicin, tetracycline, spectinomycin and neomycin. However, other studies show that resistance levels in ancient bacteria to modern antibiotics remain lower than in the contemporary bacteria from the active layer of thawed ground above them, which may mean that this risk is "no greater" than from any other soil.
Prevention
There have been increasing public calls for global collective action to address the threat, including a proposal for an international treaty on antimicrobial resistance. Further detail and attention is still needed in order to recognize and measure trends in resistance on the international level; the idea of a global tracking system has been suggested but implementation has yet to occur. A system of this nature would provide insight to areas of high resistance as well as information necessary for evaluating programs, introducing interventions and other changes made to fight or reverse antibiotic resistance.
Duration of antimicrobials
Delaying or minimizing the use of antibiotics for certain conditions may help safely reduce their use. Antimicrobial treatment duration should be based on the infection and other health problems a person may have. For many infections once a person has improved there is little evidence that stopping treatment causes more resistance. Some, therefore, feel that stopping early may be reasonable in some cases. Other infections, however, do require long courses regardless of whether a person feels better.
For example, delaying antibiotics for ailments such as a sore throat and otitis media may make no difference in the rate of complications compared with immediate antibiotics. When treating respiratory tract infections, clinical judgement is required as to the appropriate treatment (delayed or immediate antibiotic use).
The study, "Shorter and Longer Antibiotic Durations for Respiratory Infections: To Fight Antimicrobial Resistance—A Retrospective Cross-Sectional Study in a Secondary Care Setting in the UK," highlights the urgency of reevaluating antibiotic treatment durations amidst the global challenge of antimicrobial resistance (AMR). It investigates the effectiveness of shorter versus longer antibiotic regimens for respiratory tract infections (RTIs) in a UK secondary care setting, emphasizing the need for evidence-based prescribing practices to optimize patient outcomes and combat AMR.
Monitoring and mapping
There are multiple national and international monitoring programs for drug-resistant threats, including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant S. aureus (VRSA), extended spectrum beta-lactamase (ESBL) producing Enterobacterales, vancomycin-resistant Enterococcus (VRE), and multidrug-resistant Acinetobacter baumannii (MRAB).
ResistanceOpen is an online global map of antimicrobial resistance developed by HealthMap which displays aggregated data on antimicrobial resistance from publicly available and user submitted data. The website can display data for a radius from a location. Users may submit data from antibiograms for individual hospitals or laboratories. European data is from the EARS-Net (European Antimicrobial Resistance Surveillance Network), part of the ECDC. ResistanceMap is a website by the Center for Disease Dynamics, Economics & Policy and provides data on antimicrobial resistance on a global level.
The WHO's AMR global action plan also recommends antimicrobial resistance surveillance in animals. Initial steps in the EU for establishing the veterinary counterpart EARS-Vet (EARS-Net for veterinary medicine) have been made. AMR data from pets in particular is scarce, but needed to support antibiotic stewardship in veterinary medicine.
By comparison there is a lack of national and international monitoring programs for antifungal resistance.
Limiting antimicrobial use in humans
Antimicrobial stewardship programmes appear useful in reducing rates of antimicrobial resistance. Such programmes also provide pharmacists with the knowledge to educate patients that antibiotics will not work for viral infections, for example.
Excessive antimicrobial use has become one of the top contributors to the evolution of antimicrobial resistance. Since the beginning of the antimicrobial era, antimicrobials have been used to treat a wide range of infectious diseases. Overuse of antimicrobials has become the primary cause of rising levels of antimicrobial resistance. The main problem is that doctors are willing to prescribe antimicrobials to ill-informed individuals who believe that antimicrobials can cure nearly all illnesses, including viral infections like the common cold. In an analysis of drug prescriptions, 36% of individuals with a cold or an upper respiratory infection (both usually viral in origin) were given prescriptions for antibiotics. These prescriptions accomplished nothing other than increasing the risk of further evolution of antibiotic resistant bacteria. Using antimicrobials without prescription is another driving force leading to the overuse of antibiotics to self-treat diseases like the common cold, cough, fever, and dysentery resulting in an epidemic of antibiotic resistance in countries like Bangladesh, risking its spread around the globe. Introducing strict antibiotic stewardship in the outpatient setting to reduce inappropriate prescribing of antibiotics may reduce the emerging bacterial resistance.
The WHO AWaRe (Access, Watch, Reserve) guidance and antibiotic book has been introduced to guide antibiotic choice for the 30 most common infections in adults and children to reduce inappropriate prescribing in primary care and hospitals. Narrow-spectrum antibiotics are preferred due to their lower resistance potential, and broad-spectrum antibiotics are only recommended for people with more severe symptoms. Some antibiotics are more likely to confer resistance, so are kept as reserve antibiotics in the AWaRe book.
Various diagnostic strategies have been employed to prevent the overuse of antifungal therapy in the clinic, proving to be a safe alternative to empirical antifungal therapy and thus underpinning antifungal stewardship schemes.
At the hospital level
Antimicrobial stewardship teams in hospitals are encouraging optimal use of antimicrobials. The goals of antimicrobial stewardship are to help practitioners pick the right drug at the right dose and duration of therapy while preventing misuse and minimizing the development of resistance. Stewardship interventions may reduce the length of stay by an average of slightly over 1 day while not increasing the risk of death. Dispensing to discharged inpatients the exact number of antibiotic units necessary to complete an ongoing treatment can reduce antibiotic leftovers within the community, as community pharmacies may have antibiotic package inefficiencies.
At the primary care level
Given the volume of care provided in primary care (general practice), recent strategies have focused on reducing unnecessary antimicrobial prescribing in this setting. Simple interventions, such as written information explaining when taking antibiotics is not necessary, for example in common infections of the upper respiratory tract, have been shown to reduce antibiotic prescribing. Various tools are also available to help professionals decide if prescribing antimicrobials is necessary.
Parental expectations, driven by worry for their children's health, can influence how often children are prescribed antibiotics. Parents often rely on their clinician for advice and reassurance. However, a lack of plain-language information and inadequate time for consultation negatively impact this relationship. In effect, parents often base their expectations on past experiences rather than on reassurance from the clinician. Adequate time for consultation and plain-language information can help parents make informed decisions and avoid unnecessary antibiotic use.
Parents play a critical role in reducing unnecessary antibiotic use, particularly during cold and flu season when children frequently experience respiratory illnesses. Many of these illnesses are caused by viruses, such as colds or the flu, which antibiotics cannot treat. Misusing antibiotics in these situations not only fails to benefit the child but also contributes to the emergence of antibiotic-resistant bacteria, posing a broader public health threat. To address parental concerns and reduce inappropriate prescribing, healthcare providers can offer plain-language explanations about the difference between bacterial and viral infections, alongside clear guidance on managing viral illnesses without antibiotics. Vaccinations also play a vital role in reducing the incidence of serious bacterial infections that may require antibiotic treatment, thereby helping to preserve the effectiveness of existing antibiotics. Schools further amplify the spread of infections due to close contact and shared surfaces, underscoring the importance of hygiene practices like regular handwashing, covering coughs, and staying home when unwell. These preventive measures not only reduce the need for antibiotics but also lower the overall risk of resistant bacteria spreading within communities.
The prescriber should closely adhere to the five rights of drug administration: the right patient, the right drug, the right dose, the right route, and the right time. Microbiological samples should be taken for culture and sensitivity testing before treatment when indicated and treatment potentially changed based on the susceptibility report.
Health workers and pharmacists can help tackle antibiotic resistance by: enhancing infection prevention and control; only prescribing and dispensing antibiotics when they are truly needed; prescribing and dispensing the right antibiotic(s) to treat the illness. A unit dose system implemented in community pharmacies can also reduce antibiotic leftovers at households.
At the individual level
People can help tackle resistance by using antibiotics only when infected with a bacterial infection and prescribed by a doctor, by completing the full prescription even if they feel better, and by never sharing antibiotics with others or using leftover prescriptions. Taking antibiotics when not needed does not help the user, but instead gives bacteria the opportunity to adapt and leaves the user with the side effects that come with the particular antibiotic. The CDC recommends following these behaviors to avoid negative side effects and to keep the community safe from the spread of drug-resistant bacteria. Practicing basic infection prevention measures, such as hygiene, also helps to prevent the spread of antibiotic-resistant bacteria.
Country examples
The Netherlands has the lowest rate of antibiotic prescribing in the OECD, at a rate of 11.4 defined daily doses (DDD) per 1,000 people per day in 2011. The defined daily dose (DDD) is a statistical measure of drug consumption, defined by the World Health Organization (WHO).
Germany and Sweden also have lower prescribing rates, with Sweden's rate having been declining since 2007.
Greece, France and Belgium have high prescribing rates for antibiotics of more than 28 DDD.
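The defined daily dose metric can be made concrete with a small calculation: the total amount of a drug consumed is divided by its WHO-assigned DDD, then normalized per 1,000 inhabitants per day. The inputs below are made up (a hypothetical drug with a 1 g DDD, an invented population and consumption), so the output only shows the arithmetic, not any country's real figure.

```python
# DDDs per 1,000 inhabitants per day =
#   (grams consumed / DDD in grams) / (population * days) * 1000
# All inputs below are hypothetical.

def ddd_per_1000_per_day(grams_consumed, ddd_grams, population, days):
    total_ddds = grams_consumed / ddd_grams
    return total_ddds / (population * days) * 1000

# Hypothetical: 75 tonnes of an antibiotic with a 1 g DDD,
# consumed by a population of 17 million over one year.
print(round(ddd_per_1000_per_day(75_000_000, 1.0, 17_000_000, 365), 1))
# ~12.1 DDD per 1,000 inhabitants per day, in the range discussed above.
```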
Water, sanitation, hygiene
Infectious disease control through improved water, sanitation and hygiene (WASH) infrastructure needs to be included in the antimicrobial resistance (AMR) agenda. The "Interagency Coordination Group on Antimicrobial Resistance" stated in 2018 that "the spread of pathogens through unsafe water results in a high burden of gastrointestinal disease, increasing even further the need for antibiotic treatment." This is particularly a problem in developing countries where the spread of infectious diseases caused by inadequate WASH standards is a major driver of antibiotic demand. Growing usage of antibiotics together with persistent infectious disease levels have led to a dangerous cycle in which reliance on antimicrobials increases while the efficacy of drugs diminishes. The proper use of infrastructure for water, sanitation and hygiene (WASH) can result in a 47–72 percent decrease of diarrhea cases treated with antibiotics depending on the type of intervention and its effectiveness. A reduction of the diarrhea disease burden through improved infrastructure would result in large decreases in the number of diarrhea cases treated with antibiotics. This was estimated as ranging from 5 million in Brazil to up to 590 million in India by the year 2030. The strong link between increased consumption and resistance indicates that this will directly mitigate the accelerating spread of AMR. Sanitation and water for all by 2030 is Goal Number 6 of the Sustainable Development Goals.
An increase in hand washing compliance by hospital staff results in decreased rates of resistant organisms.
Water supply and sanitation infrastructure in health facilities offer significant co-benefits for combatting AMR, and investment should be increased. There is much room for improvement: WHO and UNICEF estimated in 2015 that globally 38% of health facilities did not have a source of water, nearly 19% had no toilets and 35% had no water and soap or alcohol-based hand rub for handwashing.
Industrial wastewater treatment
Manufacturers of antimicrobials need to improve the treatment of their wastewater (by using industrial wastewater treatment processes) to reduce the release of residues into the environment.
Limiting antimicrobial use in animals and farming
It is established that the use of antibiotics in animal husbandry can give rise, in bacteria found in food animals, to resistance against the antibiotics being administered (through injections or medicated feeds). For this reason, only antimicrobials that are deemed "not clinically relevant" are used in these practices.
Unlike resistance to antibacterials, antifungal resistance can be driven by arable farming; currently there is no regulation on the use of similar antifungal classes in agriculture and the clinic.
Recent studies have shown that the prophylactic use of "non-priority" or "non-clinically relevant" antimicrobials in feeds can potentially, under certain conditions, lead to co-selection of environmental AMR bacteria with resistance to medically important antibiotics. The possibility of co-selection of resistance in the food chain may have far-reaching implications for human health.
Country examples
Europe
In 1997, European Union health ministers voted to ban avoparcin, and in 1999 they banned four additional antibiotics used to promote animal growth. In 2006 a ban on the use of antibiotics in European feed, with the exception of two antibiotics in poultry feeds, became effective. In Scandinavia, there is evidence that the ban has led to a lower prevalence of antibiotic resistance in (nonhazardous) animal bacterial populations. As of 2004, several European countries had established a decline of antimicrobial resistance in humans through limiting the use of antimicrobials in agriculture and food industries without jeopardizing animal health or economic cost.
United States
The United States Department of Agriculture (USDA) and the Food and Drug Administration (FDA) collect data on antibiotic use in humans and in a more limited fashion in animals. About 80% of antibiotic use in the U.S. is for agriculture purposes, and about 70% of these are medically important. This gives reason for concern about the antibiotic resistance crisis in the U.S. and more reason to monitor it. The FDA first determined in 1977 that there is evidence of emergence of antibiotic-resistant bacterial strains in livestock. The long-established practice of permitting OTC sales of antibiotics (including penicillin and other drugs) to lay animal owners for administration to their own animals nonetheless continued in all states.
In 2000, the FDA announced their intention to revoke approval of fluoroquinolone use in poultry production because of substantial evidence linking it to the emergence of fluoroquinolone-resistant Campylobacter infections in humans. Legal challenges from the food animal and pharmaceutical industries delayed the final decision to do so until 2006. Fluoroquinolones have been banned from extra-label use in food animals in the USA since 2007. However, they remain widely used in companion and exotic animals.
Global action plans and awareness
At the 79th United Nations General Assembly High-Level Meeting on AMR on 26 September 2024, world leaders approved a political declaration committing to a clear set of targets and actions, including reducing the estimated 4.95 million human deaths associated with bacterial AMR annually by 10% by 2030.
The increasing interconnectedness of the world and the fact that new classes of antibiotics have not been developed and approved for more than 25 years highlight the extent to which antimicrobial resistance is a global health challenge. A global action plan to tackle the growing problem of resistance to antibiotics and other antimicrobial medicines was endorsed at the Sixty-eighth World Health Assembly in May 2015. One of the key objectives of the plan is to improve awareness and understanding of antimicrobial resistance through effective communication, education and training. This global action plan developed by the World Health Organization was created to combat the issue of antimicrobial resistance and was guided by the advice of countries and key stakeholders. The WHO's global action plan is composed of five key objectives that can be targeted through different means, and represents countries coming together to solve a major problem that can have future health consequences. These objectives are as follows:
improve awareness and understanding of antimicrobial resistance through effective communication, education and training.
strengthen the knowledge and evidence base through surveillance and research.
reduce the incidence of infection through effective sanitation, hygiene and infection prevention measures.
optimize the use of antimicrobial medicines in human and animal health.
develop the economic case for sustainable investment that takes account of the needs of all countries and to increase investment in new medicines, diagnostic tools, vaccines and other interventions.
Steps towards progress
React, based in Sweden, has produced informative material on AMR for the general public.
Videos are being produced for the general public to generate interest and awareness.
The Irish Department of Health published a National Action Plan on Antimicrobial Resistance in October 2017. The Strategy for the Control of Antimicrobial Resistance in Ireland (SARI), launched in 2001, developed Guidelines for Antimicrobial Stewardship in Hospitals in Ireland in conjunction with the Health Protection Surveillance Centre; these were published in 2009. Following their publication, a public information campaign 'Action on Antibiotics' was launched to highlight the need for a change in antibiotic prescribing. Despite this, antibiotic prescribing remains high, with variance in adherence to guidelines.
The United Kingdom published a 20-year vision for antimicrobial resistance that sets out the goal of containing and controlling AMR by 2040. The vision is supplemented by a 5-year action plan running from 2019 to 2024, building on the previous action plan (2013–2018).
The World Health Organization has published the 2024 Bacterial Priority Pathogens List, which covers 15 families of antibiotic-resistant bacterial pathogens. Notable among these are gram-negative bacteria resistant to last-resort antibiotics, drug-resistant Mycobacterium tuberculosis, and other high-burden resistant pathogens such as Salmonella, Shigella, Neisseria gonorrhoeae, Pseudomonas aeruginosa, and Staphylococcus aureus. The inclusion of these pathogens in the list underscores their global impact in terms of burden, as well as issues related to transmissibility, treatability, and prevention options. It also reflects the R&D pipeline of new treatments and emerging resistance trends.
Antibiotic Awareness Week
The World Health Organization has promoted the first World Antibiotic Awareness Week running from 16 to 22 November 2015. The aim of the week is to increase global awareness of antibiotic resistance. It also wants to promote the correct usage of antibiotics across all fields in order to prevent further instances of antibiotic resistance.
World Antibiotic Awareness Week has been held every November since 2015. For 2017, the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO) and the World Organisation for Animal Health (OIE) are together calling for responsible use of antibiotics in humans and animals to reduce the emergence of antibiotic resistance.
United Nations
In 2016 the Secretary-General of the United Nations convened the Interagency Coordination Group (IACG) on Antimicrobial Resistance. The IACG worked with international organizations and experts in human, animal, and plant health to create a plan to fight antimicrobial resistance. Their report released in April 2019 highlights the seriousness of antimicrobial resistance and the threat it poses to world health. It suggests five recommendations for member states to follow in order to tackle this increasing threat. The IACG recommendations are as follows:
Accelerate progress in countries
Innovate to secure the future
Collaborate for more effective action
Invest for a sustainable response
Strengthen accountability and global governance
Mechanisms and organisms
Bacteria
The five main mechanisms by which bacteria exhibit resistance to antibiotics are:
Drug inactivation or modification: for example, enzymatic deactivation of penicillin G in some penicillin-resistant bacteria through the production of β-lactamases. Drugs may also be chemically modified through the addition of functional groups by transferase enzymes; for example, acetylation, phosphorylation, or adenylation are common resistance mechanisms to aminoglycosides. Acetylation is the most widely used mechanism and can affect a number of drug classes.
Alteration of target- or binding site: for example, alteration of PBP—the binding target site of penicillins—in MRSA and other penicillin-resistant bacteria. Another protective mechanism found among bacterial species is ribosomal protection proteins. These proteins protect the bacterial cell from antibiotics that target the cell's ribosomes to inhibit protein synthesis. The mechanism involves the binding of the ribosomal protection proteins to the ribosomes of the bacterial cell, which in turn changes its conformational shape. This allows the ribosomes to continue synthesizing proteins essential to the cell while preventing antibiotics from binding to the ribosome to inhibit protein synthesis.
Alteration of metabolic pathway: for example, some sulfonamide-resistant bacteria do not require para-aminobenzoic acid (PABA), an important precursor for the synthesis of folic acid and nucleic acids in bacteria inhibited by sulfonamides, instead, like mammalian cells, they turn to using preformed folic acid.
Reduced drug accumulation: by decreasing drug permeability or increasing active efflux (pumping out) of the drugs across the cell surface. These pumps within the cellular membrane of certain bacterial species are used to pump antibiotics out of the cell before they are able to do any damage. They are often activated by a specific substrate associated with an antibiotic, as in fluoroquinolone resistance.
Ribosome splitting and recycling: for example, drug-mediated stalling of the ribosome by lincomycin and erythromycin unstalled by a heat shock protein found in Listeria monocytogenes, which is a homologue of HflX from other bacteria. Liberation of the ribosome from the drug allows further translation and consequent resistance to the drug.
There are several different types of microorganisms that have developed resistance over time.
The six pathogens causing most deaths associated with resistance are Escherichia coli, Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa. They were responsible for 929,000 deaths attributable to resistance and 3.57 million deaths associated with resistance in 2019.
Penicillinase-producing Neisseria gonorrhoeae developed a resistance to penicillin in 1976. Another example is Azithromycin-resistant Neisseria gonorrhoeae, which developed a resistance to azithromycin in 2011.
In gram-negative bacteria, plasmid-mediated resistance genes produce proteins that can bind to DNA gyrase, protecting it from the action of quinolones. Finally, mutations at key sites in DNA gyrase or topoisomerase IV can decrease their binding affinity to quinolones, decreasing the drug's effectiveness.
Some bacteria are naturally resistant to certain antibiotics; for example, gram-negative bacteria are resistant to most β-lactam antibiotics due to the presence of β-lactamase. Antibiotic resistance can also be acquired as a result of either genetic mutation or horizontal gene transfer. Although mutations are rare, with spontaneous mutations in the pathogen genome occurring at a rate of about 1 in 10⁵ to 1 in 10⁸ per chromosomal replication, the fact that bacteria reproduce at a high rate allows for the effect to be significant. Given that lifespans and production of new generations can be on a timescale of mere hours, a new (de novo) mutation in a parent cell can quickly become an inherited mutation of widespread prevalence, resulting in the microevolution of a fully resistant colony. However, chromosomal mutations also confer a cost of fitness. For example, a ribosomal mutation may protect a bacterial cell by changing the binding site of an antibiotic but may result in slower growth rate. Moreover, some adaptive mutations can propagate not only through inheritance but also through horizontal gene transfer. The most common mechanism of horizontal gene transfer is the transferring of plasmids carrying antibiotic resistance genes between bacteria of the same or different species via conjugation. However, bacteria can also acquire resistance through transformation, as in Streptococcus pneumoniae uptaking of naked fragments of extracellular DNA that contain antibiotic resistance genes to streptomycin, through transduction, as in the bacteriophage-mediated transfer of tetracycline resistance genes between strains of S. pyogenes, or through gene transfer agents, which are particles produced by the host cell that resemble bacteriophage structures and are capable of transferring DNA.
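To see why even rare mutations matter at bacterial population sizes, the sketch below multiplies an assumed per-replication mutation rate from the quoted range by an assumed population size to estimate how many resistant cells are already expected to be present; both inputs are illustrative assumptions, not measurements.

```python
import math

# Expected number of resistant mutants present in a population, assuming each
# cell independently carries a resistance mutation with probability mu.
# Both values are illustrative assumptions.

mutation_rate = 1e-8    # the rarer end of the quoted range (1 in 10^8)
population = 1e9        # bacteria in a single large infection (assumed)

expected_mutants = mutation_rate * population
print(expected_mutants)                   # 10.0 resistant cells expected

# Probability that at least one resistant mutant exists (Poisson approximation):
print(1 - math.exp(-expected_mutants))    # ~0.99995
```

Under these assumed numbers, an antibiotic almost always has pre-existing resistant cells to select for, which is the quantitative point behind the paragraph above.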
Antibiotic resistance can be introduced artificially into a microorganism through laboratory protocols, sometimes used as a selectable marker to examine the mechanisms of gene transfer or to identify individuals that absorbed a piece of DNA that included the resistance gene and another gene of interest.
Recent findings show that large populations of bacteria are not necessary for antibiotic resistance to appear. Small populations of Escherichia coli in an antibiotic gradient can become resistant. Any heterogeneous environment with respect to nutrient and antibiotic gradients may facilitate antibiotic resistance in small bacterial populations. Researchers hypothesize that the mechanism of resistance evolution is based on four SNP mutations in the genome of E. coli produced by the gradient of antibiotic.
In one study, which has implications for space microbiology, a non-pathogenic strain E. coli MG1655 was exposed to trace levels of the broad spectrum antibiotic chloramphenicol, under simulated microgravity (LSMMG, or Low Shear Modeled Microgravity) over 1000 generations. The adapted strain acquired resistance to not only chloramphenicol, but also cross-resistance to other antibiotics; this was in contrast to the observation on the same strain, which was adapted to over 1000 generations under LSMMG, but without any antibiotic exposure; the strain in this case did not acquire any such resistance. Thus, irrespective of where they are used, the use of an antibiotic would likely result in persistent resistance to that antibiotic, as well as cross-resistance to other antimicrobials.
In recent years, the emergence and spread of β-lactamases called carbapenemases has become a major health crisis. One such carbapenemase is New Delhi metallo-beta-lactamase 1 (NDM-1), an enzyme that makes bacteria resistant to a broad range of beta-lactam antibiotics. The most common bacteria that make this enzyme are gram-negative species such as E. coli and Klebsiella pneumoniae, but the gene for NDM-1 can spread from one strain of bacteria to another by horizontal gene transfer.
Viruses
Specific antiviral drugs are used to treat some viral infections. These drugs prevent viruses from reproducing by inhibiting essential stages of the virus's replication cycle in infected cells. Antivirals are used to treat HIV, hepatitis B, hepatitis C, influenza, herpes viruses including varicella zoster virus, cytomegalovirus and Epstein–Barr virus. With each virus, some strains have become resistant to the administered drugs.
Antiviral drugs typically target key components of viral reproduction; for example, oseltamivir targets influenza neuraminidase, while guanosine analogs inhibit viral DNA polymerase. Resistance to antivirals is thus acquired through mutations in the genes that encode the protein targets of the drugs.
Resistance to HIV antivirals is problematic, and even multi-drug-resistant strains have evolved. One source of resistance is that many current HIV drugs, including NRTIs and NNRTIs, target reverse transcriptase; however, HIV-1 reverse transcriptase is highly error-prone and thus mutations conferring resistance arise rapidly. Resistant strains of HIV emerge rapidly if only one antiviral drug is used. Using three or more drugs together, termed combination therapy, has helped to control this problem, but new drugs are needed because of the continuing emergence of drug-resistant HIV strains.
Fungi
Infections by fungi are a cause of high morbidity and mortality in immunocompromised persons, such as those with HIV/AIDS or tuberculosis, or those receiving chemotherapy. The fungi Candida, Cryptococcus neoformans and Aspergillus fumigatus cause most of these infections, and antifungal resistance occurs in all of them. Multidrug resistance in fungi is increasing because of the widespread use of antifungal drugs to treat infections in immunocompromised individuals and the use of some agricultural antifungals. Antifungal-resistant disease is associated with increased mortality.
Some fungi exhibit intrinsic resistance to particular antifungal drugs or classes (e.g. Candida krusei to fluconazole), whereas other species develop antifungal resistance under external pressures. Antifungal resistance is a One Health concern, driven by multiple extrinsic factors, including extensive fungicide use, overuse of clinical antifungals, environmental change and host factors.
In the USA fluconazole-resistant Candida species and azole resistance in Aspergillus fumigatus have been highlighted as a growing threat.
More than 20 species of Candida can cause candidiasis infection, the most common of which is Candida albicans. Candida yeasts normally inhabit the skin and mucous membranes without causing infection. However, overgrowth of Candida can lead to candidiasis. Some Candida species (e.g. Candida glabrata) are becoming resistant to first-line and second-line antifungal agents such as echinocandins and azoles.
The emergence of Candida auris as a potential human pathogen that sometimes exhibits multi-class antifungal drug resistance is concerning and has been associated with several outbreaks globally. The WHO has released a priority fungal pathogen list, including pathogens with antifungal resistance.
The identification of antifungal resistance is undermined by the limitations of classical diagnosis of infection: where a culture is lacking, susceptibility testing cannot be performed. National and international surveillance schemes for fungal disease and antifungal resistance are limited, hampering understanding of the disease burden and associated resistance. The application of molecular testing to identify genetic markers associated with resistance may improve the identification of antifungal resistance, but the diversity of mutations associated with resistance is increasing across the fungal species causing infection. In addition, a number of resistance mechanisms depend on up-regulation of selected genes (for instance, efflux pumps) rather than on defined mutations that are amenable to molecular detection.
Due to the limited number of antifungals in clinical use and the increasing global incidence of antifungal resistance, using the existing antifungals in combination might be beneficial in some cases but further research is needed. Similarly, other approaches that might help to combat the emergence of antifungal resistance could rely on the development of host-directed therapies such as immunotherapy or vaccines.
Parasites
The protozoan parasites that cause the diseases malaria, trypanosomiasis, toxoplasmosis, cryptosporidiosis and leishmaniasis are important human pathogens.
Malarial parasites that are resistant to the drugs currently available to treat infections are common, and this has led to increased efforts to develop new drugs. Resistance to recently developed drugs such as artemisinin has also been reported. The problem of drug resistance in malaria has driven efforts to develop vaccines.
Trypanosomes are parasitic protozoa that cause African trypanosomiasis and Chagas disease (American trypanosomiasis). There are no vaccines to prevent these infections so drugs such as pentamidine and suramin, benznidazole and nifurtimox are used to treat infections. These drugs are effective but infections caused by resistant parasites have been reported.
Leishmaniasis is caused by protozoa and is an important public health problem worldwide, especially in sub-tropical and tropical countries. Drug resistance has "become a major concern".
Global and genomic data
In 2022, genomic epidemiologists reported results from a global survey of antimicrobial resistance via genomic wastewater-based epidemiology, finding large regional variations, providing maps, and suggesting resistance genes are also passed on between microbial species that are not closely related. The WHO provides the Global Antimicrobial Resistance and Use Surveillance System (GLASS) reports which summarize annual (e.g. 2020's) data on international AMR, also including an interactive dashboard.
Epidemiology
United Kingdom
Public Health England reported that the total number of antibiotic resistant infections in England rose by 9% from 55,812 in 2017 to 60,788 in 2018, but antibiotic consumption had fallen by 9% from 20.0 to 18.2 defined daily doses per 1,000 inhabitants per day between 2014 and 2018.
United States
The Centers for Disease Control and Prevention has reported that more than 2.8 million antibiotic-resistant infections occur each year. However, in 2019 overall deaths from antibiotic-resistant infections decreased by 18% and deaths in hospitals decreased by 30%.
The COVID-19 pandemic reversed much of the progress made on attenuating the effects of antibiotic resistance, resulting in more antibiotic use, more resistant infections, and less data on preventive action. Hospital-onset infections and deaths both increased by 15% in 2020, and significantly higher infection rates were reported for 4 out of 6 types of healthcare-associated infections.
History
The 1950s to 1970s represented the golden age of antibiotic discovery, when many new classes of antibiotics were discovered to treat previously incurable diseases such as tuberculosis and syphilis. Since that time, however, the discovery of new classes of antibiotics has been almost nonexistent, a situation that is especially problematic given the resilience bacteria have shown over time and the continued misuse and overuse of antibiotics in treatment.
The phenomenon of antimicrobial resistance caused by overuse of antibiotics was predicted as early as 1945 by Alexander Fleming, who said "The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily under-dose himself and by exposing his microbes to nonlethal quantities of the drug make them resistant." Without the creation of new and stronger antibiotics, an era in which common infections and minor injuries can kill, and in which complex procedures such as surgery and chemotherapy become too risky, is a very real possibility. Antimicrobial resistance can lead to epidemics of enormous proportions if preventive actions are not taken. Already, antimicrobial resistance leads to longer hospital stays, higher medical costs, and increased mortality.
Society and culture
Innovation policy
Since the mid-1980s pharmaceutical companies have invested in medications for cancer or chronic disease that have greater potential to make money and have "de-emphasized or dropped development of antibiotics". On 20 January 2016 at the World Economic Forum in Davos, Switzerland, more than "80 pharmaceutical and diagnostic companies" from around the world called for "transformational commercial models" at a global level to spur research and development on antibiotics and on the "enhanced use of diagnostic tests that can rapidly identify the infecting organism". A number of countries are considering or implementing delinked payment models for new antimicrobials whereby payment is based on value rather than volume of drug sales. This offers the opportunity to pay for valuable new drugs even if they are reserved for use in relatively rare drug resistant infections.
Legal frameworks
Some global health scholars have argued that a global, legal framework is needed to prevent and control antimicrobial resistance. For instance, binding global policies could be used to create antimicrobial use standards, regulate antibiotic marketing, and strengthen global surveillance systems. Ensuring compliance of involved parties is a challenge. Global antimicrobial resistance policies could take lessons from the environmental sector by adopting strategies that have made international environmental agreements successful in the past such as: sanctions for non-compliance, assistance for implementation, majority vote decision-making rules, an independent scientific panel, and specific commitments.
United States
For the United States 2016 budget, U.S. president Barack Obama proposed to nearly double the amount of federal funding to "combat and prevent" antibiotic resistance to more than $1.2 billion. Many international funding agencies like USAID, DFID, SIDA and Bill & Melinda Gates Foundation have pledged money for developing strategies to counter antimicrobial resistance.
On 27 March 2015, the White House released a comprehensive plan to address the increasing need for agencies to combat the rise of antibiotic-resistant bacteria. The Task Force for Combating Antibiotic-Resistant Bacteria developed The National Action Plan for Combating Antibiotic-Resistant Bacteria with the intent of providing a roadmap to guide the US through the antibiotic resistance challenge and with hopes of saving many lives. The plan outlines steps to be taken by the federal government over the next five years to prevent and contain outbreaks of antibiotic-resistant infections; maintain the efficacy of antibiotics already on the market; and help to develop future diagnostics, antibiotics, and vaccines.
The Action Plan was developed around five goals, focusing on strengthening health care, public health, veterinary medicine, agriculture, food safety, research, and manufacturing. These goals, as listed by the White House, are as follows:
Slow the Emergence of Resistant Bacteria and Prevent the Spread of Resistant Infections
Strengthen National One-Health Surveillance Efforts to Combat Resistance
Advance Development and use of Rapid and Innovative Diagnostic Tests for Identification and Characterization of Resistant Bacteria
Accelerate Basic and Applied Research and Development for New Antibiotics, Other Therapeutics, and Vaccines
Improve International Collaboration and Capacities for Antibiotic Resistance Prevention, Surveillance, Control and Antibiotic Research and Development
The following are goals set to be met by 2020:
Establishment of antimicrobial programs within acute care hospital settings
Reduction of inappropriate antibiotic prescription and use by at least 50% in outpatient settings and by 20% in inpatient settings
Establishment of State Antibiotic Resistance (AR) Prevention Programs in all 50 states
Elimination of the use of medically important antibiotics for growth promotion in food-producing animals.
Current Status of AMR in the U.S.
As of 2023, antimicrobial resistance (AMR) remains a significant public health threat in the United States. According to the Centers for Disease Control and Prevention's 2023 Report on Antibiotic Resistance Threats, over 2.8 million antibiotic-resistant infections occur in the U.S. each year, leading to at least 35,000 deaths annually. Among the most concerning resistant pathogens are Carbapenem-resistant Enterobacteriaceae (CRE), Methicillin-resistant Staphylococcus aureus (MRSA), and Clostridioides difficile (C. diff), all of which continue to be responsible for severe healthcare-associated infections (HAIs).
The COVID-19 pandemic led to a significant disruption in healthcare, with an increase in the use of antibiotics during the treatment of viral infections. This rise in antibiotic prescribing, coupled with overwhelmed healthcare systems, contributed to a resurgence in AMR during the pandemic years. A 2021 CDC report identified a sharp increase in HAIs caused by resistant pathogens in COVID-19 patients, a trend that has persisted into 2023. Recent data suggest that although antibiotic use has decreased since the pandemic, some resistant pathogens remain prevalent in healthcare settings.
The CDC has also expanded its Get Ahead of Sepsis campaign in 2023, focusing on raising awareness of AMR's role in sepsis and promoting the judicious use of antibiotics in both healthcare and community settings. This initiative has reached millions through social media, healthcare facilities, and public health outreach, aiming to educate the public on the importance of preventing infections and reducing antibiotic misuse.
Policies
According to the World Health Organization, policymakers can help tackle resistance by strengthening resistance-tracking and laboratory capacity and by regulating and promoting the appropriate use of medicines. Policymakers and industry can help tackle resistance by fostering innovation and research and development of new tools, and by promoting cooperation and information sharing among all stakeholders.
The U.S. government continues to prioritize AMR mitigation through policy and legislation. In 2023, the National Action Plan for Combating Antibiotic-Resistant Bacteria (CARB) 2023-2028 was released, outlining strategic objectives for reducing antibiotic-resistant infections, advancing infection prevention, and accelerating research on new antibiotics. The plan also emphasizes the importance of improving antibiotic stewardship across healthcare, agriculture, and veterinary settings.
Furthermore, the PASTEUR Act (Pioneering Antimicrobial Subscriptions to End Upsurging Resistance) has gained momentum in Congress. If passed, the bill would create a subscription-based payment model to incentivize the development of new antimicrobial drugs, while supporting antimicrobial stewardship programs to reduce the misuse of existing antibiotics. This legislation is considered a critical step toward addressing the economic barriers to developing new antimicrobials.
Policy evaluation
Measuring the costs and benefits of strategies to combat AMR is difficult and policies may only have effects in the distant future. In other infectious diseases this problem has been addressed by using mathematical models. More research is needed to understand how AMR develops and spreads so that mathematical modelling can be used to anticipate the likely effects of different policies.
Further research
Rapid testing and diagnostics
Distinguishing infections requiring antibiotics from self-limiting ones is clinically challenging. In order to guide appropriate use of antibiotics and prevent the evolution and spread of antimicrobial resistance, diagnostic tests that provide clinicians with timely, actionable results are needed.
Acute febrile illness is a common reason for seeking medical care worldwide and a major cause of morbidity and mortality. In areas with decreasing malaria incidence, many febrile patients are inappropriately treated for malaria, and in the absence of a simple diagnostic test to identify alternative causes of fever, clinicians presume that a non-malarial febrile illness is most likely a bacterial infection, leading to inappropriate use of antibiotics. Multiple studies have shown that the use of malaria rapid diagnostic tests without reliable tools to distinguish other fever causes has resulted in increased antibiotic use.
Antimicrobial susceptibility testing (AST) can facilitate a precision medicine approach to treatment by helping clinicians to prescribe more effective and targeted antimicrobial therapy. However, with traditional phenotypic AST it can take 12 to 48 hours to obtain a result because of the time required for organisms to grow on or in culture media. Rapid testing, made possible by innovations in molecular diagnostics, is defined as "being feasible within an 8-h working shift". There are several commercial Food and Drug Administration-approved assays available which can detect AMR genes from a variety of specimen types. Progress has been slow for a range of reasons, including cost and regulation. Genotypic AMR characterisation methods are, however, increasingly being used in combination with machine learning algorithms in research to help better predict phenotypic AMR from organism genotype.
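The genotype-to-phenotype prediction step can be pictured as a supervised learning problem. The following is a minimal sketch on synthetic data, not any published pipeline: isolates are described by the presence or absence of candidate resistance genes, and a classifier is trained against phenotypic AST labels. All names and numbers below are assumptions for illustration only.

```python
# A minimal sketch of genotype-to-phenotype AMR prediction on a toy dataset.
# Rows are isolates, columns are presence/absence (0/1) of candidate resistance
# genes, and the label is phenotypic resistance from culture-based AST.
# Real pipelines use curated gene catalogues and far richer features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_isolates, n_genes = 300, 20
X = rng.integers(0, 2, size=(n_isolates, n_genes))   # synthetic gene presence/absence
# Assume genes 0 and 3 drive resistance, with 5% label noise.
y = ((X[:, 0] | X[:, 3]) ^ (rng.random(n_isolates) < 0.05)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice the hard part is not the classifier but the curation of gene catalogues and the quality of the phenotypic labels used for training.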
Optical techniques such as phase contrast microscopy in combination with single-cell analysis are another powerful method to monitor bacterial growth. In 2017, scientists from Uppsala University in Sweden published a method that applies principles of microfluidics and cell tracking to monitor bacterial response to antibiotics in less than 30 minutes of overall manipulation time. This invention was awarded the £8 million Longitude Prize on AMR in 2024. Recently, this platform has been advanced by coupling the microfluidic chip with optical tweezers in order to isolate bacteria with an altered phenotype directly from the analytical matrix.
Rapid diagnostic methods have also been trialled as antimicrobial stewardship interventions to influence the healthcare drivers of AMR. Serum procalcitonin measurement has been shown to reduce mortality rate, antimicrobial consumption and antimicrobial-related side-effects in patients with respiratory infections, but its impact on AMR has not yet been demonstrated. Similarly, point-of-care serum testing of the inflammatory biomarker C-reactive protein has been shown to influence antimicrobial prescribing rates in this patient cohort, but further research is required to demonstrate an effect on rates of AMR. Clinical investigations to rule out bacterial infection are often performed for patients with pediatric acute respiratory infections. Currently it is unclear whether rapid viral testing affects antibiotic use in children.
Vaccines
Vaccines are an essential part of the response to reduce AMR as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens.
Microorganisms usually do not develop resistance to vaccines because vaccines reduce the spread of the infection and target the pathogen in multiple ways in the same host and possibly in different ways between different hosts. Furthermore, if the use of vaccines increases, there is evidence that antibiotic resistant strains of pathogens will decrease; the need for antibiotics will naturally decrease as vaccines prevent infection before it occurs. A 2024 report by WHO finds that vaccines against 24 pathogens could reduce the number of antibiotics needed by 22% or 2.5 billion defined daily doses globally every year. If vaccines could be rolled out against all the evaluated pathogens, they could save a third of the hospital costs associated with AMR. Vaccinated people have fewer infections and are protected against potential complications from secondary infections that may need antimicrobial medicines or require admission to hospital. However, there are well documented cases of vaccine resistance, although these are usually much less of a problem than antimicrobial resistance.
While theoretically promising, antistaphylococcal vaccines have shown limited efficacy, because of immunological variation between Staphylococcus species, and the limited duration of effectiveness of the antibodies produced. Development and testing of more effective vaccines is underway.
Two registrational trials have evaluated vaccine candidates in active immunization strategies against S. aureus infection. In a phase II trial, a bivalent vaccine of capsular proteins 5 & 8 was tested in 1804 hemodialysis patients with a primary fistula or synthetic graft vascular access. A protective effect against S. aureus bacteremia was seen at 40 weeks following vaccination, but not at 54 weeks. Based on these results, a second trial was conducted, which failed to show efficacy.
Merck tested V710, a vaccine targeting IsdB, in a blinded randomized trial in patients undergoing median sternotomy. The trial was terminated after a higher rate of multiorgan system failure–related deaths was found in the V710 recipients. Vaccine recipients who developed S. aureus infection were five times more likely to die than control recipients who developed S. aureus infection.
Numerous investigators have suggested that a multiple-antigen vaccine would be more effective, but a lack of biomarkers defining human protective immunity keeps these proposals in the logical, but strictly hypothetical, arena.
Antibody therapy
Antibody therapy is a promising approach against antimicrobial resistance. Monoclonal antibodies (mAbs) target bacterial virulence factors, aiding in bacterial destruction through various mechanisms. Three FDA-approved antibodies target B. anthracis and C. difficile toxins. Innovative strategies include DSTA4637S, an antibody-antibiotic conjugate, and MEDI13902, a bispecific antibody targeting Pseudomonas aeruginosa components.
Alternating therapy
Alternating therapy is a proposed method in which two or three antibiotics are taken in a rotation versus taking just one antibiotic such that bacteria resistant to one antibiotic are killed when the next antibiotic is taken. Studies have found that this method reduces the rate at which antibiotic resistant bacteria emerge in vitro relative to a single drug for the entire duration.
Studies have found that bacteria that evolve antibiotic resistance towards one group of antibiotic may become more sensitive to others. This phenomenon can be used to select against resistant bacteria using an approach termed collateral sensitivity cycling, which has recently been found to be relevant in developing treatment strategies for chronic infections caused by Pseudomonas aeruginosa. Despite its promise, large-scale clinical and experimental studies revealed limited evidence of susceptibility to antibiotic cycling across various pathogens.
Development of new drugs
Since the discovery of antibiotics, research and development (R&D) efforts have provided new drugs in time to treat bacteria that became resistant to older antibiotics, but in the 2000s there has been concern that development has slowed enough that seriously ill people may run out of treatment options. Another concern is that practitioners may become reluctant to perform routine surgeries because of the increased risk of harmful infection. Backup treatments can have serious side-effects; for example, antibiotics like aminoglycosides (such as amikacin, gentamicin, kanamycin, streptomycin, etc.) used for the treatment of drug-resistant tuberculosis and cystic fibrosis can cause respiratory disorders, deafness and kidney failure.
The potential crisis at hand is the result of a marked decrease in industry research and development. Poor financial investment in antibiotic research has exacerbated the situation. The pharmaceutical industry has little incentive to invest in antibiotics because of the high risk and because the potential financial returns are less likely to cover the cost of development than for other pharmaceuticals. In 2011, Pfizer, one of the last major pharmaceutical companies developing new antibiotics, shut down its primary research effort, citing poor shareholder returns relative to drugs for chronic illnesses. However, small and medium-sized pharmaceutical companies are still active in antibiotic drug research. In particular, apart from classical synthetic chemistry methodologies, researchers have developed a combinatorial synthetic biology platform on single cell level in a high-throughput screening manner to diversify novel lanthipeptides.
In the 5–10 years since 2010, there has been a significant change in the ways new antimicrobial agents are discovered and developed – principally via the formation of public-private funding initiatives. These include CARB-X, which focuses on nonclinical and early phase development of novel antibiotics, vaccines, rapid diagnostics; Novel Gram Negative Antibiotic (GNA-NOW), which is part of the EU's Innovative Medicines Initiative; and Replenishing and Enabling the Pipeline for Anti-infective Resistance Impact Fund (REPAIR). Later stage clinical development is supported by the AMR Action Fund, which in turn is supported by multiple investors with the aim of developing 2–4 new antimicrobial agents by 2030. The delivery of these trials is facilitated by national and international networks supported by the Clinical Research Network of the National Institute for Health and Care Research (NIHR), European Clinical Research Alliance in Infectious Diseases (ECRAID) and the recently formed ADVANCE-ID, which is a clinical research network based in Asia. The Global Antibiotic Research and Development Partnership (GARDP) is generating new evidence for global AMR threats such as neonatal sepsis, treatment of serious bacterial infections and sexually transmitted infections as well as addressing global access to new and strategically important antibacterial drugs.
The discovery and development of new antimicrobial agents has been facilitated by regulatory advances, which have been principally led by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA). These processes are increasingly aligned although important differences remain and drug developers must prepare separate documents. New development pathways have been developed to help with the approval of new antimicrobial agents that address unmet needs such as the Limited Population Pathway for Antibacterial and Antifungal Drugs (LPAD). These new pathways are required because of difficulties in conducting large definitive phase III clinical trials in a timely way.
Some of the economic impediments to the development of new antimicrobial agents have been addressed by innovative reimbursement schemes that delink payment for antimicrobials from volume-based sales. In the UK, a market entry reward scheme has been pioneered by the National Institute for Clinical Excellence (NICE) whereby an annual subscription fee is paid for use of strategically valuable antimicrobial agents; cefiderocol and ceftazidime-avibactam are the first agents to be used in this manner, and the scheme is a potential blueprint for comparable programs in other countries.
The available classes of antifungal drugs are still limited but as of 2021 novel classes of antifungals are being developed and are undergoing various stages of clinical trials to assess performance.
Scientists have started using advanced computational approaches with supercomputers for the development of new antibiotic derivatives to deal with antimicrobial resistance.
Biomaterials
Using antibiotic-free alternatives in bone infection treatment may help decrease the use of antibiotics and thus antimicrobial resistance. The bone regeneration material bioactive glass S53P4 has been shown to effectively inhibit the growth of up to 50 clinically relevant bacteria, including MRSA and MRSE.
Nanomaterials
During the last decades, copper and silver nanomaterials have demonstrated appealing features for the development of a new family of antimicrobial agents. Nanoparticles (1–100 nm) show unique properties and promise as antimicrobial agents against resistant bacteria. Silver (AgNPs) and gold nanoparticles (AuNPs) are extensively studied, disrupting bacterial cell membranes and interfering with protein synthesis. Zinc oxide (ZnO NPs), copper (CuNPs), and silica (SiNPs) nanoparticles also exhibit antimicrobial properties. However, high synthesis costs, potential toxicity, and instability pose challenges. To overcome these, biological synthesis methods and combination therapies with other antimicrobials are explored. Enhanced biocompatibility and targeting are also under investigation to improve efficacy.
Rediscovery of ancient treatments
Similar to the situation in malaria therapy, where successful treatments based on ancient recipes have been found, there has already been some success in finding and testing ancient drugs and other treatments that are effective against AMR bacteria.
Computational community surveillance
One of the key tools identified by the WHO and others for the fight against rising antimicrobial resistance is improved surveillance of the spread and movement of AMR genes through different communities and regions. Advances in high-throughput DNA sequencing stemming from the Human Genome Project make it possible to determine the individual microbial genes in a sample. Along with the availability of databases of known antimicrobial resistance genes, such as the Comprehensive Antibiotic Resistance Database (CARD) and ResFinder, this allows the identification of all the antimicrobial resistance genes within the sample – the so-called "resistome". In doing so, a profile of these genes within a community or environment can be determined, providing insights into how antimicrobial resistance is spreading through a population and allowing for the identification of resistance that is of concern.
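As an illustration of the final step, once sequence hits against a resistance-gene database have been obtained, building a simple resistome profile is a matter of tallying hits by drug class. The snippet below is a minimal sketch with hypothetical field names and example hits; real CARD or ResFinder outputs have their own report formats.

```python
# A minimal sketch of summarising a "resistome", assuming hit records have
# already been produced by searching reads or contigs against a resistance-gene
# database (e.g. CARD or ResFinder). Field names and example hits are hypothetical.
from collections import Counter

hits = [  # hypothetical annotation results for one sample
    {"gene": "blaNDM-1", "drug_class": "beta-lactam"},
    {"gene": "tetM",     "drug_class": "tetracycline"},
    {"gene": "blaCTX-M", "drug_class": "beta-lactam"},
    {"gene": "strA",     "drug_class": "aminoglycoside"},
]

profile = Counter(hit["drug_class"] for hit in hits)
for drug_class, count in profile.most_common():
    print(f"{drug_class}: {count} resistance gene(s) detected")
```

Comparing such per-sample profiles across sites or over time is what turns gene detection into surveillance.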
Phage therapy
Phage therapy is the therapeutic use of bacteriophages to treat pathogenic bacterial infections. Phage therapy has many potential applications in human medicine as well as dentistry, veterinary science, and agriculture.
Phage therapy relies on the use of naturally occurring bacteriophages to infect and lyse bacteria at the site of infection in a host. Due to current advances in genetics and biotechnology, these bacteriophages can possibly be manufactured to treat specific infections. Phages can be bioengineered to target multidrug-resistant bacterial infections, and their use has the added benefit of preventing the elimination of beneficial bacteria in the human body. Phages destroy bacterial cell walls and membranes through the use of lytic proteins, which kill bacteria by making many holes from the inside out. Bacteriophages can even digest the biofilms that many bacteria develop to protect themselves from antibiotics, allowing the phages to infect and kill the bacteria effectively. Bioengineering can play a role in creating successful bacteriophages.
Understanding the mutual interactions and evolutions of bacterial and phage populations in the environment of a human or animal body is essential for rational phage therapy.
Bacteriophages are used against antibiotic-resistant bacteria in Georgia (George Eliava Institute) and in one institute in Wrocław, Poland. Bacteriophage cocktails are common drugs sold over the counter in pharmacies in eastern countries. In Belgium, four patients with severe musculoskeletal infections received bacteriophage therapy with concomitant antibiotics. After a single course of phage therapy, no recurrence of infection occurred and no severe side-effects related to the therapy were detected.
See also
References
Books
Journals
16-minute film about a post-antibiotic world. Review:
Further reading
External links
WHO fact sheet on antimicrobial resistance
Animation of Antibiotic Resistance
Bracing for Superbugs: Strengthening environmental action in the One Health response to antimicrobial resistance UNEP, 2023.
CDC Guideline "Management of Multidrug-Resistant Organisms in Healthcare Settings, 2006"
Evolutionary biology
Health disasters
Pharmaceuticals policy
Veterinary medicine
Global issues | Antimicrobial resistance | [
"Biology"
] | 16,355 | [
"Evolutionary biology"
] |
1,915 | https://en.wikipedia.org/wiki/Antigen | In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response.
Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria.
Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction.
Antigens can originate either from within the body ("self-protein" or "self antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self antigens and damage the body's own cells are called autoimmune diseases.
Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example.
Etymology
Paul Ehrlich coined the term antibody in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detre) named the hypothetical substances halfway between bacterial constituents and antibodies "antigenic or immunogenic substances". He originally believed those substances to be precursors of antibodies, just as a zymogen is a precursor of an enzyme. But, by 1903, he understood that an antigen induces the production of immune bodies (antibodies) and wrote that the word antigen is a contraction of antisomatogen. The Oxford English Dictionary indicates that the logical construction should be "anti(body)-gen".
The term originally referred to a substance that acts as an antibody generator.
Terminology
Epitope – the distinct surface features of an antigen, its antigenic determinant. Antigenic molecules, normally "large" biological polymers, usually present surface features that can act as points of interaction for specific antibodies. Any such feature constitutes an epitope. Most antigens have the potential to be bound by multiple antibodies, each of which is specific to one of the antigen's epitopes. Using the "lock and key" metaphor, the antigen can be seen as a string of keys (epitopes), each of which matches a different lock (antibody). Different antibody idiotypes each have distinctly formed complementarity-determining regions.
Allergen – A substance capable of causing an allergic reaction. The (detrimental) reaction may result after exposure via ingestion, inhalation, injection, or contact with skin.
Superantigen – A class of antigens that cause non-specific activation of T-cells, resulting in polyclonal T-cell activation and massive cytokine release.
Tolerogen – A substance that invokes a specific immune non-responsiveness due to its molecular form. If its molecular form is changed, a tolerogen can become an immunogen.
Immunoglobulin-binding protein – Proteins such as protein A, protein G, and protein L that are capable of binding to antibodies at positions outside of the antigen-binding site. While antigens are the "target" of antibodies, immunoglobulin-binding proteins "attack" antibodies.
T-dependent antigen – Antigens that require the assistance of T cells to induce the formation of specific antibodies.
T-independent antigen – Antigens that stimulate B cells directly.
Immunodominant antigens – Antigens that dominate (over all others from a pathogen) in their ability to produce an immune response. T cell responses typically are directed against a relatively few immunodominant epitopes, although in some cases (e.g., infection with the malaria pathogen Plasmodium spp.) it is dispersed over a relatively large number of parasite antigens.
Antigen-presenting cells present antigens in the form of peptides on histocompatibility molecules. The T cells selectively recognize the antigens; depending on the antigen and the type of the histocompatibility molecule, different types of T cells will be activated. For T-cell receptor (TCR) recognition, the peptide must be processed into small fragments inside the cell and presented by a major histocompatibility complex (MHC). The antigen cannot elicit the immune response without the help of an immunologic adjuvant. Similarly, the adjuvant component of vaccines plays an essential role in the activation of the innate immune system.
An immunogen is an antigen substance (or adduct) that is able to trigger a humoral (antibody-mediated) or cell-mediated immune response. It first initiates an innate immune response, which then causes the activation of the adaptive immune response. An antigen binds the highly variable immunoreceptor products (B-cell receptor or T-cell receptor) once these have been generated. Immunogens are those antigens, termed immunogenic, capable of inducing an immune response.
At the molecular level, an antigen can be characterized by its ability to bind to an antibody's paratopes. Different antibodies have the potential to discriminate among specific epitopes present on the antigen surface. A hapten is a small molecule that can only induce an immune response when attached to a larger carrier molecule, such as a protein. Antigens can be proteins, polysaccharides, lipids, nucleic acids or other biomolecules. This includes parts (coats, capsules, cell walls, flagella, fimbriae, and toxins) of bacteria, viruses, and other microorganisms. Non-microbial non-self antigens can include pollen, egg white, and proteins from transplanted tissues and organs or on the surface of transfused blood cells.
Sources
Antigens can be classified according to their source.
Exogenous antigens
Exogenous antigens are antigens that have entered the body from the outside, for example, by inhalation, ingestion or injection. The immune system's response to exogenous antigens is often subclinical. By endocytosis or phagocytosis, exogenous antigens are taken into the antigen-presenting cells (APCs) and processed into fragments. APCs then present the fragments to T helper cells (CD4+) by the use of class II histocompatibility molecules on their surface. Some T cells are specific for the peptide:MHC complex. They become activated and start to secrete cytokines, substances that activate cytotoxic T lymphocytes (CTL), antibody-secreting B cells, macrophages and other particles.
Some antigens start out as exogenous and later become endogenous (for example, intracellular viruses). Intracellular antigens can be returned to circulation upon the destruction of the infected cell.
Endogenous antigens
Endogenous antigens are generated within normal cells as a result of normal cell metabolism, or because of viral or intracellular bacterial infection. The fragments are then presented on the cell surface in the complex with MHC class I molecules. If activated cytotoxic CD8+ T cells recognize them, the T cells secrete various toxins that cause the lysis or apoptosis of the infected cell. In order to keep the cytotoxic cells from killing cells just for presenting self-proteins, the cytotoxic cells (self-reactive T cells) are deleted as a result of tolerance (negative selection). Endogenous antigens include xenogenic (heterologous), autologous and idiotypic or allogenic (homologous) antigens. Sometimes antigens are part of the host itself in an autoimmune disease.
Autoantigens
An autoantigen is usually a self-protein or protein complex (and sometimes DNA or RNA) that is recognized by the immune system of patients with a specific autoimmune disease. Under normal conditions, these self-proteins should not be the target of the immune system, but in autoimmune diseases, their associated T cells are not deleted and instead attack.
Neoantigens
Neoantigens are those that are entirely absent from the normal human genome. As compared with nonmutated self-proteins, neoantigens are of relevance to tumor control, as the quality of the T cell pool that is available for these antigens is not affected by central T cell tolerance. Technology to systematically analyze T cell reactivity against neoantigens became available only recently. Neoantigens can be directly detected and quantified.
Viral antigens
For virus-associated tumors, such as cervical cancer and a subset of head and neck cancers, epitopes derived from viral open reading frames contribute to the pool of neoantigens.
Tumor antigens
Tumor antigens are those antigens that are presented by MHC class I or MHC class II molecules on the surface of tumor cells. Antigens found only on such cells are called tumor-specific antigens (TSAs) and generally result from a tumor-specific mutation. More common are antigens that are presented by tumor cells and normal cells, called tumor-associated antigens (TAAs). Cytotoxic T lymphocytes that recognize these antigens may be able to destroy tumor cells.
Tumor antigens can appear on the surface of the tumor in the form of, for example, a mutated receptor, in which case they are recognized by B cells.
For human tumors without a viral etiology, novel peptides (neo-epitopes) are created by tumor-specific DNA alterations.
Process
A large fraction of human tumor mutations are effectively patient-specific. Therefore, neoantigens may also be based on individual tumor genomes. Deep-sequencing technologies can identify mutations within the protein-coding part of the genome (the exome) and predict potential neoantigens. In mouse models, potential MHC-binding peptides were predicted for all novel protein sequences. The resulting set of potential neoantigens was used to assess T cell reactivity. Exome-based analyses have been exploited in a clinical setting to assess reactivity in patients treated with either tumor-infiltrating lymphocyte (TIL) cell therapy or checkpoint blockade. Neoantigen identification was successful for multiple experimental model systems and human malignancies.
The false-negative rate of cancer exome sequencing is low—i.e.: the majority of neoantigens occur within exonic sequence with sufficient coverage. However, the vast majority of mutations within expressed genes do not produce neoantigens that are recognized by autologous T cells.
As of 2015 mass spectrometry resolution is insufficient to exclude many false positives from the pool of peptides that may be presented by MHC molecules. Instead, algorithms are used to identify the most likely candidates. These algorithms consider factors such as the likelihood of proteasomal processing, transport into the endoplasmic reticulum, affinity for the relevant MHC class I alleles and gene expression or protein translation levels.
The majority of human neoantigens identified in unbiased screens display a high predicted MHC binding affinity. Minor histocompatibility antigens, a conceptually similar antigen class, are also correctly identified by MHC binding algorithms. Another potential filter examines whether the mutation is expected to improve MHC binding. The nature of the central TCR-exposed residues of MHC-bound peptides is associated with peptide immunogenicity.
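In practice, this kind of candidate prioritisation reduces to scoring and filtering a table of mutated peptides. The sketch below is illustrative only: the peptides, the 500 nM IC50 cutoff and the expression threshold are assumed example values, not a validated pipeline.

```python
# A minimal sketch of ranking neoantigen candidates, assuming predicted MHC
# binding affinities (IC50, nM) and gene expression values are already
# available from upstream prediction tools. All values below are hypothetical.
candidates = [  # hypothetical mutated peptides
    {"peptide": "KLDETFQAV", "ic50_nM": 35.0,   "expression_tpm": 120.0},
    {"peptide": "SLFEGIDFY", "ic50_nM": 410.0,  "expression_tpm": 3.0},
    {"peptide": "AQWRTLPNV", "ic50_nM": 2200.0, "expression_tpm": 85.0},
]

def keep(c, ic50_cutoff=500.0, min_tpm=1.0):
    """Retain peptides predicted to bind MHC and arising from expressed genes."""
    return c["ic50_nM"] <= ic50_cutoff and c["expression_tpm"] >= min_tpm

shortlist = sorted((c for c in candidates if keep(c)), key=lambda c: c["ic50_nM"])
for c in shortlist:
    print(c["peptide"], c["ic50_nM"], c["expression_tpm"])
```

Real pipelines combine many more signals (proteasomal processing, transport, clonality of the mutation), but the filter-and-rank structure is the same.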
Nativity
A native antigen is an antigen that is not yet processed by an APC to smaller parts. T cells cannot bind native antigens, but require that they be processed by APCs, whereas B cells can be activated by native ones.
Antigenic specificity
Antigenic specificity is the ability of the host cells to recognize an antigen specifically as a unique molecular entity and distinguish it from another with exquisite precision. Antigen specificity is due primarily to the side-chain conformations of the antigen. It is measurable and need not be linear or of a rate-limited step or equation. Both T cells and B cells are cellular components of adaptive immunity.
See also
References
Immune system
Biomolecules | Antigen | [
"Chemistry",
"Biology"
] | 2,732 | [
"Natural products",
"Biochemistry",
"Antigens",
"Immune system",
"Organic compounds",
"Organ systems",
"Biomolecules",
"Molecular biology",
"Structural biology"
] |
1,926 | https://en.wikipedia.org/wiki/Antlia | Antlia (from Ancient Greek ἀντλία) is a constellation in the Southern Celestial Hemisphere. Its name means "pump" in Latin and Greek; it represents an air pump. Originally Antlia Pneumatica, the constellation was established by Nicolas-Louis de Lacaille in the 18th century. Its single-word name, already in limited use, was preferred by John Herschel and then welcomed by the astronomical community, which officially accepted it. Lying north of stars forming some of the sails of the ship Argo Navis (the constellation Vela), Antlia is completely visible from latitudes south of 49 degrees north.
Antlia is a faint constellation; its brightest star is Alpha Antliae, an orange giant that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. S Antliae is an eclipsing binary star system, changing in brightness as one star passes in front of the other. Sharing a common envelope, the stars are so close they will one day merge to form a single star. Two star systems with known exoplanets, HD 93083 and WASP-66, lie within Antlia, as do NGC 2997, a spiral galaxy, and the Antlia Dwarf Galaxy.
History
The French astronomer Nicolas-Louis de Lacaille first described the constellation in French as la Machine Pneumatique (the Pneumatic Machine) in 1751–52, commemorating the air pump invented by the French physicist Denis Papin. De Lacaille had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope, devising fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. He named all but one in honour of instruments that symbolised the Age of Enlightenment. Lacaille depicted Antlia as a single-cylinder vacuum pump used in Papin's initial experiments, while German astronomer Johann Bode chose the more advanced double-cylinder version. Lacaille Latinised the name to Antlia pneumatica on his 1763 chart. English astronomer John Herschel proposed shrinking the name to one word in 1844, noting that Lacaille himself had abbreviated his constellations thus on occasion. This was universally adopted. The International Astronomical Union adopted it as one of the 88 modern constellations in 1922.
Although visible to the Ancient Greeks, Antlia's stars were too faint to have been commonly recognised as a figurative object, or part of one, in ancient asterisms. The stars that now comprise Antlia are in a zone of the sky associated with the asterism/old constellation Argo Navis, the ship, the Argo, of the Argonauts, in its latter centuries. This, due to its immense size, was split into hull, poop deck and sails by Lacaille in 1763. Ridpath reports that due to their faintness, the stars of Antlia did not make up part of the classical depiction of Argo Navis.
In non-Western astronomy
Chinese astronomers were able to view what is modern Antlia from their latitudes, and incorporated its stars into two different constellations. Several stars in the southern part of Antlia were a portion of "Dong'ou", which represented an area in southern China. Furthermore, Epsilon, Eta, and Theta Antliae were incorporated into the celestial temple, which also contained stars from modern Pyxis.
Characteristics
Covering 238.9 square degrees and hence 0.579% of the sky, Antlia ranks 62nd of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 49°N. Hydra the sea snake runs along the length of its northern border, while Pyxis the compass, Vela the sails, and Centaurus the centaur line it to the west, south and east respectively. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union, is "Ant". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon with an east side, south side and ten other sides (facing the two other cardinal compass points) (illustrated in infobox at top-right). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −24.54° and −40.42°.
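For readers who want to check the quoted percentage, converting square degrees to a fraction of the whole sky is a one-line calculation; the snippet below simply reproduces the arithmetic.

```python
# The celestial sphere spans 4*pi steradians, i.e. about 41,253 square degrees,
# so Antlia's 238.9 square degrees corresponds to roughly 0.58% of the sky.
import math

total_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2   # ≈ 41252.96 deg²
antlia_deg2 = 238.9
print(f"fraction of sky: {antlia_deg2 / total_sky_deg2:.3%}")   # ≈ 0.579%
```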
Features
Stars
Lacaille gave nine stars Bayer designations, labelling them Alpha through to Theta, combining two stars next to each other as Zeta. Gould later added a tenth, Iota Antliae. Beta and Gamma Antliae (now HR 4339 and HD 90156) ended up in the neighbouring constellation Hydra once the constellation boundaries were delineated in 1930. Within the constellation's borders, there are 42 stars brighter than or equal to apparent magnitude 6.5.
The constellation's two brightest stars—Alpha and Epsilon Antliae—shine with a reddish tinge. Alpha is an orange giant of spectral type K4III that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. It is located 320 ± 10 light-years away from Earth. Estimated to be shining with around 480 to 555 times the luminosity of the Sun, it is most likely an ageing star that is brightening and on its way to becoming a Mira variable star, having converted all its core fuel into carbon. Located 590 ± 30 light-years from Earth, Epsilon Antliae is an evolved orange giant star of spectral type K3 IIIa, that has swollen to have a diameter about 69 times that of the Sun, and a luminosity of around 1279 Suns. It is slightly variable. At the other end of Antlia, Iota Antliae is likewise an orange giant of spectral type K1 III. It is 202 ± 2 light-years distant.
Located near Alpha is Delta Antliae, a binary star, 450 ± 10 light-years distant from Earth. The primary is a blue-white main sequence star of spectral type B9.5V and magnitude 5.6, and the secondary is a yellow-white main sequence star of spectral type F9Ve and magnitude 9.6. Zeta Antliae is a wide optical double star. The brighter star—Zeta1 Antliae—is 410 ± 40 light-years distant and has a magnitude of 5.74, though it is a true binary star system composed of two white main sequence stars of magnitudes 6.20 and 7.01 that are separated by 8.042 arcseconds. The fainter star—Zeta2 Antliae—is 386 ± 5 light-years distant and of magnitude 5.9. Eta Antliae is another double composed of a yellow white star of spectral type F1V and magnitude 5.31, with a companion of magnitude 11.3. Theta Antliae is likewise double, most likely composed of an A-type main sequence star and a yellow giant. S Antliae is an eclipsing binary star system that varies in apparent magnitude from 6.27 to 6.83 over a period of 15.6 hours. The system is classed as a W Ursae Majoris variable—the primary is hotter than the secondary and the drop in magnitude is caused by the latter passing in front of the former. Calculating the properties of the component stars from the orbital period indicates that the primary star has a mass 1.94 times and a diameter 2.026 times that of the Sun, and the secondary has a mass 0.76 times and a diameter 1.322 times that of the Sun. The two stars have similar luminosity and spectral type as they have a common envelope and share stellar material. The system is thought to be around 5–6 billion years old. The two stars will eventually merge to form a single fast-spinning star.
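The kind of calculation alluded to for S Antliae can be illustrated with Kepler's third law, which links the orbital period and total mass to the separation of the two stars. The sketch below uses only the figures quoted above (the 15.6-hour period and component masses of 1.94 and 0.76 solar masses) plus the standard conversion of about 215 solar radii per astronomical unit; a full solution additionally requires radial-velocity curves and light-curve modelling.

```python
# Kepler's third law in solar units: a[AU]^3 = M_total[M_sun] * P[yr]^2.
# This only illustrates the scale of the S Antliae system, not the full analysis.
period_yr = 15.6 / (24 * 365.25)          # orbital period in years
m_total_solar = 1.94 + 0.76               # combined mass in solar masses (quoted above)

a_au = (m_total_solar * period_yr ** 2) ** (1 / 3)   # orbital separation in AU
a_solar_radii = a_au * 215.032                       # 1 AU ≈ 215 solar radii
print(f"separation ≈ {a_au:.4f} AU ≈ {a_solar_radii:.1f} solar radii")
```

The result, a few solar radii, is consistent with the stars sharing a common envelope as described above.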
T Antliae is a yellow-white supergiant of spectral type F6Iab and Classical Cepheid variable ranging between magnitude 8.88 and 9.82 over 5.9 days. U Antliae is a red C-type carbon star and is an irregular variable that ranges between magnitudes 5.27 and 6.04. At 910 ± 50 light-years distant, it is around 5819 times as luminous as the Sun. BF Antliae is a Delta Scuti variable that varies by 0.01 of a magnitude. HR 4049, also known as AG Antliae, is an unusual hot variable ageing star of spectral type B9.5Ib-II. It is undergoing intense loss of mass and is a unique variable that does not belong to any class of known variable star, ranging between magnitudes 5.29 and 5.83 with a period of 429 days. It is around 6000 light-years away from Earth. UX Antliae is an R Coronae Borealis variable with a baseline apparent magnitude of around 11.85, with irregular dimmings down to below magnitude 18.0. A luminous and remote star, it is a supergiant with a spectrum resembling that of a yellow-white F-type star but it has almost no hydrogen.
HD 93083 is an orange dwarf star of spectral type K3V that is smaller and cooler than the Sun. It has a planet that was discovered by the radial velocity method with the HARPS spectrograph in 2005. About as massive as Saturn, the planet orbits its star with a period of 143 days at a mean distance of 0.477 AU. WASP-66 is a sunlike star of spectral type F4V. A planet with 2.3 times the mass of Jupiter orbits it every 4 days, discovered by the transit method in 2012. DEN 1048-3956 is a brown dwarf of spectral type M8 located around 13 light-years distant from Earth. At magnitude 17 it is much too faint to be seen with the unaided eye. It has a surface temperature of about 2500 K. Two powerful flares lasting 4–5 minutes each were detected in 2002. 2MASS 0939-2448 is a system of two cool and faint brown dwarfs, probably with effective temperatures of about 500 and 700 K and masses of about 25 and 40 times that of Jupiter, though it is also possible that both objects have temperatures of 600 K and 30 Jupiter masses.
Deep-sky objects
Antlia contains many faint galaxies, the brightest of which is NGC 2997 at magnitude 10.6. It is a loosely wound face-on spiral galaxy of type Sc. Though nondescript in most amateur telescopes, it presents bright clusters of young stars and many dark dust lanes in photographs. Discovered in 1997, the Antlia Dwarf is a 14.8m dwarf spheroidal galaxy that belongs to the Local Group of galaxies. In 2018 the discovery was announced of a very low surface brightness galaxy near Epsilon Antliae, Antlia 2, which is a satellite galaxy of the Milky Way.
The Antlia Cluster, also known as Abell S0636, is a cluster of galaxies located in the Hydra–Centaurus Supercluster. It is the third nearest to the Local Group after the Virgo Cluster and the Fornax Cluster. The cluster's distance from earth is Located in the southeastern corner of the constellation, it boasts the giant elliptical galaxies NGC 3268 and NGC 3258 as the main members of a southern and northern subgroup respectively, and contains around 234 galaxies in total.
Antlia is home to the huge Antlia Supernova Remnant, one of the largest supernova remnants in the sky.
Notes
References
Citations
Sources
External links
The Deep Photographic Guide to the Constellations: Antlia
The clickable Antlia
Southern constellations
Constellations listed by Lacaille | Antlia | [
"Astronomy"
] | 2,431 | [
"Antlia",
"Southern constellations",
"Constellations",
"Constellations listed by Lacaille"
] |
1,927 | https://en.wikipedia.org/wiki/Ara%20%28constellation%29 | Ara (Latin for "the Altar") is a southern constellation between Scorpius, Telescopium, Triangulum Australe, and Norma. It was one of the 48 Greek constellations described by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations designated by the International Astronomical Union.
The orange supergiant Beta Arae is the constellation's brightest star, with a near-constant apparent magnitude of 2.85; it is marginally brighter than the blue-white Alpha Arae. Seven star systems are known to host planets. Sunlike Mu Arae hosts four known planets. Gliese 676 is a gravitationally bound binary red dwarf system with four known planets.
The Milky Way crosses the northwestern part of Ara. Within the constellation is Westerlund 1, a super star cluster that contains the red supergiant Westerlund 1-26, one of the largest stars known.
History
In ancient Greek mythology, Ara was identified as the altar where the gods first made offerings and formed an alliance before defeating the Titans. One of the southernmost constellations depicted by Ptolemy, it had been recorded by Aratus in 270 BC as lying close to the horizon, and the Almagest portrays stars as far south as Gamma Arae. Professor Bradley Schaefer has proposed that observers in antiquity must have been able to see as far south as Zeta Arae in order to record a pattern that looked like an altar.
In illustrations, Ara is usually depicted as a compact classical altar with its smoke 'rising' southward. However, depictions often vary. In the early days of printing, a 1482 woodcut of Gaius Julius Hyginus's classic Poeticon Astronomicon depicts the altar as surrounded by demons. Johann Bayer in 1603 depicted Ara as an altar with burning incense. Indeed, frankincense burners were common throughout the Levant, especially in Yemen, where they are known as Mabkhara. They required live coals or burning embers, called Jamra, in order to burn the incense. Willem Blaeu, a Dutch uranographer of the 16th and 17th centuries, drew Ara as an altar for sacrifices, with the smoke of a burning animal offering unusually rising northward, represented by Alpha Arae.
The Castle of Knowledge by Robert Recorde of 1556 lists the constellation, stating that "Under the Scorpions tayle, standeth the Altar."; a decade later, Barnabe Googe's 1565 translation of a fairly recent, mainly astrological work by Marcellus Palingenius states "Here mayst thou both the Altar, and the myghty Cup beholde."
Equivalents
In Chinese astronomy, the stars of the constellation Ara lie within the Azure Dragon of the East. Five stars of Ara formed a tortoise, while another three formed a pestle.
The Wardaman people of the Northern Territory in Australia saw the stars of Ara and the neighbouring constellation Pavo as flying foxes.
Characteristics
Covering 237.1 square degrees and hence 0.575% of the sky, Ara ranks 63rd of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 22°N. Scorpius runs along the length of its northern border, while Norma and Triangulum Australe border it to the west, Apus to the south, and Pavo and Telescopium to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union, is "Ara". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of twelve segments. In the equatorial coordinate system, the declination coordinates of these borders lie between −45.49° and −67.69°.
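The quoted sky fraction and visibility limit follow from two pieces of simple arithmetic: the constellation's area divided by the total area of the celestial sphere gives the percentage of the sky it covers, and 90° minus the absolute value of its southernmost declination gives the northernmost latitude from which every part of it rises above the horizon. Below is a minimal sketch of that check in Python, assuming the standard 41,252.96-square-degree total sky area (a value not stated in this article) and ignoring atmospheric refraction and horizon obstructions:

```python
# Rough check of Ara's quoted sky coverage and visibility limit.
TOTAL_SKY_SQ_DEG = 41252.96        # area of the whole celestial sphere (assumed standard value)

area_sq_deg = 237.1                # Ara's area, from the text above
southernmost_declination = -67.69  # Ara's southern border in degrees, from the text above

sky_fraction = 100 * area_sq_deg / TOTAL_SKY_SQ_DEG     # about 0.575 per cent
visibility_limit = 90 - abs(southernmost_declination)   # about 22.3 degrees north latitude

print(f"Ara covers {sky_fraction:.3f}% of the sky")
print(f"The whole constellation rises for observers south of about {visibility_limit:.0f}°N")
```

The same relation applies to any constellation, given its area and the declination of its southernmost border.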
Features
Stars
Bayer gave eight stars Bayer designations, labelling them Alpha through to Theta, though he had never seen the constellation directly as it never rises above the horizon in Germany. After charting the southern constellations, French astronomer Nicolas-Louis de Lacaille recharted the stars of Ara from Alpha through to Sigma, including three pairs of stars next to each other as Epsilon, Kappa and Nu.
Ara contains part of the Milky Way to the south of Scorpius and thus has rich star fields. Within the constellation's borders, there are 71 stars brighter than or equal to apparent magnitude 6.5.
Beta Arae, apparent magnitude 2.85, is the brightest star in the constellation, about 0.1 mag brighter than Alpha Arae although the difference in brightness between the two is undetectable by the unaided eye. Beta is an orange-hued star of spectral type K3Ib-IIa that has been classified as a supergiant or bright giant, and lies around 650 light-years from Earth. It is over 8 times as massive and 5,636 times as luminous as the Sun. Close to Beta Arae is Gamma Arae, a blue-hued supergiant of spectral type B1Ib. Of apparent magnitude 3.3, it is 1110 ± 60 light-years from Earth. It has been estimated to be between 12.5 and 25 times as massive as the Sun, and have around 120,000 times its luminosity.
Alpha Arae is a blue-white main sequence star of magnitude 2.95, that is 270 ± 20 light-years from Earth. This star is around 9.6 times as massive as the Sun, and has an average of 4.5 times its radius. It is 5,800 times as luminous as the Sun, its energy emitted from its outer envelope at an effective temperature of 18,044 K. A Be star, Alpha Arae is surrounded by a dense equatorial disk of material in Keplerian (rather than uniform) rotation. The star is losing mass by a polar stellar wind with a terminal velocity of approximately 1,000 km/s.
The third brightest star in Ara at magnitude 3.13 is Zeta Arae, an orange giant of spectral type K3III that is located 490 ± 10 light-years from Earth. Around 7–8 times as massive as the Sun, it has swollen to a diameter around 114 times that of the Sun and is 3800 times as luminous. Were it not dimmed by intervening interstellar dust, it would be significantly brighter at magnitude 2.11.
Delta Arae is a blue-white main sequence star of spectral type B8Vn and magnitude 3.6, 198 ± 4 light-years from Earth. It is around 3.56 times as massive as the Sun.
Epsilon1 Arae is an orange giant of apparent magnitude 4.1, 360 ± 10 light-years distant from Earth. It is around 74% more massive than the Sun. At an age of about 1.7 billion years, the outer envelope of the star has expanded to almost 34 times the Sun's radius.
Eta Arae is an orange giant of apparent magnitude 3.76, located 299 ± 5 light-years distant from Earth. Estimated to be around five billion years old, it has reached the giant star stage of its evolution. With 1.12 times the mass of the Sun, it has an outer envelope that has expanded to 40 times the Sun's radius. The star is now spinning so slowly that it takes more than eleven years to complete a single rotation.
GX 339-4 (V821 Arae) is a moderately strong variable galactic low-mass X-ray binary (LMXB) source and black-hole candidate that flares from time to time. From spectroscopic measurements, the mass of the black hole was found to be at least 5.8 solar masses.
Exoplanets have been discovered in seven star systems in the constellation. Mu Arae (Cervantes) is a sunlike star that hosts four planets. HD 152079 is a sunlike star with a Jupiter-like planet with an orbital period of 2097 ± 930 days. HD 154672 is an ageing sunlike star with a Hot Jupiter. HD 154857 is a sunlike star with one confirmed and one suspected planet. HD 156411 is a star hotter and larger than the Sun with a gas giant planet in orbit. Gliese 674 is a nearby red dwarf star with a planet. Gliese 676 is a binary star system composed of two red dwarfs with four planets.
Deep-sky objects
The northwest corner of Ara is crossed by the galactic plane of the Milky Way and contains several open clusters (notably NGC 6200) and diffuse nebulae (including the bright cluster/nebula pair NGC 6188 and NGC 6193). The brightest of the globular clusters, sixth-magnitude NGC 6397, is one of the closest globular clusters to the Solar System.
Ara also contains Westerlund 1, a super star cluster that contains the possible red supergiant Westerlund 1-237 and the red supergiant Westerlund 1-26. The latter is one of the largest stars known.
Although Ara lies close to the heart of the Milky Way, two spiral galaxies (NGC 6215 and NGC 6221) are visible near the star Eta Arae.
Open clusters
NGC 6193 is an open cluster containing approximately 30 stars with an overall magnitude of 5.0 and a size of 0.25 square degrees, about half the size of the full Moon. It is approximately 4200 light-years from Earth. It has one bright member, a double star with a blue-white hued primary of magnitude 5.6 and a secondary of magnitude 6.9. NGC 6193 is surrounded by NGC 6188, a faint nebula only normally visible in long-exposure photographs.
NGC 6200
NGC 6204
NGC 6208
NGC 6250
NGC 6253
IC 4651
Globular clusters
NGC 6352
NGC 6362
NGC 6397 is a globular cluster with an overall magnitude of 6.0; it is visible to the naked eye under exceptionally dark skies and is normally visible in binoculars. It is a fairly close globular cluster, at a distance of 10,500 light-years.
Planetary nebulae
The Stingray Nebula (Hen 3–1357), the youngest known planetary nebula as of 2010, formed in Ara; the light from its formation was first observable around 1987.
NGC 6326. A planetary nebula that might have a binary system at its center.
Notes
References
Bibliography
Online sources
External links
The Deep Photographic Guide to the Constellations: Ara
Warburg Institute Iconographic Database (medieval and early modern images of Ara)
Constellations
Southern constellations
Constellations listed by Ptolemy | Ara (constellation) | [
"Astronomy"
] | 2,223 | [
"Constellations listed by Ptolemy",
"Ara (constellation)",
"Southern constellations",
"Constellations",
"Sky regions"
] |
1,933 | https://en.wikipedia.org/wiki/Apus | Apus is a small constellation in the southern sky. It represents a bird-of-paradise, and its name means "without feet" in Greek because the bird-of-paradise was once wrongly believed to lack feet. First depicted on a celestial globe by Petrus Plancius in 1598, it was charted on a star atlas by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted and gave the brighter stars their Bayer designations in 1756.
The five brightest stars are all reddish in hue. The brightest of them, at apparent magnitude 3.8, is Alpha Apodis, an orange giant that has around 48 times the diameter and 928 times the luminosity of the Sun. Marginally fainter is Gamma Apodis, another aging giant star. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible with the naked eye. Two star systems have been found to have planets.
History
Apus was one of twelve constellations published by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. De Houtman included it in his southern star catalogue in 1603 under the Dutch name De Paradijs Voghel, "The Bird of Paradise", and Plancius called the constellation Paradysvogel Apis Indica; the first word is Dutch for "bird of paradise". Apis (Latin for "bee") is assumed to have been a typographical error for avis ("bird").
After its introduction on Plancius's globe, the constellation's first known appearance in a celestial atlas was in German cartographer Johann Bayer's Uranometria of 1603. Bayer called it Apis Indica while fellow astronomers Johannes Kepler and his son-in-law Jakob Bartsch called it Apus or Avis Indica. The name Apus is derived from the Greek apous, meaning "without feet". This referred to the Western misconception that the bird-of-paradise had no feet, which arose because the only specimens available in the West had their feet and wings removed. Such specimens began to arrive in Europe in 1522, when the survivors of Ferdinand Magellan's expedition brought them home. The constellation later lost some of its tail when Nicolas-Louis de Lacaille used those stars to establish Octans in the 1750s.
Characteristics
Covering 206.3 square degrees and hence 0.5002% of the sky, Apus ranks 67th of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 7°N. It is bordered by Ara, Triangulum Australe and Circinus to the north, Musca and Chamaeleon to the west, Octans to the south, and Pavo to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Aps". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of six segments. In the equatorial coordinate system, the declination coordinates of these borders lie between −67.48° and −83.12°.
Features
Stars
Lacaille gave twelve stars Bayer designations, labelling them Alpha through to Kappa, including two stars next to each other as Delta and another two stars near each other as Kappa. Within the constellation's borders, there are 39 stars brighter than or equal to apparent magnitude 6.5. Beta, Gamma and Delta Apodis form a narrow triangle, with Alpha Apodis lying to the east. The five brightest stars are all red-tinged, which is unusual among constellations.
Alpha Apodis is an orange giant of spectral type K3III located 430 ± 20 light-years away from Earth, with an apparent magnitude of 3.8. It spent much of its life as a blue-white (B-type) main sequence star before expanding, cooling and brightening as it used up its core hydrogen. It has swollen to 48 times the Sun's diameter, and shines with a luminosity approximately 928 times that of the Sun, with a surface temperature of 4312 K. Beta Apodis is an orange giant 149 ± 2 light-years away, with a magnitude of 4.2. It is around 1.84 times as massive as the Sun, with a surface temperature of 4677 K. Gamma Apodis is a yellow giant of spectral type G8III located 150 ± 4 light-years away, with a magnitude of 3.87. It is approximately 63 times as luminous as the Sun, with a surface temperature of 5279 K. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible through binoculars. Delta1 is a red giant star of spectral type M4III located 630 ± 30 light-years away. It is a semiregular variable that varies from magnitude +4.66 to +4.87, with pulsations of multiple periods of 68.0, 94.9 and 101.7 days. Delta2 is an orange giant star of spectral type K3III, located 550 ± 10 light-years away, with a magnitude of 5.3. The separate components can be resolved with the naked eye.
The fifth-brightest star is Zeta Apodis at magnitude 4.8, a star that has swollen and cooled to become an orange giant of spectral type K1III, with a surface temperature of 4649 K and a luminosity 133 times that of the Sun. It is 300 ± 4 light-years distant. Near Zeta is Iota Apodis, a binary star system 1,040 ± 60 light-years distant, that is composed of two blue-white main sequence stars that orbit each other every 59.32 years. Of spectral types B9V and B9.5 V, they are both over three times as massive as the Sun.
Eta Apodis is a white main sequence star located 140.8 ± 0.9 light-years distant. Of apparent magnitude 4.89, it is 1.77 times as massive, 15.5 times as luminous as the Sun and has 2.13 times its radius. Aged 250 ± 200 million years old, this star is emitting an excess of 24 μm infrared radiation, which may be caused by a debris disk of dust orbiting at a distance of more than 31 astronomical units from it.
Theta Apodis is a cool red giant of spectral type M7 III located 350 ± 30 light-years distant. It shines with a luminosity approximately 3879 times that of the Sun and has a surface temperature of 3151 K. A semiregular variable, it varies by 0.56 magnitudes with a period of 119 days—or approximately 4 months. It is losing mass through its stellar wind. Dusty material ejected from this star is interacting with the surrounding interstellar medium, forming a bow shock as the star moves through the galaxy. NO Apodis is a red giant of spectral type M3III that varies between magnitudes 5.71 and 5.95. Located 780 ± 20 light-years distant, it shines with a luminosity estimated at 2059 times that of the Sun and has a surface temperature of 3568 K. S Apodis is a rare R Coronae Borealis variable, an extremely hydrogen-deficient supergiant thought to have arisen as the result of the merger of two white dwarfs; fewer than 100 have been discovered as of 2012. It has a baseline magnitude of 9.7. R Apodis is a star that was given a variable star designation, yet has turned out not to be variable. Of magnitude 5.3, it is another orange giant.
Two star systems have had exoplanets discovered by Doppler spectroscopy, and the substellar companion of a third star system—the sunlike star HD 131664—has since been found to be a brown dwarf with a calculated mass of 23 times that of Jupiter (a minimum of 18 and a maximum of 49 Jovian masses). HD 134606 is a yellow sunlike star of spectral type G6IV that has begun expanding and cooling off the main sequence. Three planets orbit it with periods of 12, 59.5 and 459 days, successively larger as they are further away from the star. HD 137388 is another star—of spectral type K2IV—that is cooler than the Sun and has begun cooling off the main sequence. Around 47% as luminous and 88% as massive as the Sun, with 85% of its diameter, it is thought to be around 7.4 ± 3.9 billion years old. It has a planet that is 79 times as massive as the Earth and orbits its sun every 330 days at an average distance of 0.89 astronomical units (AU).
Deep-sky objects
The Milky Way covers much of the constellation's area. Of the deep-sky objects in Apus, there are two prominent globular clusters—NGC 6101 and IC 4499—and a large faint nebula that covers several degrees east of Beta and Gamma Apodis. NGC 6101 is a globular cluster of apparent magnitude 9.2 located around 50,000 light-years distant from Earth, which is around 160 light-years across. Around 13 billion years old, it contains a high concentration of massive bright stars known as blue stragglers, thought to be the result of two stars merging. IC 4499 is a loose globular cluster in the medium-far galactic halo; its apparent magnitude is 10.6.
The galaxies in the constellation are faint. IC 4633 is a very faint spiral galaxy surrounded by a vast amount of Milky Way line-of-sight integrated flux nebulae—large faint clouds thought to be lit by large numbers of stars.
See also
IAU-recognized constellations
Notes
References
External links
The Deep Photographic Guide to the Constellations: Apus
The clickable Apus
Southern constellations
Constellations listed by Petrus Plancius | Apus | [
"Astronomy"
] | 2,182 | [
"Constellations listed by Petrus Plancius",
"Apus",
"Southern constellations",
"Constellations"
] |
1,938 | https://en.wikipedia.org/wiki/Andrew%20Carnegie | Andrew Carnegie ( , ; November 25, 1835August 11, 1919) was a Scottish-American industrialist and philanthropist. Carnegie led the expansion of the American steel industry in the late-19th century and became one of the richest Americans in history.
He became a leading philanthropist in the United States, Great Britain, and the British Empire. During the last 18 years of his life, he gave away around $350 million, almost 90 percent of his fortune, to charities, foundations and universities. His 1889 article proclaiming "The Gospel of Wealth" called on the rich to use their wealth to improve society, expressed support for progressive taxation and an estate tax, and stimulated a wave of philanthropy.
Carnegie was born in Dunfermline, Scotland. He immigrated to what is now Pittsburgh, Pennsylvania, United States with his parents in 1848 at the age of 12. Carnegie started work in a cotton mill and later as a telegrapher. By the 1860s he had investments in railroads, railroad sleeping cars, bridges, and oil derricks. He accumulated further wealth as a bond salesman, raising money for American enterprise in Europe. He built Pittsburgh's Carnegie Steel Company, which he sold to J. P. Morgan in 1901 for $303,450,000; it formed the basis of the U.S. Steel Corporation. After selling Carnegie Steel, he surpassed John D. Rockefeller as the richest American of the time.
Carnegie devoted the remainder of his life to large-scale philanthropy, with special emphasis on building local libraries, working for world peace, education, and scientific research. He funded Carnegie Hall in New York City, the Peace Palace in The Hague, founded the Carnegie Corporation of New York, Carnegie Endowment for International Peace, Carnegie Institution for Science, Carnegie Trust for the Universities of Scotland, Carnegie Hero Fund, Carnegie Mellon University, and the Carnegie Museums of Pittsburgh, among others.
Biography
Early life
Andrew Carnegie was born to Margaret (Morrison) Carnegie and William Carnegie in Dunfermline, Scotland, in a typical weaver's cottage with only one main room. It consisted of half the ground floor, which was shared with the neighboring weaver's family. The main room served as a living room, dining room and bedroom. He was named after his paternal grandfather. William Carnegie had a successful weaving business and owned multiple looms.
In 1836, the family moved to a larger house in Edgar Street (opposite Reid's Park), following the demand for more heavy damask, from which his father benefited. Carnegie was educated at the Free School in Dunfermline, a gift to the town from philanthropist Adam Rolland of Gask.
Carnegie's maternal uncle, Scottish political leader George Lauder Sr., deeply influenced him as a boy by introducing him to Robert Burns' writings and historical Scottish heroes such as Robert the Bruce, William Wallace, and Rob Roy. Lauder's son, also named George Lauder, grew up with Carnegie and later became his business partner in the United States.
When Carnegie was 12, his father had fallen on tough times as a handloom weaver. Making matters worse, the country was gripped by famine. His mother helped support the family by assisting her brother and by selling potted meats at her "sweetie shop", becoming the primary breadwinner. Struggling to make ends meet, the Carnegies decided to borrow money from George Lauder, Sr. and move to the United States in 1848 for the prospect of a better life. They headed to Allegheny, Pennsylvania, where they heard there was a demand for workers. Carnegie's emigration to America was his second journey outside Dunfermline. The first was a family outing to Edinburgh to see Queen Victoria.
In September 1848, Carnegie and his family arrived in Allegheny. Carnegie's father struggled to sell his product on his own. Eventually, the father and son both received job offers at Anchor Cotton Mills, a Scottish-owned facility. Carnegie's first job in 1848 was as a bobbin boy, changing spools of thread 12 hours a day, 6 days a week in a Pittsburgh cotton factory. His starting wage was $1.20 per week.
His father soon quit his position at the cotton mill, returning to his loom, and once again ceased to be a substantial breadwinner. But Carnegie attracted the attention of John Hay, a Scottish manufacturer of bobbins, who offered him a job for $2.00 per week.
In his autobiography, Carnegie writes about the hardships he had to endure with this new job:
Telegraph
In 1849, Carnegie became a telegraph messenger boy in the Pittsburgh Office of the Ohio Telegraph Company, at $2.50 per week, following the recommendation of his uncle. He was a hard worker and would memorize all of the locations of Pittsburgh's businesses and the faces of important men. He made many connections this way. He also paid close attention to his work and quickly learned to distinguish the different sounds the incoming telegraph signals produced. He developed the ability to translate signals by ear, without using the paper slip.
Within a year he was promoted to an operator. Carnegie's education and passion for reading were given a boost by Colonel James Anderson, who opened his personal library of 400 volumes to working boys each Saturday night. Carnegie was a consistent borrower and a "self-made man" in both his economic development and his intellectual and cultural development. He was so grateful to Colonel Anderson for the use of his library that he "resolved, if ever wealth came to me, [to see to it] that other poor boys might receive opportunities similar to those for which we were indebted to the nobleman". His capacity, his willingness for hard work, his perseverance, and his alertness soon brought him opportunities.
Railroads
Starting in 1853, when Carnegie was around 18 years old, Thomas A. Scott of the Pennsylvania Railroad employed him as a secretary/telegraph operator at a salary of $4.00 per week. Carnegie accepted the job with the railroad as he saw more prospects for career growth and experience there than with the telegraph company. When Carnegie was 24 years old, Scott asked him if he could handle being superintendent of the Western Division of the Pennsylvania Railroad.
On December 1, 1859, Carnegie officially became superintendent of the Western Division. He hired his sixteen-year-old brother Tom to be his personal secretary and telegraph operator. Carnegie also hired his cousin, Maria Hogan, who became the first female telegraph operator in the country. As superintendent, Carnegie made a salary of $1,500 a year. His employment by the Pennsylvania Railroad would be vital to his later success. The railroads were the first big businesses in America, and the Pennsylvania was one of the largest. Carnegie learned much about management and cost control during these years, and from Scott in particular.
Scott also helped him with his first investments. Many of these were part of the corruption indulged in by Scott and the president of the Pennsylvania Railroad, John Edgar Thomson, which consisted of inside trading in companies with which the railroad did business, or payoffs made by contracting parties "as part of a quid pro quo". In 1855, Scott made it possible for Carnegie to invest $500 in the Adams Express Company, which contracted with the Pennsylvania to carry its messengers. The money was secured by his mother's placing of a $600 mortgage on the family's $700 home, but the opportunity was available only because of Carnegie's close relationship with Scott. A few years later, he received a few shares in Theodore Tuttle Woodruff's sleeping car company as a reward for holding shares that Woodruff had given to Scott and Thomson, as a payoff. Reinvesting his returns in such inside investments in railroad-related industries (iron, bridges, and rails), Carnegie slowly accumulated capital, the basis for his later success. Throughout his later career, he made use of his close connections to Thomson and Scott, as he established businesses that supplied rails and bridges to the railroad, offering the two men stakes in his enterprises.
1860–1865: American Civil War
Before the American Civil War, Carnegie arranged a merger between Woodruff's company and that of George Pullman, the inventor of the sleeping car for first-class travel, which facilitated long-distance business travel. The investment proved a success and a source of profit for Woodruff and Carnegie. The young Carnegie continued to work for Pennsylvania's Tom Scott and introduced several improvements in the service.
In the spring of 1861, Carnegie was appointed by Scott, who was now Assistant Secretary of War in charge of military transportation, as Superintendent of the Military Railways and the Union Government's telegraph lines in the East. Carnegie helped open the rail lines into Washington D.C. that the rebels had cut; he rode the locomotive pulling the first brigade of Union troops to reach Washington D.C. Following the defeat of Union forces at Bull Run, he personally supervised the transportation of the defeated forces. Under his organization, the telegraph service rendered efficient service to the Union cause and significantly assisted in the eventual victory. Carnegie later joked that he was "the first casualty of the war" when he gained a scar on his cheek from freeing a trapped telegraph wire.
The defeat of the Confederacy required vast supplies of munitions, railroads and telegraph lines to deliver the goods. The war demonstrated how integral the industries were to Union success.
Keystone Bridge Company
In 1864, Carnegie was one of the early investors in the Columbia Oil Company in Venango County, Pennsylvania. In one year, the firm yielded over $1 million in cash dividends, and petroleum from oil wells on the property sold profitably. The demand for iron products, such as armor for gunboats, cannons, and shells, as well as a hundred other industrial products, made Pittsburgh a center of wartime production. Carnegie worked with others in establishing a steel rolling mill, and steel production and control of industry became the source of his fortune. Carnegie had some investments in the iron industry before the war.
After the war, Carnegie left the railroads to devote his energies to the ironworks trade. Carnegie worked to develop several ironworks, eventually forming the Keystone Bridge Works and the Union Ironworks, in Pittsburgh. Although he had left the Pennsylvania Railroad Company, he remained connected to its management, namely Thomas A. Scott and J. Edgar Thomson. He used his connection to the two men to acquire contracts for his Keystone Bridge Company and the rails produced by his ironworks. He also gave stock in his businesses to Scott and Thomson, and the Pennsylvania was his best customer. When he built his first steel plant, he made a point of naming it after Thomson. As well as having good business sense, Carnegie possessed charm and literary knowledge. He was invited to many important social functions, which Carnegie exploited to his advantage.
Carnegie, through Keystone, supplied the steel for and owned shares in the landmark Eads Bridge project across the Mississippi River at St. Louis, Missouri (completed 1874). This project was an important proof-of-concept for steel technology, which marked the opening of a new steel market.
Carnegie believed in using his fortune for others and doing more than making money. In 1868, at age 33, he wrote:
Industrialist
1875–1900: Steel empire
Carnegie made his fortune in the steel industry, controlling the most extensive integrated iron and steel operations ever owned by an individual in the United States. One of his two great innovations was in the cheap and efficient mass production of steel by adopting and adapting the Bessemer process, which allowed the high carbon content of pig iron to be burnt away in a controlled and rapid way during steel production. Steel prices dropped as a result, and Bessemer steel was rapidly adopted for rails; however, it was not suitable for buildings and bridges.
The second was in his vertical integration of all suppliers of raw materials. In 1883, Carnegie bought the rival Homestead Steel Works, which included an extensive plant served by tributary coal and iron fields, a railway, and a line of lake steamships. In the late 1880s, Carnegie Steel was the largest manufacturer of pig iron, steel rails, and coke in the world, with a capacity to produce approximately 2,000 tons of pig iron per day.
By 1889, the U.S. output of steel exceeded that of the UK, and Carnegie owned a large part of it. Carnegie's empire grew to include the J. Edgar Thomson Steel Works in Braddock (named for John Edgar Thomson, Carnegie's former boss and president of the Pennsylvania Railroad), the Pittsburgh Bessemer Steel Works, the Lucy Furnaces, the Union Iron Mills, the Union Mill (Wilson, Walker & County), the Keystone Bridge Works, the Hartman Steel Works, the Frick Coke Company, and the Scotia ore mines. Carnegie combined his assets and those of his associates in 1892 with the launching of the Carnegie Steel Company.
Carnegie's success was also due to his relationship with the railroad industries, which not only relied on steel for track, but were also making money from steel transport. The steel and railroad barons worked closely to negotiate prices instead of allowing free-market competition.
Besides Carnegie's market manipulation, United States trade tariffs were also working in favor of the steel industry. Carnegie spent energy and resources lobbying Congress for a continuation of favorable tariffs from which he earned millions of dollars a year. Carnegie tried to keep this information concealed, but legal documents released in 1900, during proceedings with the ex-chairman of Carnegie Steel, Henry Clay Frick, revealed how favorable the tariffs had been.
1901: U.S. Steel
In 1901, Carnegie was 65 years of age and considering retirement. He reformed his enterprises into conventional joint stock corporations as preparation for this. John Pierpont Morgan was a banker and America's most important financial deal maker. He had observed how efficiently Carnegie produced profits. He envisioned an integrated steel industry that would cut costs, lower prices to consumers, produce in greater quantities and raise wages to workers. To this end, he needed to buy out Carnegie and several other major producers and integrate them into one company, thereby eliminating duplication and waste. He concluded negotiations on March 2, 1901, and formed the United States Steel Corporation. It was the first corporation in the world with a market capitalization of over $1 billion.
The buyout, secretly negotiated by Charles M. Schwab (no relation to Charles R. Schwab), was the largest such industrial takeover in United States history to date. The holdings were incorporated in the United States Steel Corporation, a trust organized by Morgan, and Carnegie retired from business. His steel enterprises were bought out for $303,450,000.
Carnegie's share of this amounted to $225.64 million, which was paid to him in the form of 5%, 50-year gold bonds. The letter agreeing to sell his share was signed on February 26, 1901. On March 2, the circular formally filing the organization and capitalization (at $1.4 billion, 4% of the U.S. gross domestic product at the time) of the United States Steel Corporation actually completed the contract. The bonds were to be delivered within two weeks to the Hudson Trust Company of Hoboken, New Jersey, in trust to Robert A. Franks, Carnegie's business secretary. There, a special vault was built to house the physical bulk of nearly $230 million worth of bonds.
Scholar and activist
1880–1900
Carnegie continued his business career; some of his literary intentions were fulfilled. He befriended the English poet Matthew Arnold, the English philosopher Herbert Spencer, and the American humorist Mark Twain, as well as being in correspondence and acquaintance with most of the U.S. Presidents, statesmen, and notable writers.
Carnegie constructed commodious swimming-baths for the people of his hometown in Dunfermline in 1879. In the following year, Carnegie gave £8,000 for the establishment of a Dunfermline Carnegie Library in Scotland. In 1884, he gave $50,000 to Bellevue Hospital Medical College (now part of New York University Medical Center) to create a histological laboratory, now called the Carnegie Laboratory.
In 1881, Carnegie took his family, including his 70-year-old mother, on a trip to the United Kingdom. They toured Scotland by coach and enjoyed several receptions en route. The highlight was a return to Dunfermline, where Carnegie's mother laid the foundation stone of a Carnegie Library which he funded. Carnegie's criticism of British society did not mean dislike; on the contrary, one of Carnegie's ambitions was to act as a catalyst for a close association between English-speaking peoples. To this end, in the early 1880s in partnership with Samuel Storey, he purchased numerous newspapers in Britain, all of which were to advocate the abolition of the monarchy and the establishment of "the British Republic". Carnegie's charm, aided by his wealth, afforded him many British friends, including Prime Minister William Ewart Gladstone.
In 1886, Carnegie's younger brother Thomas died at age 43. While owning steel works, Carnegie had purchased at low cost the most valuable of the iron ore fields around Lake Superior.
Following his tour of the UK, he wrote about his experiences in a book entitled An American Four-in-hand in Britain. In 1886, Carnegie wrote his most radical work to date, entitled Triumphant Democracy. Liberal in its use of statistics to make its arguments, the book argued his view that the American republican system of government was superior to the British monarchical system. It gave a highly favorable and idealized view of American progress and criticized the British royal family. The cover depicted an upended royal crown and a broken scepter. The book created considerable controversy in the UK. The book made many Americans appreciate their country's economic progress and sold over 40,000 copies, mostly in the U.S.
Although actively involved in running his many businesses, Carnegie had become a regular contributor to numerous magazines, most notably The Nineteenth Century, under the editorship of James Knowles, and the influential North American Review, led by the editor Lloyd Bryce. In 1889, Carnegie published "Wealth" in the June issue of the North American Review. After reading it, Gladstone requested its publication in Britain, where it appeared as "The Gospel of Wealth" in The Pall Mall Gazette. Carnegie argued that the life of a wealthy industrialist should comprise two parts. The first part was the gathering and the accumulation of wealth. The second part was for the subsequent distribution of this wealth to benevolent causes. Philanthropy was key to making life worthwhile.
Carnegie was a well-regarded writer. He published three books on travel.
Anti-imperialism
In the aftermath of the Spanish–American War, the United States seemed poised to annex Cuba, Guam, Puerto Rico and the Philippines. Carnegie strongly opposed the idea of American colonies. He opposed the annexation of the Philippines almost to the point of supporting William Jennings Bryan against McKinley in 1900. In 1898, Carnegie tried to arrange independence for the Philippines. As the conclusion of the Spanish–American War neared, the United States purchased the Philippines from Spain for $20 million. To counter what he perceived as American imperialism, Carnegie personally offered $20 million to the Philippines so that the Filipino people could purchase their independence from the United States. However, nothing came of the offer. In 1898 Carnegie joined the American Anti-Imperialist League, in opposition to the U.S. annexation of the Philippines. Its membership included former presidents of the United States Grover Cleveland and Benjamin Harrison and literary figures such as Mark Twain.
1901–1919: Philanthropist
Carnegie spent his last years as a philanthropist. From 1901 forward, public attention was turned from the shrewd business acumen which had enabled Carnegie to accumulate such a fortune, to the public-spirited way in which he devoted himself to using it on philanthropic projects. He had written about his views on social subjects and the responsibilities of great wealth in Triumphant Democracy (1886) and Gospel of Wealth (1889). Carnegie devoted the rest of his life to providing capital for purposes of public interest and social and educational advancement. He saved letters of appreciation from those he helped in a desk drawer labeled "Gratitude and Sweet Words."
He provided $25,000 a year to the movement for spelling reform. His organization, the Simplified Spelling Board, created the Handbook of Simplified Spelling, which was written wholly in reformed spelling.
3,000 public libraries
Among his many philanthropic efforts, the establishment of public libraries throughout the United States, Britain, Canada, New Zealand, and other mostly English-speaking countries was especially prominent. In this special driving interest of his, Carnegie was inspired by meetings with philanthropist Enoch Pratt (1808–1896). The Enoch Pratt Free Library (1886) of Baltimore, Maryland, impressed Carnegie deeply; he said, "Pratt was my guide and inspiration."
Carnegie turned over management of the library project by 1908 to his staff, led by James Bertram (1874–1934). The first Carnegie Library opened in 1883 in Dunfermline. His method was to provide funds to build and equip the library, but only on the condition that the local authority matched that by providing the land and a budget for operation and maintenance.
To secure local interest, in 1885, he gave $500,000 to Pittsburgh, Pennsylvania, for a public library; in 1886, he gave $250,000 to Allegheny City, Pennsylvania, for a music hall and library; and he gave $250,000 to Edinburgh for a free library. In total, Carnegie funded some 3,000 libraries, located in 47 U.S. states, and also in Canada, Britain, Ireland, Belgium, Serbia, France, Australia, New Zealand, South Africa, the West Indies, and Fiji. He also donated £50,000 to help set up the University of Birmingham in 1899.
As Van Slyck (1991) showed, during the last years of the 19th century, there was the increasing adoption of the idea that free libraries should be available to the American public. But the design of such libraries was the subject of prolonged and heated debate. On one hand, the library profession called for designs that supported efficiency in administration and operation; on the other, wealthy philanthropists favored buildings that reinforced the paternalistic metaphor and enhanced civic pride. Between 1886 and 1917, Carnegie reformed both library philanthropy and library design, encouraging a closer correspondence between the two.
Investing in education, science, pensions, civil heroism, music, and world peace
In 1900, Carnegie gave $2 million to start the Carnegie Institute of Technology (CIT) at Pittsburgh and the same amount in 1902 to create the Carnegie Institution at Washington, D.C., to encourage research and discovery. He later contributed more to these and other schools. CIT is now known as Carnegie Mellon University after it merged with the Mellon Institute of Industrial Research. Carnegie also served on the Boards of Cornell University and Stevens Institute of Technology.
In 1911, Carnegie became a sympathetic benefactor to George Ellery Hale, who was trying to build the Hooker Telescope at Mount Wilson, and donated an additional ten million dollars to the Carnegie Institution with the following suggestion to expedite the construction of the telescope: "I hope the work at Mount Wilson will be vigorously pushed, because I am so anxious to hear the expected results from it. I should like to be satisfied before I depart, that we are going to repay to the old land some part of the debt we owe them by revealing more clearly than ever to them the new heavens." The telescope saw first light on November 2, 1917, with Carnegie still alive.
In 1901, in Scotland, he gave $10 million to establish the Carnegie Trust for the Universities of Scotland. It was created by a deed that he signed on June 7, 1901, and it was incorporated by royal charter on August 21, 1902. The establishing gift of $10 million was then an unprecedented sum: at the time, total government assistance to all four Scottish universities was about £50,000 a year. The aim of the Trust was to improve and extend the opportunities for scientific research in the Scottish universities and to enable the deserving and qualified youth of Scotland to attend a university. He was subsequently elected Lord Rector of University of St. Andrews in December 1901, and formally installed as such in October 1902, serving until 1907. He also donated large sums of money to Dunfermline, the place of his birth. In addition to a library, Carnegie also bought the private estate which became Pittencrieff Park and opened it to all members of the public, establishing the Carnegie Dunfermline Trust to benefit the people of Dunfermline. A statue of Carnegie was later built between 1913 and 1914 in the park as a commemoration for his creation of the park.
Carnegie was a major patron of music. He was a founding financial backer of Jeannette Thurber's National Conservatory of Music of America in 1885. He built the music performing venue Carnegie Hall in New York City; it opened in 1891 and remained in his family until 1925. His interest in music led him to fund the construction of 7,000 pipe organs in churches and temples, with no apparent preference for any religious denomination or sect.
He gave a further $10 million in 1913 to endow the Carnegie United Kingdom Trust, a grant-making foundation. He transferred to the trust the charge of all his existing and future benefactions, other than university benefactions in the United Kingdom. He gave the trustees a wide discretion, and they inaugurated a policy of financing rural library schemes rather than erecting library buildings, and of assisting the musical education of the people rather than granting organs to churches.
In 1901, Carnegie also established large pension funds for his former employees at Homestead and, in 1905, for American college professors. The latter fund evolved into TIAA-CREF. One critical requirement was that church-related schools had to sever their religious connections to get his money.
Carnegie was a large benefactor of the Tuskegee Institute for Black American education under Booker T. Washington. He helped Washington create the National Negro Business League.
In 1904, he founded the Carnegie Hero Fund for the United States and Canada (a few years later also established in the United Kingdom, Switzerland, Norway, Sweden, France, Italy, the Netherlands, Belgium, Denmark, and Germany) for the recognition of deeds of heroism. Carnegie contributed $1.5 million in 1903 for the erection of the Peace Palace at The Hague; and he donated $150,000 for a Pan-American Palace in Washington as a home for the International Bureau of American Republics.
When it became obvious that Carnegie could not give away his entire fortune within his lifetime, he established the Carnegie Corporation of New York in 1911 "to promote the advancement and diffusion of knowledge and understanding" and continue his program of giving.
Carnegie was honored for his philanthropy and support of the arts by initiation as an honorary member of Phi Mu Alpha Sinfonia fraternity on October 14, 1917, at the New England Conservatory of Music in Boston, Massachusetts. The fraternity's mission reflects Carnegie's values by developing young men to share their talents to create harmony in the world.
Death
Carnegie died on August 11, 1919, in Lenox, Massachusetts, at his Shadow Brook estate, of bronchial pneumonia. He had already given away $350,695,653 of his wealth. After his death, his last $30 million was given to foundations, charities, and to pensioners.
He was buried at Sleepy Hollow Cemetery in Sleepy Hollow, New York. The grave site is located on the Arcadia Hebron plot of land at the corner of Summit Avenue and Dingle Road. Carnegie is buried only a few yards away from union organizer Samuel Gompers, another important figure of industry in the Gilded Age.
Controversies
1889: Johnstown Flood
Carnegie was one of more than 50 members of the South Fork Fishing and Hunting Club, which has been blamed for the Johnstown Flood that killed 2,209 people in 1889.
At the suggestion of his friend Benjamin Ruff, Carnegie's partner Henry Clay Frick had formed the exclusive South Fork Fishing and Hunting Club high above Johnstown, Pennsylvania. The sixty-odd club members were the leading business tycoons of Western Pennsylvania and included among their number Frick's best friend, Andrew Mellon, his attorneys Philander Knox and James Hay Reed, as well as Frick's business partner, Carnegie. High above the city, near the small town of South Fork, the South Fork Dam was originally built between 1838 and 1853 by the Commonwealth of Pennsylvania as part of a canal system to be used as a reservoir for a canal basin in Johnstown. With the coming-of-age of railroads superseding canal barge transport, the lake was abandoned by the Commonwealth, sold to the Pennsylvania Railroad, and sold again to private interests, and eventually came to be owned by the South Fork Fishing and Hunting Club in 1881. Prior to the flood, speculators had purchased the abandoned reservoir, made less than well-engineered repairs to the old dam, raised the lake level, built cottages and a clubhouse, and created the South Fork Fishing and Hunting Club. Downstream from the dam sat the city of Johnstown.
The dam was high and long. Between 1881, when the club was opened, and 1889, the dam frequently sprang leaks and was patched, mostly with mud and straw. Additionally, a previous owner removed and sold for scrap the three cast iron discharge pipes that previously allowed a controlled release of water. There had been some speculation as to the dam's integrity, and concerns had been raised by the head of the Cambria Iron Works downstream in Johnstown. Such repair work, a reduction in height, and unusually high snowmelt and heavy spring rains combined to cause the dam to give way on May 31, 1889, resulting in twenty million tons of water sweeping down the valley as the Johnstown Flood. When word of the dam's failure was telegraphed to Pittsburgh, Frick and other members of the South Fork Fishing and Hunting Club gathered to form the Pittsburgh Relief Committee for assistance to the flood victims as well as determining never to speak publicly about the club or the flood. This strategy was a success, and Knox and Reed were able to fend off all lawsuits that would have placed blame upon the club's members.
Although Cambria Iron and Steel's facilities were heavily damaged by the flood, they returned to full production within a year. After the flood, Carnegie built Johnstown a new library to replace the one built by Cambria's chief legal counsel Cyrus Elder, which was destroyed in the flood. The Carnegie-donated library is now owned by the Johnstown Area Heritage Association and houses the Flood Museum.
1892: Homestead Strike
The Homestead Strike was a bloody labor confrontation lasting 143 days in 1892, one of the most serious in U.S. history. The conflict was centered on Carnegie Steel's main plant in Homestead, Pennsylvania, and grew out of a labor dispute between the Amalgamated Association of Iron and Steel Workers (AA) and the Carnegie Steel Company.
Carnegie left on a trip to Scotland before the unrest peaked. In doing so, Carnegie left mediation of the dispute in the hands of his associate and partner Henry Clay Frick. Frick was well known in industrial circles for maintaining staunch anti-union sentiment. With the collective bargaining agreement between the union and company expiring at the end of June, Frick and the leaders of the local AA union entered into negotiations in February. With the steel industry doing well and prices higher, the AA asked for a wage increase; the AA represented about 800 of the 3,800 workers at the plant. Frick immediately countered with an average 22% wage decrease that would affect nearly half the union's membership and remove a number of positions from the bargaining unit.
The union and company failed to come to an agreement, and management locked the union out. Workers considered the stoppage a "lockout" by management and not a "strike" by workers. As such, the workers would have been well within their rights to protest, and subsequent government action would have been a set of criminal procedures designed to crush what was seen as a pivotal demonstration of the growing labor rights movement, strongly opposed by management. Frick brought in thousands of strikebreakers to work the steel mills and Pinkerton agents to safeguard them.
On July 6, the arrival of a force of 300 Pinkerton agents from New York City and Chicago resulted in a fight in which 10 men — seven strikers and three Pinkertons — were killed and hundreds were injured. Pennsylvania Governor Robert Pattison ordered two brigades of the state militia to the strike site. Then allegedly in response to the fight between the striking workers and the Pinkertons, anarchist Alexander Berkman shot at Frick in an attempted assassination, wounding him. While not directly connected to the strike, Berkman was tied in for the assassination attempt. According to Berkman, "...with the elimination of Frick, responsibility for Homestead conditions would rest with Carnegie." Afterwards, the company successfully resumed operations with non-union immigrant employees in place of the Homestead plant workers, and Carnegie returned to the United States. However, Carnegie's reputation was permanently damaged by the Homestead events.
Theodore Roosevelt
According to David Nasaw, after 1898, when the United States entered a war with Spain, Carnegie increasingly devoted his energy to supporting pacifism. He strongly opposed the war and the subsequent imperialistic American takeover of the Philippines. When Theodore Roosevelt became president in 1901, Carnegie and Roosevelt were in frequent contact. They exchanged letters, communicated through mutual friends such as Secretary of State John Hay, and met in person. Carnegie hoped that Roosevelt would turn the Philippines free, not realizing he was more of an imperialist and believer in warrior virtues than President McKinley had been. He saluted Roosevelt for forcing Germany and Britain to arbitrate their conflict with Venezuela in 1903, and especially for becoming the mediator who negotiated an end to the war between Russia and Japan in 1905. Roosevelt relied on Carnegie for financing his expedition to Africa in 1909. In return he asked the ex-president to mediate the growing conflict between the cousins who ruled Britain and Germany. Roosevelt started to do so but the scheme collapsed when King Edward VII suddenly died. Nasaw argues that Roosevelt systematically deceived and manipulated Carnegie and held the elderly man in contempt. Nasaw quotes a private letter Roosevelt wrote to Whitelaw Reid in 1905: [I have] tried hard to like Carnegie, but it is pretty difficult. There is no type of man for whom I feel a more contemptuous abhorrence than for the one who makes a God of mere money-making and at the same time is always yelling out that kind of utterly stupid condemnation of war which in almost every case springs from a combination of defective physical courage, of unmanly shrinking from pain and effort, and of hopelessly twisted ideals. All the suffering from Spanish war comes far short of the suffering, preventable and non-preventable, among the operators of the Carnegie steel works, and among the small investors, during the time that Carnegie was making his fortune…. It is as noxious folly to denounce war per se as it is to denounce business per se. Unrighteous war is a hideous evil; but I am not at all sure that it is worse evil than business unrighteousness.
Personal life
Family
Carnegie did not want to marry during his mother's lifetime, instead choosing to take care of her in her illness towards the end of her life. After she died in 1886, the 51-year-old Carnegie married Louise Whitfield, who was 21 years his junior. In 1897, the couple had their only child, Margaret, whom they named after Carnegie's mother.
Residences
Carnegie bought Skibo Castle in Scotland, and made his home partly there and partly in his New York mansion located at 2 East 91st Street at Fifth Avenue. The building was completed in late 1902, and he lived there until his death in 1919. His wife Louise continued to live there until her death in 1946. The building has been used since 1976 as the Cooper-Hewitt, Smithsonian Design Museum, part of the Smithsonian Institution. The surrounding neighborhood on Manhattan's Upper East Side has come to be called Carnegie Hill. The mansion was designated as a National Historic Landmark in 1966.
Philosophy
Politics
Carnegie gave "formal allegiance" to the Republican Party, though he was said to be "a violent opponent of some of the most sacred doctrines" of the party.
Andrew Carnegie Dictum
In his final days, Carnegie had pneumonia. Before his death on August 11, 1919, Carnegie had donated $350,695,654 for various causes. The "Andrew Carnegie Dictum" was:
To spend the first third of one's life getting all the education one can.
To spend the next third making all the money one can.
To spend the last third giving it all away for worthwhile causes.
Carnegie was involved in philanthropic causes, but he kept himself away from religious circles. He wanted to be identified by the world as a "positivist". He was highly influenced in public life by John Bright.
On wealth
As early as 1868, at age 33, he drafted a memo to himself. He wrote: "...The amassing of wealth is one of the worst species of idolatry. No idol more debasing than the worship of money." In order to avoid degrading himself, he wrote in the same memo that he would retire at age 35 to pursue the practice of philanthropic giving, for "... the man who dies thus rich dies disgraced." However, he did not begin his philanthropic work in earnest until 1881, at age 46, with the gift of a library to his hometown of Dunfermline, Scotland.
Carnegie wrote "The Gospel of Wealth", an article in which he stated his belief that the rich should use their wealth to help enrich society. In that article, Carnegie also expressed sympathy for the ideas of progressive taxation and an estate tax.
Intellectual influences
Herbert Spencer; evolutionary thought
Carnegie claimed to be a champion of evolutionary thought—particularly the work of Herbert Spencer, even declaring Spencer his teacher.
However, although Carnegie claimed to be a disciple of Spencer, many of his actions went against the ideas he espoused.
Spencerian evolution favored individual rights and opposed government interference. Furthermore, Spencerian evolution held that those unfit to sustain themselves must be allowed to perish. Spencer believed that just as there were many varieties of beetles, respectively modified to existence in a particular place in nature, so too had human society "spontaneously fallen into division of labour". Individuals who survived to this, the latest and highest stage of evolutionary progress, would be "those in whom the power of self-preservation is the greatest—are the select of their generation." Moreover, Spencer perceived governmental authority as borrowed from the people to perform the transitory aims of establishing social cohesion, insurance of rights, and security. Spencerian "survival of the fittest" holds that any provision made to assist the weak, unskilled, poor, and distressed is an imprudent disservice to evolution. Spencer insisted that, for the benefit of collective humanity, people should resist the urge to intervene as severe fate singles out the weak, debauched, and disabled.
Laissez-faire economics
Andrew Carnegie's political and economic focus during the late nineteenth and early twentieth century was the defense of laissez-faire economics. Carnegie emphatically resisted government intrusion in commerce, as well as government-sponsored charities. Carnegie believed the concentration of capital was essential for societal progress and should be encouraged. Carnegie was an ardent supporter of commercial "survival of the fittest" and sought to attain immunity from business challenges by dominating all phases of the steel manufacturing procedure. Carnegie's determination to lower costs included cutting labor expenses as well. In a notably Spencerian manner, Carnegie argued that unions impeded the natural reduction of prices by pushing up costs, which blocked evolutionary progress. Carnegie felt that unions represented the narrow interest of the few while his actions benefited the entire community.
On the surface, Andrew Carnegie appears to be a strict laissez-faire capitalist and follower of Herbert Spencer, often referring to himself as a disciple of Spencer. Indeed, Carnegie, a titan of industry, seems to embody all of the qualities of Spencerian survival of the fittest. The two men enjoyed a mutual respect and maintained a correspondence until Spencer's death in 1903. There are, however, some major discrepancies between Spencer's capitalist evolutionary conceptions and Andrew Carnegie's capitalist practices.
Market concentration
Spencer wrote that in production the advantages of the superior individual are comparatively minor, and thus acceptable, yet the benefit that dominance provides those who control a large segment of production might be hazardous to competition. Spencer feared that an absence of "sympathetic self-restraint" of those with too much power could lead to the ruin of their competitors. He did not think free-market competition necessitated competitive warfare. Furthermore, Spencer argued that individuals with superior resources who deliberately used investment schemes to put competitors out of business were committing acts of "commercial murder". Carnegie built his wealth in the steel industry by maintaining an extensively integrated operating system. Carnegie also bought out some regional competitors, and merged with others, usually maintaining the majority shares in the companies. Over the course of twenty years, Carnegie's steel properties grew to include the Edgar Thomson Steel Works, the Lucy Furnace Works, the Union Iron Mills, the Homestead Works, the Keystone Bridge Works, the Hartman Steel Works, the Frick Coke Company, and the Scotia ore mines among many other industry-related assets.
Herbert Spencer was likewise firmly against government interference in business in the form of regulatory limitations, taxes, and tariffs. Spencer saw tariffs as a form of taxation levied against the majority in service to "the benefit of a small minority of manufacturers and artisans".
Despite Carnegie's personal dedication to Herbert Spencer as a friend, his adherence to Spencer's political and economic ideas is more contentious. In particular, it appears Carnegie either misunderstood or intentionally misrepresented some of Spencer's principal arguments. Spencer remarked upon his first visit to Carnegie's steel mills in Pittsburgh, which Carnegie saw as the manifestation of Spencer's philosophy, "Six months' residence here would justify suicide."
Charitable institutions
On the subject of charity, Andrew Carnegie's actions diverged in the most significant and complex manner from Herbert Spencer's philosophies. In his 1854 essay "Manners and Fashion", Spencer referred to public education as "Old schemes". He went on to declare that public schools and colleges fill the heads of students with inept, useless knowledge and exclude useful knowledge. Spencer stated that he trusted no organization of any kind, "political, religious, literary, philanthropic", and believed that as they expanded in influence so too did their regulations expand. In addition, Spencer thought that as all institutions grow they become ever more corrupted by the influence of power and money. The institution eventually loses its "original spirit, and sinks into a lifeless mechanism". Spencer insisted that all forms of philanthropy that uplift the poor and downtrodden were reckless and incompetent, and that any attempt to prevent "the really salutary sufferings" of the less fortunate would "bequeath to posterity a continually increasing curse". Carnegie, a self-proclaimed devotee of Spencer, testified to Congress on February 5, 1915: "My business is to do as much good in the world as I can; I have retired from all other business."
Charity to enable people to develop
Carnegie held that societal progress relied on individuals who maintained moral obligations to themselves and to society. Furthermore, he believed that charity supplied the means for those who wished to improve themselves to achieve their goals. Carnegie urged other wealthy people to contribute to society in the form of parks, works of art, libraries and other endeavors that improve the community and contribute to the "lasting good". He also held a strong opinion against inherited wealth, believing that the sons of prosperous businesspersons were rarely as talented as their fathers and that, by leaving large sums of money to their children, wealthy business leaders were wasting resources that could be used to benefit society. Most notably, Carnegie believed that the future leaders of society would rise from the ranks of the poor, a conviction rooted in his own rise from the bottom. He believed the poor possessed an advantage over the wealthy because they received greater attention from their parents and were taught a better work ethic.
Religion and worldview
Carnegie and his family belonged to the Presbyterian Church in the United States of America, also known informally as the Northern Presbyterian Church. In his early life Carnegie was skeptical of Calvinism, and religion as a whole, but reconciled with it later in his life. In his autobiography, Carnegie describes his family as moderate Presbyterian believers, writing that "there was not one orthodox Presbyterian" in his family; various members of his family had somewhat distanced themselves from Calvinism, some of them leaning more towards Swedenborgianism. During his childhood, his family engaged in vigorous theological and political disputes. His mother avoided the topic of religion. His father left the Presbyterian church after a sermon on infant damnation but, according to Carnegie, remained very religious on his own.
Witnessing sectarianism and strife in 19th century Scotland regarding religion and philosophy, Carnegie kept his distance from organized religion and theism. Carnegie instead preferred to see things through naturalistic and scientific terms stating, "Not only had I got rid of the theology and the supernatural, but I had found the truth of evolution."
Later in life, Carnegie's firm opposition to religion softened. For many years he was a member of Madison Avenue Presbyterian Church, pastored from 1905 to 1926 by Social Gospel exponent Henry Sloane Coffin, while his wife and daughter belonged to the Brick Presbyterian Church. He also prepared (but did not deliver) an address in which he professed a belief in "an Infinite and Eternal Energy from which all things proceed". Records exist of a short period of correspondence around 1912–1913 between Carnegie and 'Abdu'l-Bahá, the eldest son of Bahá'u'lláh, founder of the Baháʼí Faith. In these letters, one of which was published in The New York Times in full text, Carnegie is extolled as a "lover of the world of humanity and one of the founders of Universal Peace".
World peace
Influenced by his "favorite living hero in public life" John Bright, Carnegie started his efforts in pursuit of world peace at a young age, and supported causes that opposed military intervention. His motto, "All is well since all grows better", served not only as a good rationalization of his successful business career, but also his view of international relations.
Despite his efforts towards international peace, Carnegie faced many dilemmas on his quest. These dilemmas are often regarded as conflicts between his view on international relations and his other loyalties. Throughout the 1880s and 1890s, for example, Carnegie allowed his steel works to fill large orders of armor plate for the building of an enlarged and modernized United States Navy, but he opposed American overseas expansion.
Despite that, Carnegie served as a major donor for the newly established Permanent Court of Arbitration's Peace Palace, the brainchild of Russian Tsar Nicholas II.
His largest and, in the long run, most influential peace organization was the Carnegie Endowment for International Peace, formed in 1910 with a $10 million endowment. In 1913, at the dedication of the Peace Palace in The Hague, Carnegie predicted that the end of war was as certain to come, and come soon, as day follows night.
In 1914, on the eve of the First World War, Carnegie founded the Church Peace Union (CPU), a group of leaders in religion, academia, and politics. Through the CPU, Carnegie hoped to mobilize the world's churches, religious organizations, and other spiritual and moral resources to join in promoting moral leadership to put an end to war forever. For its inaugural international event, the CPU sponsored a conference to be held on August 1, 1914, on the shores of Lake Constance in southern Germany. As the delegates made their way to the conference by train, Germany was invading Belgium.
Despite its inauspicious beginning, the CPU thrived. Today its focus is on ethics, and it is known as the Carnegie Council for Ethics in International Affairs, an independent, nonpartisan, nonprofit organization, whose mission is to be the voice for ethics in international affairs.
The outbreak of the First World War was clearly a shock to Carnegie and his optimistic view of world peace. Although his promotion of anti-imperialism and world peace had failed, and the Carnegie Endowment had not fulfilled his expectations, his beliefs and ideas on international relations helped lay the foundation of the League of Nations after his death, which advanced the cause of world peace to a new level.
United States colonial expansion
On the matter of American colonial expansion, Carnegie had always thought it an unwise course for the United States. He did not oppose the annexation of the Hawaiian Islands or Puerto Rico, but he opposed the annexation of the Philippines. Carnegie believed that it involved a denial of the fundamental democratic principle, and he urged William McKinley to withdraw American troops and allow the Filipinos to live in independence. This stance strongly impressed other American anti-imperialists, who soon elected him vice-president of the Anti-Imperialist League.
After he sold his steel company in 1901, Carnegie was able to become fully involved in the peace cause, both financially and personally. He gave away much of his fortune to various peacekeeping agencies in order to keep them growing. When a friend, the British writer William T. Stead, asked him to create a new organization for the goal of a peace and arbitration society, his reply made clear that money alone could not secure peace.
Carnegie believed that it is the effort and will of the people that maintain peace in international relations; money is merely a push for the act. If world peace depended solely on financial support, it would seem not a goal but an act of pity.
Like Stead, he believed that the United States and the British Empire would merge into one nation, telling him "We are heading straight to the Re-United States". Carnegie believed that the combined country's power would maintain world peace and disarmament. The creation of the Carnegie Endowment for International Peace in 1910 was regarded as a milestone on the road to the ultimate goal of the abolition of war. Beyond a gift of $10 million for peace promotion, Carnegie also encouraged the "scientific" investigation of the various causes of war, and the adoption of judicial methods that would eventually eliminate them. He believed that the Endowment existed to promote information on the nations' rights and responsibilities under existing international law and to encourage other conferences to codify this law.
Legacy and honors
In 1899 Andrew Carnegie was awarded American Library Association Honorary Membership.
Carnegie received an honorary Doctor of Laws (LLD) from the University of Glasgow in June 1901, and received the Freedom of the City of Glasgow "in recognition of his munificence" later the same year.
In July 1902 he received the Freedom of the City of St Andrews, "in testimony of his great zeal for the welfare of his fellow-men on both sides of the Atlantic", and in October 1902 the Freedom of the City of Perth, "in testimony of his high personal worth and beneficial influence, and in recognition of widespread benefactions bestowed on this and other lands, and especially in gratitude for the endowment granted by him for the promotion of University education in Scotland", as well as the Freedom of the City of Dundee. Also in 1902, he was elected a member of the American Philosophical Society.
He received an honorary Doctor of Laws (LLD) from the University of Aberdeen in 1906. In 1910, he received the Freedom of the City of Belfast and was also made a Commander of the National Order of the Legion of Honour by the French government. He was appointed Knight Grand Cross of the Order of Orange-Nassau by Queen Wilhelmina of the Netherlands on August 25, 1913, and on July 1, 1914, he received an honorary doctorate from the University of Groningen in the Netherlands.
The dinosaur Diplodocus carnegiei (Hatcher) was named for Carnegie after he sponsored the expedition that discovered its remains in the Morrison Formation (Jurassic) of Utah. Carnegie was so proud of "Dippy" that he had casts made of the bones and donated plaster replicas of the whole skeleton to several museums in Europe and South America. The original fossil skeleton is assembled and displayed in the Hall of Dinosaurs at the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania.
After the Spanish–American War, Carnegie offered to donate $20 million to the Philippines so they could buy their independence.
Carnegie, Pennsylvania, and Carnegie, Oklahoma, were named in his honor.
The saguaro cactus's scientific name, Carnegiea gigantea, honors him.
The Carnegie Medal for the best children's literature published in the UK was established in his name.
The Carnegie Faculty of Sport and Education, at Leeds Beckett University, UK, is named after him.
The concert halls in Dunfermline and New York are named after him.
At the height of his career, Carnegie was the second-richest person in the world, behind only John D. Rockefeller of Standard Oil.
Carnegie Mellon University in Pittsburgh was named after Carnegie, who founded the institution as the Carnegie Technical Schools.
Lauder College (named after his uncle George Lauder Sr.) in the Halbeath area of Dunfermline was renamed Carnegie College in 2007.
A street in Belgrade (Serbia), next to the Belgrade University Library which is one of the Carnegie libraries, is named in his honor.
An American high school, Carnegie Vanguard High School in Houston, Texas, is named after him.
Carnegie was awarded the Freedom of the Burgh of Kilmarnock in Scotland in 1903, prior to laying the foundation stone of Loanhead Public School.
Benefactions
According to biographer Burton J. Hendrick:
His benefactions amounted to $350,000,000—for he gave away not only his annual income of something more than $12,500,000, but most of the principal as well. Of this sum, $62,000,000 was allotted to the British Empire and $288,000,000 to the United States, for Carnegie, in the main, confined his benefactions to the English-speaking nations. His largest gifts were $125,000,000 to the Carnegie Corporation of New York (this same body also became his residuary legatee), $60,000,000 to public library buildings, $20,000,000 to colleges (usually the smaller ones), $6,000,000 to church organs, $29,000,000 to the Carnegie Foundation for the Advancement of Teaching, $22,000,000 to the Carnegie Institute of Pittsburgh, $22,000,000 to the Carnegie Institution of Washington, $10,000,000 to Hero Funds, $10,000,000 to the Endowment for International Peace, $10,000,000 to the Scottish Universities Trust, $10,000,000 to the United Kingdom Trust, and $3,750,000 to the Dunfermline Trust.
Hendrick argues that:
These gifts fairly picture Carnegie's conception of the best ways to improve the status of the common man. They represent all his personal tastes—his love of books, art, music, and nature—and the reforms which he regarded as most essential to human progress—scientific research, education both literary and technical, and, above all, the abolition of war. The expenditure the public most associates with Carnegie's name is that for public libraries. Carnegie himself frequently said that his favorite benefaction was the Hero Fund—among other reasons, because "it came up my ain back"; but probably deep in his own mind his library gifts took precedence over all others in importance. There was only one genuine remedy, he believed, for the ills that beset the human race, and that was enlightenment. "Let there be light" was the motto that, in the early days, he insisted on placing in all his library buildings. As to the greatest endowment of all, the Carnegie Corporation, that was merely Andrew Carnegie in permanently organized form; it was established to carry on, after Carnegie's death, the work to which he had given personal attention in his own lifetime.
Research sources
Carnegie's personal papers are at the Library of Congress Manuscript Division.
The Carnegie Collections of the Columbia University Rare Book and Manuscript Library consist of the archives of the following organizations founded by Carnegie: the Carnegie Corporation of New York (CCNY); the Carnegie Endowment for International Peace (CEIP); the Carnegie Foundation for the Advancement of Teaching (CFAT); and the Carnegie Council on Ethics and International Affairs (CCEIA). These collections deal primarily with Carnegie philanthropy and have very little personal material related to Carnegie. Carnegie Mellon University and the Carnegie Library of Pittsburgh jointly administer the Andrew Carnegie Collection of digitized archives on Carnegie's life.
Moral appraisal
By the standards of 19th-century tycoons, Carnegie was not a particularly ruthless man, but rather a humanitarian with enough acquisitiveness to engage in the ruthless pursuit of money. "Maybe with the giving away of his money," commented biographer Joseph Wall, "he would justify what he had done to get that money."
To some, Carnegie represents the idea of the American dream. He was an immigrant from Scotland who came to America and became successful. He is known not only for his successes but also for his extensive philanthropic work, which benefited charities and also promoted democracy and independence in colonized countries.
Works
Carnegie was a frequent contributor to periodicals on labor issues.
Books
Our Coaching Trip, Brighton to Inverness (1882).
An American Four-in-hand in Britain (1883).
Round the World. New York: Charles Scribner's Sons (1884).
An American Four-in-Hand in Britain. New York: Charles Scribner's Sons (1886).
Triumphant Democracy, or, Fifty Years' March of the Republic. New York: Charles Scribner's Sons (1886).
The Gospel of Wealth (1889).
The Gospel of Wealth and Other Timely Essays. New York: The Century Co. (1901).
The Empire of Business (1902).
Audiobook via LibriVox.
The Secret of Business is the Management of Men (1903).
James Watt (Famous Scots Series). New York: Doubleday, Page and Co. (1905).
Problems of Today: Wealth–Labor–Socialism. New York: Doubleday, Page and Co. (1907).
Autobiography of Andrew Carnegie (posthumous). Boston: Houghton Mifflin (1920).
Audiobook via LibriVox.
Articles
"Wealth". North American Review, vol. 148, no. 381 (Jun. 1889), pp. 653–64. Original version of The Gospel of Wealth.
"The Bugaboo of Trusts". North American Review, vol. 148, no. 377 (Feb. 1889).
Pamphlets
The Bugaboo of Trusts. Reprinted from North American Review, vol. 148, no. 377 (Feb. 1889).
Public speaking
Industrial Peace: Address at the Annual Dinner of the National Civic Federation, New York City, December 15, 1904. [n.c.]: National Civic Federation (1904).
Edwin M. Stanton: An Address by Andrew Carnegie on Stanton Memorial Day at Kenyon College. New York: Doubleday, Page and Co. (1906).
The Negro in America: An Address Delivered Before the Philosophical Institution of Edinburgh, October 16, 1907. Inverness: R. Carruthers & Sons, Courier Office (1907).
Speech at the Annual Meeting of the Peace Society, at the Guildhall, London, EC, May 24, 1910. London: The Peace Society (1910).
A League of Peace: A Rectorial Address Delivered to the Students in the University of St. Andrews, October 17, 1905. New York: New York Peace Society (1911).
Collected works
Wall, Joseph Frazier, ed. The Andrew Carnegie Reader (1992).
See also
Carnegie (disambiguation)
Commemoration of the American Civil War on postage stamps
History of public library advocacy
List of Carnegie libraries in the United States
List of peace activists
List of richest Americans in history
List of colleges and universities named after people
Bibliography
Ernsberger, Richard Jr. (October 2018). "A Fool for Peace". American History, Vol. 53, Issue 4. Interview with Nasaw.
Wall, Joseph Frazier (1989). Andrew Carnegie. Along with Nasaw, the most detailed scholarly biography.
Further reading
Bostaph, Samuel (2015). Andrew Carnegie: An Economic Biography. Lanham, MD: Lexington Books. 125 pp.
Ernsberger, Richard Jr. (February 2015). "Robber Baron Turned Robin Hood". American History. 49#6 pp. 32–41, cover story.
Farrah, Margaret Ann. Andrew Carnegie: A Psychohistorical Sketch (PhD dissertation, Carnegie Mellon University; ProQuest Dissertations Publishing, 1982. 8209384).
Goldin, Milton (1997). "Andrew Carnegie and the Robber Baron Myth". In: Myth America: A Historical Anthology, Volume II. Gerster, Patrick, and Cords, Nicholas, eds. St. James, NY: Brandywine Press.
Harvey, Charles, et al. Andrew Carnegie and the foundations of contemporary entrepreneurial philanthropy. Business History (2011) 53#3 pp. 425–450.
Hendrick, Burton Jesse (1933). The Life of Andrew Carnegie (2 vols.).
Josephson, Matthew (1938). The Robber Barons: The Great American Capitalists, 1861–1901.
Krass, Peter (2002). Carnegie. Wiley. Scholarly biography.
Lester, Robert M. (1941). Forty Years of Carnegie Giving: A Summary of the Benefactions of Andrew Carnegie and of the Work of the Philanthropic Trusts Which He Created. New York: Charles Scribner's Sons.
Livesay, Harold C. (1999). Andrew Carnegie and the Rise of Big Business, 2nd ed. Short biography by a scholar.
McCormick, Blaine, and Burton W. Folsom Jr. "Survey of Business Historians on America's Greatest Entrepreneurs." Business History Review (2003), 77#4, pp. 703–716. Carnegie ranks #3 behind Ford and Rockefeller.
Patterson, David S. "Andrew Carnegie's Quest for World Peace." Proceedings of the American Philosophical Society 114#5 (1970): 371–383.
Rees, Jonathan (1997). "Homestead in Context: Andrew Carnegie and the Decline of the Amalgamated Association of Iron and Steel Workers." Pennsylvania History 64(4): 509–533.
Skrabec, Quentin R. Jr. Henry Clay Frick: The Life of the Perfect Capitalist (McFarland, 2010).
Skrabec, Quentin R. Jr. The Carnegie Boys: The Lieutenants of Andrew Carnegie that Changed America (McFarland, 2012).
VanSlyck, Abigail A. (1991). "'The Utmost Amount of Effective Accommodation': Andrew Carnegie and the Reform of the American Library." Journal of the Society of Architectural Historians 50(4): 359–383.
Zimmerman, Jonathan. "Simplified Spelling and the Cult of Efficiency in the 'Progressiv' Era." Journal of the Gilded Age & Progressive Era (2010) 9#3 pp. 365–394.
External links
Documentary: "Andrew Carnegie: Rags to Riches, Power to Peace"
Carnegie Birthplace Museum website
Booknotes interview with Peter Krass on Carnegie, November 24, 2002.
Marguerite Martyn, "Andrew Carnegie on Prosperity, Income Tax, and the Blessings of Poverty," May 1, 1914, City Desk Publishing
1835 births
1919 deaths
20th-century American businesspeople
Activists from Massachusetts
American billionaires
American Civil War industrialists
American company founders
American industrialists
American librarianship and human rights
20th-century American philanthropists
American railway entrepreneurs
American spiritualists
American steel industry businesspeople
Bessemer Gold Medal
Lauder Greenway family
Burials at Sleepy Hollow Cemetery
Businesspeople from Pittsburgh
Carnegie Endowment for International Peace
Carnegie family
Carnegie Mellon University people
Deaths from pneumonia in Massachusetts
Deaths from bronchopneumonia
English-language spelling reform advocates
Gilded Age
Hall of Fame for Great Americans inductees
Massachusetts Republicans
People associated with the University of Birmingham
People from Dunfermline
Naturalized citizens of the United States
People from Lenox, Massachusetts
People of the American Industrial Revolution
Presidents of the Saint Andrew's Society of the State of New York
Progressive Era in the United States
Rectors of the University of Aberdeen
Rectors of the University of St Andrews
Scottish billionaires
Scottish emigrants to the United States
Scottish spiritualists
U.S. Steel people
University and college founders
| Andrew Carnegie | ["Chemistry"] | 13,349 | ["Bessemer Gold Medal", "Chemical engineering awards"] |