| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
1,941 | https://en.wikipedia.org/wiki/Aeon | The word aeon, also spelled eon (in American and Australian English), originally meant "life", "vital force" or "being", "generation" or "a period of time", though it tended to be translated as "age" in the sense of "ages", "forever", "timeless" or "for eternity". It is a Latin transliteration of the ancient Greek word aiṓn (αἰών), from an archaic form meaning "century". In Greek, it literally refers to the timespan of one hundred years. A cognate Latin word, aevum, for "age" is present in words such as eternal, longevity and mediaeval.
Although the term aeon may be used in reference to a period of a billion years (especially in geology, cosmology and astronomy), its more common usage is for any long, indefinite period. Aeon can also refer to the four aeons of the geologic time scale that make up the Earth's history: the Hadean, Archean, Proterozoic, and the current aeon, the Phanerozoic.
Astronomy and cosmology
In astronomy, an aeon is defined as a billion years (10⁹ years, abbreviated AE).
Roger Penrose uses the word aeon to describe the period between successive and cyclic Big Bangs within the context of conformal cyclic cosmology.
Philosophy and mysticism
In Buddhism, an "aeon" or kalpa (Sanskrit: कल्प) is often said to be 1,334,240,000 years, the life cycle of the world. Yet, these numbers are symbolic, not literal.
Christianity's idea of "eternal life" comes from the Greek word for life, zōē (ζωή), and a form of aiṓn (αἰών), which could mean life in the next aeon, the Kingdom of God, or Heaven, just as much as immortality, as in John 3:16.
According to Christian universalism, the Greek New Testament scriptures use the word aiṓn (αἰών) to mean a long period and the word aiṓnios (αἰώνιος) to mean "during a long period"; thus, there was a time before the aeons, and the aeonian period is finite. After each person's mortal life ends, they are judged worthy of aeonian life or aeonian punishment. That is, after the period of the aeons, all punishment will cease and death is overcome, and then God becomes the all in each one (1 Cor 15:28). This contrasts with the conventional Christian belief in eternal life and eternal punishment.
Occultists of the Thelema and Ordo Templi Orientis (English: "Order of the Temple of the East") traditions sometimes speak of a "magical Aeon" that may last for perhaps as little as 2,000 years.
Gnosticism
In many Gnostic systems, the various emanations of God, who is also known by such names as the One, the Monad, Aion teleos ("The Broadest Aeon"), Bythos ("depth or profundity"), Proarkhe ("before the beginning"), Arkhe ("the beginning"), Sophia ("wisdom"), and Christos ("the Anointed One"), are called Aeons. In the different systems these emanations are differently named, classified, and described, but the emanation theory itself is common to all forms of Gnosticism.
In the Basilidian Gnosis they are called sonships; according to Marcus, they are numbers and sounds; in Valentinianism they form male/female pairs called "syzygies".
See also
Aion (deity)
Kalpa (aeon)
Saeculum – comparable Latin concept
Aeon (company)
References
New Testament Greek words and phrases
Time
Units of time
Gnosticism | Aeon | [
"Physics",
"Mathematics"
] | 804 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Units of measurement"
] |
1,962 | https://en.wikipedia.org/wiki/Apparent%20magnitude | Apparent magnitude (m) is a measure of the brightness of a star, astronomical object, or other celestial object such as an artificial satellite. Its value depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer.
Unless stated otherwise, the word magnitude in astronomy usually refers to a celestial object's apparent magnitude. The magnitude scale likely dates to before the ancient Roman astronomer Claudius Ptolemy, whose star catalog popularized the system by listing stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined to closely match this historical system by Norman Pogson in 1856.
The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of the fifth root of 100 (100^(1/5)), or about 2.512. For example, a magnitude 2.0 star is 2.512 times as bright as a magnitude 3.0 star, 6.31 times as bright as a magnitude 4.0 star, and 100 times as bright as a magnitude 7.0 star.
The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at −26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5.
The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system. Measurement in the V-band may be referred to as the apparent visual magnitude.
Absolute magnitude is a related quantity which measures the luminosity that a celestial object emits, rather than its apparent brightness when observed, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs (about 32.6 light-years). Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, references to "magnitude" are understood to mean apparent magnitude.
Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution.
Apparent magnitude is technically a measure of illuminance, which can also be measured in photometric units such as lux.
History
The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes. The brightest stars in the night sky were said to be of first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), which is the limit of human visual perception (without the aid of a telescope). Each grade of magnitude was considered twice the brightness of the following grade (a logarithmic scale), although that ratio was subjective as no photodetectors existed. This rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is generally believed to have originated with Hipparchus. This cannot be proved or disproved because Hipparchus's original star catalogue is lost. The only preserved text by Hipparchus himself (a commentary to Aratus) clearly documents that he did not have a system to describe brightness with numbers: He always uses terms like "big" or "small", "bright" or "faint" or even descriptions such as "visible at full moon".
In 1856, Norman Robert Pogson formalized the system by defining a first magnitude star as a star that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today. This implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's Ratio. The 1884 Harvard Photometry and 1886 Potsdamer Durchmusterung star catalogs popularized Pogson's ratio, and eventually it became a de facto standard in modern astronomy to describe differences in brightness.
Defining and calibrating what magnitude 0.0 means is difficult, and different types of measurements which detect different kinds of light (possibly by using filters) have different zero points. Pogson's original 1856 paper defined magnitude 6.0 to be the faintest star the unaided eye can see, but the true limit for faintest possible visible star varies depending on the atmosphere and how high a star is in the sky. The Harvard Photometry used an average of 100 stars close to Polaris to define magnitude 5.0. Later, the Johnson UBV photometric system defined multiple types of photometric measurements with different filters, where magnitude 0.0 for each filter is defined to be the average of six stars with the same spectral type as Vega. This was done so the color index of these stars would be 0. Although this system is often called "Vega normalized", Vega is slightly dimmer than the six-star average used to define magnitude 0.0, meaning Vega's magnitude is normalized to 0.03 by definition.
With the modern magnitude systems, brightness is described using Pogson's ratio. In practice, magnitude numbers rarely go above 30 before stars become too faint to detect. While Vega is close to magnitude 0, there are four brighter stars in the night sky at visible wavelengths (and more at infrared wavelengths) as well as the bright planets Venus, Mars, and Jupiter, and since brighter means smaller magnitude, these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has a magnitude of −1.4 in the visible. Negative magnitudes for other very bright astronomical objects can be found in the table below.
Astronomers have developed other photometric zero point systems as alternatives to Vega normalized systems. The most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band. However, the AB magnitude system is defined assuming an idealized detector measuring only one wavelength of light, while real detectors accept energy from a range of wavelengths.
Measurement
Precision measurement of magnitude (photometry) requires calibration of the photographic or (usually) electronic detection apparatus. This generally involves contemporaneous observation, under identical conditions, of standard stars whose magnitude using that spectral filter is accurately known. Moreover, as the amount of light actually received by a telescope is reduced due to transmission through the Earth's atmosphere, the airmasses of the target and calibration stars must be taken into account. Typically one would observe a few different stars of known magnitude which are sufficiently similar. Calibrator stars close in the sky to the target are favoured (to avoid large differences in the atmospheric paths). If those stars have somewhat different zenith angles (altitudes) then a correction factor as a function of airmass can be derived and applied to the airmass at the target's position. Such calibration obtains the brightness as would be observed from above the atmosphere, where apparent magnitude is defined.
The apparent magnitude scale in astronomy reflects the received power of stars and not their amplitude. Newcomers should consider using the relative brightness measure in astrophotography to adjust exposure times between stars. Apparent magnitude also integrates over the entire object, regardless of its focus, and this needs to be taken into account when scaling exposure times for objects with significant apparent size, like the Sun, Moon and planets. For example, directly scaling the exposure time from the Moon to the Sun works because they are approximately the same size in the sky. However, scaling the exposure from the Moon to Saturn would result in an overexposure if the image of Saturn takes up a smaller area on your sensor than the Moon did (at the same magnification, or more generally, f/#).
Calculations
The dimmer an object appears, the higher the numerical value given to its magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the magnitude m, in the spectral band x, would be given by
m_x = −5 log_100(F_x / F_x,0),
which is more commonly expressed in terms of common (base-10) logarithms as
m_x = −2.5 log_10(F_x / F_x,0),
where F_x is the observed irradiance using spectral filter x, and F_x,0 is the reference flux (zero-point) for that photometric filter. Since an increase of 5 magnitudes corresponds to a decrease in brightness by a factor of exactly 100, each magnitude increase implies a decrease in brightness by the factor 100^(1/5) ≈ 2.512 (Pogson's ratio). Inverting the above formula, a magnitude difference m_1 − m_2 = Δm implies a brightness factor of
F_2 / F_1 = 100^(Δm/5) = 10^(0.4 Δm).
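A minimal Python sketch of these relations (the flux values below are placeholders, not measurements in any particular photometric system):

```python
import math

def magnitude_from_flux(flux, flux_zero_point):
    """Apparent magnitude in a band, from the observed flux and the band's zero-point flux."""
    return -2.5 * math.log10(flux / flux_zero_point)

def brightness_ratio(m1, m2):
    """Factor by which a magnitude-m1 source is brighter than a magnitude-m2 source."""
    return 100 ** ((m2 - m1) / 5)

# A source delivering 1% of the zero-point flux is exactly 5 magnitudes fainter:
print(magnitude_from_flux(0.01, 1.0))   # 5.0
# A one-magnitude difference corresponds to Pogson's ratio:
print(brightness_ratio(2.0, 3.0))       # ≈ 2.512
```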
Example: Sun and Moon
What is the ratio in brightness between the Sun and the full Moon?
The apparent magnitude of the Sun is −26.832 (brighter), and the mean magnitude of the full moon is −12.74 (dimmer).
Difference in magnitude: Δm = m_Moon − m_Sun = −12.74 − (−26.832) = 14.09.
Brightness factor: F_Sun / F_Moon = 100^(Δm/5) = 100^(14.09/5) ≈ 4.3×10⁵.
The Sun appears to be approximately 400,000 times as bright as the full Moon.
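The same arithmetic in Python, using the magnitudes quoted above:

```python
m_sun = -26.832    # apparent magnitude of the Sun
m_moon = -12.74    # mean apparent magnitude of the full Moon

delta_m = m_moon - m_sun        # difference in magnitude
ratio = 100 ** (delta_m / 5)    # brightness factor

print(round(delta_m, 2))        # 14.09
print(f"{ratio:.2e}")           # ≈ 4.3e+05, i.e. roughly 400,000
```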
Magnitude addition
Sometimes one might wish to add brightness. For example, photometry on closely separated double stars may only be able to produce a measurement of their combined light output. To find the combined magnitude of that double star knowing only the magnitudes of the individual components, this can be done by adding the brightness (in linear units) corresponding to each magnitude.
Solving for m_f yields
m_f = −2.5 log_10(10^(−0.4 m_1) + 10^(−0.4 m_2)),
where m_f is the resulting magnitude after adding the brightnesses referred to by m_1 and m_2.
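A short Python sketch of this magnitude addition:

```python
import math

def combined_magnitude(m1, m2):
    """Magnitude of two unresolved sources, obtained by summing their linear brightnesses."""
    return -2.5 * math.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

# Two equal components of magnitude 1.0 together appear about 0.75 mag brighter:
print(round(combined_magnitude(1.0, 1.0), 2))   # ≈ 0.25
```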
Apparent bolometric magnitude
While magnitude generally refers to a measurement in a particular filter band corresponding to some range of wavelengths, the apparent or absolute bolometric magnitude (m_bol) is a measure of an object's apparent or absolute brightness integrated over all wavelengths of the electromagnetic spectrum (also known as the object's irradiance or power, respectively). The zero point of the apparent bolometric magnitude scale is based on the definition that an apparent bolometric magnitude of 0 mag is equivalent to a received irradiance of 2.518×10⁻⁸ watts per square metre (W·m⁻²).
Absolute magnitude
While apparent magnitude is a measure of the brightness of an object as seen by a particular observer, absolute magnitude is a measure of the intrinsic brightness of an object. Flux decreases with distance according to an inverse-square law, so the apparent magnitude of a star depends on both its absolute brightness and its distance (and any extinction). For example, a star at one distance will have the same apparent magnitude as a star four times as bright at twice that distance. In contrast, the intrinsic brightness of an astronomical object does not depend on the distance of the observer or any extinction.
The absolute magnitude M of a star or astronomical object is defined as the apparent magnitude it would have as seen from a distance of 10 parsecs. The absolute magnitude of the Sun is 4.83 in the V band (visual), 4.68 in the Gaia satellite's G band (green) and 5.48 in the B band (blue).
In the case of a planet or asteroid, the absolute magnitude rather means the apparent magnitude it would have if it were one astronomical unit (AU) from both the observer and the Sun, and fully illuminated at maximum opposition (a configuration that is only theoretically achievable, with the observer situated on the surface of the Sun).
Standard reference values
The magnitude scale is a reverse logarithmic scale. A common misconception is that the logarithmic nature of the scale is because the human eye itself has a logarithmic response. In Pogson's time this was thought to be true (see Weber–Fechner law), but it is now believed that the response is a power law (see Stevens's power law).
Magnitude is complicated by the fact that light is not monochromatic. The sensitivity of a light detector varies according to the wavelength of the light, and the way it varies depends on the type of light detector. For this reason, it is necessary to specify how the magnitude is measured for the value to be meaningful. For this purpose the UBV system is widely used, in which the magnitude is measured in three different wavelength bands: U (centred at about 350 nm, in the near ultraviolet), B (about 435 nm, in the blue region) and V (about 555 nm, in the middle of the human visual range in daylight). The V band was chosen for spectral purposes and gives magnitudes closely corresponding to those seen by the human eye. When an apparent magnitude is discussed without further qualification, the V magnitude is generally understood.
Because cooler stars, such as red giants and red dwarfs, emit little energy in the blue and UV regions of the spectrum, their power is often under-represented by the UBV scale. Indeed, some L and T class stars have an estimated magnitude of well over 100, because they emit extremely little visible light, but are strongest in infrared.
Measures of magnitude need cautious treatment and it is extremely important to measure like with like. On early 20th century and older orthochromatic (blue-sensitive) photographic film, the relative brightnesses of the blue supergiant Rigel and the red supergiant irregular variable star Betelgeuse (at maximum) are reversed compared to what human eyes perceive, because this archaic film is more sensitive to blue light than it is to red light. Magnitudes obtained from this method are known as photographic magnitudes, and are now considered obsolete.
For objects within the Milky Way with a given absolute magnitude, 5 is added to the apparent magnitude for every tenfold increase in the distance to the object. For objects at very great distances (far beyond the Milky Way), this relationship must be adjusted for redshifts and for non-Euclidean distance measures due to general relativity.
For planets and other Solar System bodies, the apparent magnitude is derived from its phase curve and the distances to the Sun and observer.
List of apparent magnitudes
Some of the listed magnitudes are approximate. Telescope sensitivity depends on observing time, optical bandpass, and interfering light from scattering and airglow.
See also
Angular diameter
Distance modulus
List of nearest bright stars
List of nearest stars
Luminosity
Surface brightness
References
External links
Observational astronomy
Logarithmic scales of measurement | Apparent magnitude | [
"Physics",
"Astronomy",
"Mathematics"
] | 3,043 | [
"Physical quantities",
"Quantity",
"Observational astronomy",
"Logarithmic scales of measurement",
"Astronomical sub-disciplines"
] |
1,963 | https://en.wikipedia.org/wiki/Absolute%20magnitude | In astronomy, absolute magnitude (M) is a measure of the luminosity of a celestial object on an inverse logarithmic astronomical magnitude scale; the more luminous (intrinsically bright) an object, the lower its magnitude number. An object's absolute magnitude is defined to be equal to the apparent magnitude that the object would have if it were viewed from a distance of exactly 10 parsecs (32.6 light-years), without extinction (or dimming) of its light due to absorption by interstellar matter and cosmic dust. By hypothetically placing all objects at a standard reference distance from the observer, their luminosities can be directly compared among each other on a magnitude scale. For Solar System bodies that shine in reflected light, a different definition of absolute magnitude (H) is used, based on a standard reference distance of one astronomical unit.
Absolute magnitudes of stars generally range from approximately −10 to +20. The absolute magnitudes of galaxies can be much lower (brighter).
The more luminous an object, the smaller the numerical value of its absolute magnitude. A difference of 5 magnitudes between the absolute magnitudes of two objects corresponds to a ratio of 100 in their luminosities, and a difference of n magnitudes in absolute magnitude corresponds to a luminosity ratio of 100^(n/5). For example, a star of absolute magnitude MV = 3.0 would be 100 times as luminous as a star of absolute magnitude MV = 8.0 as measured in the V filter band. The Sun has absolute magnitude MV = +4.83. Highly luminous objects can have negative absolute magnitudes: for example, the Milky Way galaxy has an absolute B magnitude of about −20.8.
As with all astronomical magnitudes, the absolute magnitude can be specified for different wavelength ranges corresponding to specified filter bands or passbands; for stars a commonly quoted absolute magnitude is the absolute visual magnitude, which uses the visual (V) band of the spectrum (in the UBV photometric system). Absolute magnitudes are denoted by a capital M, with a subscript representing the filter band used for measurement, such as MV for absolute magnitude in the V band.
An object's absolute bolometric magnitude (Mbol) represents its total luminosity over all wavelengths, rather than in a single filter band, as expressed on a logarithmic magnitude scale. To convert from an absolute magnitude in a specific filter band to absolute bolometric magnitude, a bolometric correction (BC) is applied.
Stars and galaxies
In stellar and galactic astronomy, the standard distance is 10 parsecs (about 32.616 light-years, 308.57 petameters or 308.57 trillion kilometres). A star at 10 parsecs has a parallax of 0.1″ (100 milliarcseconds). Galaxies (and other extended objects) are much larger than 10 parsecs; their light is radiated over an extended patch of sky, and their overall brightness cannot be directly observed from relatively short distances, but the same convention is used. A galaxy's magnitude is defined by measuring all the light radiated over the entire object, treating that integrated brightness as the brightness of a single point-like or star-like source, and computing the magnitude of that point-like source as it would appear if observed at the standard 10 parsecs distance. Consequently, the absolute magnitude of any object equals the apparent magnitude it would have if it were 10 parsecs away.
Some stars visible to the naked eye have such a low absolute magnitude that they would appear bright enough to outshine the planets and cast shadows if they were at 10 parsecs from the Earth. Examples include Rigel (−7.8), Deneb (−8.4), Naos (−6.2), and Betelgeuse (−5.8). For comparison, Sirius has an absolute magnitude of only 1.4, which is still brighter than the Sun, whose absolute visual magnitude is 4.83. The Sun's absolute bolometric magnitude is set arbitrarily, usually at 4.75.
Absolute magnitudes of stars generally range from approximately −10 to +20. The absolute magnitudes of galaxies can be much lower (brighter). For example, the giant elliptical galaxy M87 has an absolute magnitude of −22 (i.e. as bright as about 60,000 stars of magnitude −10). Some active galactic nuclei (quasars like CTA-102) can reach absolute magnitudes in excess of −32, making them the most luminous persistent objects in the observable universe, although these objects can vary in brightness over astronomically short timescales. At the extreme end, the optical afterglow of the gamma ray burst GRB 080319B reached, according to one paper, an absolute r magnitude brighter than −38 for a few tens of seconds.
Apparent magnitude
The Greek astronomer Hipparchus established a numerical scale to describe the brightness of each star appearing in the sky. The brightest stars in the sky were assigned an apparent magnitude m = 1, and the dimmest stars visible to the naked eye are assigned m = 6. The difference between them corresponds to a factor of 100 in brightness. For objects within the immediate neighborhood of the Sun, the absolute magnitude M and apparent magnitude m from any distance d (in parsecs, with 1 pc = 3.2616 light-years) are related by
100^((m − M)/5) = F_10 / F = (d / 10 pc)^2,
where F is the radiant flux measured at distance d (in parsecs), and F_10 the radiant flux measured at distance 10 pc. Using the common logarithm, the equation can be written as
M = m − 5 log_10(d_pc) + 5 = m − 5 (log_10(d_pc) − 1),
where it is assumed that extinction from gas and dust is negligible. Typical extinction rates within the Milky Way galaxy are 1 to 2 magnitudes per kiloparsec, when dark clouds are taken into account.
For objects at very large distances (outside the Milky Way) the luminosity distance (distance defined using luminosity measurements) must be used instead of , because the Euclidean approximation is invalid for distant objects. Instead, general relativity must be taken into account. Moreover, the cosmological redshift complicates the relationship between absolute and apparent magnitude, because the radiation observed was shifted into the red range of the spectrum. To compare the magnitudes of very distant objects with those of local objects, a K correction might have to be applied to the magnitudes of the distant objects.
The absolute magnitude can also be written in terms of the apparent magnitude m and stellar parallax p (in arcseconds):
M = m + 5 (log_10(p) + 1),
or using apparent magnitude m and distance modulus μ:
M = m − μ.
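These three equivalent forms can be sketched in Python as follows (distances in parsecs, parallax in arcseconds):

```python
import math

def absolute_from_distance(m, d_pc):
    """Absolute magnitude from apparent magnitude and distance in parsecs."""
    return m - 5 * (math.log10(d_pc) - 1)

def absolute_from_parallax(m, p_arcsec):
    """Absolute magnitude from apparent magnitude and parallax in arcseconds."""
    return m + 5 * (math.log10(p_arcsec) + 1)

def absolute_from_distance_modulus(m, mu):
    """Absolute magnitude from apparent magnitude and distance modulus."""
    return m - mu

# A star seen at exactly 10 pc has M equal to its apparent magnitude:
print(absolute_from_distance(5.0, 10.0))   # 5.0
```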
Examples
Rigel has a visual magnitude of 0.12 and distance of about 860 light-years: M_V = 0.12 − 5 (log_10(860/3.2616) − 1) ≈ −7.0.
Vega has a parallax of 0.129″, and an apparent magnitude of 0.03: M_V = 0.03 + 5 (log_10(0.129) + 1) ≈ +0.6.
The Black Eye Galaxy has a visual magnitude of 9.36 and a distance modulus of 31.06: M_V = 9.36 − 31.06 = −21.7.
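The three examples can be reproduced numerically; this is a sketch using only the figures quoted above:

```python
import math

LY_PER_PC = 3.2616

# Rigel: m = 0.12, distance ≈ 860 light-years
print(round(0.12 - 5 * (math.log10(860 / LY_PER_PC) - 1), 1))   # ≈ -7.0

# Vega: m = 0.03, parallax = 0.129 arcseconds
print(round(0.03 + 5 * (math.log10(0.129) + 1), 2))             # ≈ 0.58

# Black Eye Galaxy: m = 9.36, distance modulus = 31.06
print(round(9.36 - 31.06, 2))                                    # -21.7
```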
Bolometric magnitude
The absolute bolometric magnitude () takes into account electromagnetic radiation at all wavelengths. It includes those unobserved due to instrumental passband, the Earth's atmospheric absorption, and extinction by interstellar dust. It is defined based on the luminosity of the stars. In the case of stars with few observations, it must be computed assuming an effective temperature.
Classically, the difference in bolometric magnitude is related to the luminosity ratio according to:
M_bol,star − M_bol,Sun = −2.5 log_10(L_star / L_Sun),
which makes by inversion:
L_star / L_Sun = 10^(0.4 (M_bol,Sun − M_bol,star)),
where
L_Sun is the Sun's luminosity (bolometric luminosity)
L_star is the star's luminosity (bolometric luminosity)
M_bol,Sun is the bolometric magnitude of the Sun
M_bol,star is the bolometric magnitude of the star.
In August 2015, the International Astronomical Union passed Resolution B2 defining the zero points of the absolute and apparent bolometric magnitude scales in SI units for power (watts) and irradiance (W/m2), respectively. Although bolometric magnitudes had been used by astronomers for many decades, there had been systematic differences in the absolute magnitude-luminosity scales presented in various astronomical references, and no international standardization. This led to systematic differences in bolometric corrections scales. Combined with incorrect assumed absolute bolometric magnitudes for the Sun, this could lead to systematic errors in estimated stellar luminosities (and other stellar properties, such as radii or ages, which rely on stellar luminosity to be calculated).
Resolution B2 defines an absolute bolometric magnitude scale where M_bol = 0 corresponds to luminosity L_0 = 3.0128×10²⁸ W, with the zero point luminosity set such that the Sun (with nominal luminosity 3.828×10²⁶ W) corresponds to absolute bolometric magnitude M_bol,Sun = 4.74. Placing a radiation source (e.g. star) at the standard distance of 10 parsecs, it follows that the zero point of the apparent bolometric magnitude scale corresponds to irradiance f_0 = 2.518×10⁻⁸ W/m². Using the IAU 2015 scale, the nominal total solar irradiance ("solar constant") measured at 1 astronomical unit (1361 W/m²) corresponds to an apparent bolometric magnitude of the Sun of −26.832.
Following Resolution B2, the relation between a star's absolute bolometric magnitude and its luminosity is no longer directly tied to the Sun's (variable) luminosity:
M_bol = −2.5 log_10(L_star / L_0),
where
L_star is the star's luminosity (bolometric luminosity) in watts
L_0 is the zero point luminosity 3.0128×10²⁸ W
M_bol is the bolometric magnitude of the star
The new IAU absolute magnitude scale permanently disconnects the scale from the variable Sun. However, on this SI power scale, the nominal solar luminosity corresponds closely to M_bol = 4.74, a value that was commonly adopted by astronomers before the 2015 IAU resolution.
The luminosity of the star in watts can be calculated as a function of its absolute bolometric magnitude M_bol as:
L_star = L_0 × 10^(−0.4 M_bol),
using the variables as defined previously.
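A sketch of the IAU 2015 zero-point relations in Python (the numerical constants are the nominal values quoted above):

```python
import math

L0 = 3.0128e28      # IAU zero-point luminosity, in watts
L_SUN = 3.828e26    # nominal solar luminosity, in watts

def bolometric_magnitude(luminosity_watts):
    """Absolute bolometric magnitude on the IAU 2015 scale."""
    return -2.5 * math.log10(luminosity_watts / L0)

def luminosity_from_mbol(m_bol):
    """Inverse relation: luminosity in watts from absolute bolometric magnitude."""
    return L0 * 10 ** (-0.4 * m_bol)

print(round(bolometric_magnitude(L_SUN), 2))   # ≈ 4.74 for the Sun
```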
Solar System bodies (H)
For planets and asteroids, a definition of absolute magnitude that is more meaningful for non-stellar objects is used. The absolute magnitude, commonly called H, is defined as the apparent magnitude that the object would have if it were one astronomical unit (AU) from both the Sun and the observer, and in conditions of ideal solar opposition (an arrangement that is impossible in practice). Because Solar System bodies are illuminated by the Sun, their brightness varies as a function of illumination conditions, described by the phase angle. This relationship is referred to as the phase curve. The absolute magnitude is the brightness at phase angle zero, an arrangement known as opposition, from a distance of one AU.
Apparent magnitude
The absolute magnitude H can be used to calculate the apparent magnitude m of a body. For an object reflecting sunlight, H and m are connected by the relation
m = H + 5 log_10(d_BS d_BO / d_0²) − 2.5 log_10 q(α),
where α is the phase angle, the angle between the body–Sun and body–observer lines, and q(α) is the phase integral (the integration of reflected light; a number in the 0 to 1 range).
By the law of cosines, we have:
cos α = (d_BO² + d_BS² − d_OS²) / (2 d_BO d_BS).
Distances:
d_BO is the distance between the body and the observer
d_BS is the distance between the body and the Sun
d_OS is the distance between the observer and the Sun
d_0, a unit conversion factor, is the constant 1 AU, the average distance between the Earth and the Sun
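A minimal Python version of the relation, with d_0 fixed at 1 AU so that all distances are expressed in astronomical units (the example inputs are arbitrary):

```python
import math

def apparent_from_absolute(H, d_bs_au, d_bo_au, phase_integral):
    """Apparent magnitude of a sunlight-reflecting body from its absolute magnitude H,
    its Sun and observer distances (in AU), and the phase integral q(alpha)."""
    return H + 5 * math.log10(d_bs_au * d_bo_au) - 2.5 * math.log10(phase_integral)

# A hypothetical H = 10 body, 2 AU from the Sun and 1 AU from the observer, at full phase:
print(round(apparent_from_absolute(10.0, 2.0, 1.0, 1.0), 2))   # ≈ 11.51
```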
Approximations for phase integral
The value of depends on the properties of the reflecting surface, in particular on its roughness. In practice, different approximations are used based on the known or assumed properties of the surface. The surfaces of terrestrial planets are generally more difficult to model than those of gaseous planets, the latter of which have smoother visible surfaces.
Planets as diffuse spheres
Planetary bodies can be approximated reasonably well as ideal diffuse reflecting spheres. Let α be the phase angle in degrees; then
q(α) = (2/3) [(1 − α/180°) cos α + (1/π) sin α].
A full-phase diffuse sphere reflects two-thirds as much light as a diffuse flat disk of the same diameter. A quarter phase (α = 90°) has 1/π as much light as full phase (α = 0°).
By contrast, a diffuse disk reflector model is simply q(α) = 1, which isn't realistic, but it does represent the opposition surge for rough surfaces that reflect more uniform light back at low phase angles.
The definition of the geometric albedo p, a measure for the reflectivity of planetary surfaces, is based on the diffuse disk reflector model. The absolute magnitude H, diameter D (in kilometers) and geometric albedo p of a body are related by
D = (1329 / √p) × 10^(−0.2 H) km,
or equivalently,
H = 5 log_10(1329 / (D √p)).
Example: The Moon's absolute magnitude H can be calculated from its diameter (D ≈ 3,474 km) and geometric albedo (p ≈ 0.113): H = 5 log_10(1329 / (3474 √0.113)) ≈ +0.28.
We have d_BS ≈ 1 AU and d_BO ≈ 384,400 km ≈ 0.00257 AU.
At quarter phase, q(α) ≈ 2/(3π) (according to the diffuse reflector model), and this yields an apparent magnitude of about −11.0. The actual value is somewhat fainter, about −10.0. This is not a good approximation, because the phase curve of the Moon is too complicated for the diffuse reflector model. A more accurate formula is given in the following section.
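The diffuse-sphere model and the Moon example can be sketched as follows; the Moon's diameter and albedo are approximate assumed input values:

```python
import math

def phase_integral_sphere(alpha_deg):
    """Phase law of an ideal diffuse (Lambertian) sphere; alpha in degrees."""
    a = math.radians(alpha_deg)
    return (2 / 3) * ((1 - alpha_deg / 180) * math.cos(a) + math.sin(a) / math.pi)

def absolute_magnitude_H(diameter_km, geometric_albedo):
    """Absolute magnitude H from diameter in kilometres and geometric albedo."""
    return 5 * math.log10(1329 / (diameter_km * math.sqrt(geometric_albedo)))

H = absolute_magnitude_H(3474, 0.113)        # ≈ +0.28 for the Moon
d_bs, d_bo = 1.0, 0.00257                    # Sun and observer distances in AU
m_quarter = H + 5 * math.log10(d_bs * d_bo) - 2.5 * math.log10(phase_integral_sphere(90))
print(round(H, 2), round(m_quarter, 1))      # 0.28  -11.0
```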
More advanced models
Because Solar System bodies are never perfect diffuse reflectors, astronomers use different models to predict apparent magnitudes based on known or assumed properties of the body. For planets, approximations for the correction term −2.5 log_10 q(α) in the formula for m have been derived empirically, to match observations at different phase angles. The approximations recommended by the Astronomical Almanac are (with α in degrees):
Here β is the effective inclination of Saturn's rings (their tilt relative to the observer), which as seen from Earth varies between 0° and 27° over the course of one Saturn orbit, and φ′ is a small correction term depending on Uranus' sub-Earth and sub-solar latitudes; t is the Common Era year. Neptune's absolute magnitude is changing slowly due to seasonal effects as the planet moves along its 165-year orbit around the Sun, and the approximation above is only valid after the year 2000. For some circumstances, such as certain phase angles of Venus, no observations are available, and the phase curve is unknown in those cases. The formula for the Moon is only applicable to the near side of the Moon, the portion that is visible from the Earth.
Example 1: On 1 January 2019, Venus was from the Sun, and from Earth, at a phase angle of (near quarter phase). Under full-phase conditions, Venus would have been visible at Accounting for the high phase angle, the correction term above yields an actual apparent magnitude of This is close to the value of predicted by the Jet Propulsion Laboratory.
Example 2: At first quarter phase, the approximation for the Moon gives With that, the apparent magnitude of the Moon is close to the expected value of about . At last quarter, the Moon is about 0.06 mag fainter than at first quarter, because that part of its surface has a lower albedo.
Earth's albedo varies by a factor of 6, from 0.12 in the cloud-free case to 0.76 in the case of altostratus cloud. The absolute magnitude in the table corresponds to an albedo of 0.434. Due to the variability of the weather, Earth's apparent magnitude cannot be predicted as accurately as that of most other planets.
Asteroids
If an object has an atmosphere, it reflects light more or less isotropically in all directions, and its brightness can be modelled as a diffuse reflector. Bodies with no atmosphere, like asteroids or moons, tend to reflect light more strongly to the direction of the incident light, and their brightness increases rapidly as the phase angle approaches 0°. This rapid brightening near opposition is called the opposition effect. Its strength depends on the physical properties of the body's surface, and hence it differs from asteroid to asteroid.
In 1985, the IAU adopted the semi-empirical HG system, based on two parameters H and G called absolute magnitude and slope, to model the opposition effect for the ephemerides published by the Minor Planet Center:
m = H + 5 log_10(d_BS d_BO / d_0²) − 2.5 log_10((1 − G) Φ_1(α) + G Φ_2(α)),
where
the phase integral is q = 0.290 + 0.684 G, and
Φ_i(α) = exp(−A_i tan^(B_i)(α/2)) for i = 1 or 2, with A_1 = 3.33, A_2 = 1.87, B_1 = 0.63, and B_2 = 1.22.
This relation is valid for phase angles α < 120°, and works best when α < 20°.
The slope parameter G relates to the surge in brightness, typically 0.3 mag, when the object is near opposition. It is known accurately only for a small number of asteroids, hence for most asteroids a value of G = 0.15 is assumed. In rare cases, G can be negative. An example is 101955 Bennu, with G = −0.08.
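A sketch of the H–G brightness model in Python; the Φ-function coefficients used here are the commonly quoted values and should be treated as assumptions, and the example inputs are arbitrary:

```python
import math

def hg_apparent_magnitude(H, G, alpha_deg, d_bs_au, d_bo_au):
    """Apparent magnitude of an asteroid in the IAU H-G system (distances in AU)."""
    half = math.radians(alpha_deg) / 2
    phi1 = math.exp(-3.33 * math.tan(half) ** 0.63)   # coefficients assumed, see lead-in
    phi2 = math.exp(-1.87 * math.tan(half) ** 1.22)
    return (H + 5 * math.log10(d_bs_au * d_bo_au)
            - 2.5 * math.log10((1 - G) * phi1 + G * phi2))

# Hypothetical asteroid: H = 15, G = 0.15, 2.5 AU from the Sun, 1.6 AU from the observer,
# observed at a 10-degree phase angle:
print(round(hg_apparent_magnitude(15.0, 0.15, 10.0, 2.5, 1.6), 1))   # ≈ 18.7
```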
In 2012, the HG system was officially replaced by an improved system with three parameters H, G1 and G2, which produces more satisfactory results if the opposition effect is very small or restricted to very small phase angles. However, as of 2022, this H G1 G2 system has not been adopted by either the Minor Planet Center or the Jet Propulsion Laboratory.
The apparent magnitude of asteroids varies as they rotate, on time scales of seconds to weeks depending on their rotation period, by up to 2 magnitudes or more. In addition, their absolute magnitude can vary with the viewing direction, depending on their axial tilt. In many cases, neither the rotation period nor the axial tilt are known, limiting the predictability. The models presented here do not capture those effects.
Cometary magnitudes
The brightness of comets is given separately as total magnitude (m_1, the brightness integrated over the entire visible extent of the coma) and nuclear magnitude (m_2, the brightness of the core region alone). Both are different scales from the magnitude scale used for planets and asteroids, and cannot be used for a size comparison with an asteroid's absolute magnitude H.
The activity of comets varies with their distance from the Sun. Their brightness can be approximated as
m_1 = M_1 + 5 log_10(Δ/d_0) + 2.5 n_1 log_10(r/d_0),
m_2 = M_2 + 5 log_10(Δ/d_0) + 2.5 n_2 log_10(r/d_0),
where m_1 and m_2 are the total and nuclear apparent magnitudes of the comet, respectively, M_1 and M_2 are its "absolute" total and nuclear magnitudes, r and Δ are the body–Sun and body–observer distances, d_0 is the Astronomical Unit, and n_1 and n_2 are the slope parameters characterising the comet's activity. For n = 2, this reduces to the formula for a purely reflecting body (showing no cometary activity).
For example, the lightcurve of comet C/2011 L4 (PANSTARRS) can be approximated by On the day of its perihelion passage, 10 March 2013, comet PANSTARRS was from the Sun and from Earth. The total apparent magnitude is predicted to have been at that time. The Minor Planet Center gives a value close to that, .
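A sketch of the total-magnitude formula in Python; the absolute magnitude and slope parameter below are invented values for illustration, not measured parameters of any particular comet:

```python
import math

def comet_total_magnitude(M1, n1, r_au, delta_au):
    """Total apparent magnitude of a comet from its absolute total magnitude M1,
    activity slope n1, heliocentric distance r and observer distance delta (both in AU)."""
    return M1 + 5 * math.log10(delta_au) + 2.5 * n1 * math.log10(r_au)

# Hypothetical comet with M1 = 5.4 and n1 = 4, at 0.30 AU from the Sun and 1.10 AU from Earth:
print(round(comet_total_magnitude(5.4, 4.0, 0.30, 1.10), 1))   # ≈ 0.4
```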
The absolute magnitude of any given comet can vary dramatically. It can change as the comet becomes more or less active over time or if it undergoes an outburst. This makes it difficult to use the absolute magnitude for a size estimate. When comet 289P/Blanpain was discovered in 1819, its absolute magnitude was estimated as . It was subsequently lost and was only rediscovered in 2003. At that time, its absolute magnitude had decreased to , and it was realised that the 1819 apparition coincided with an outburst. 289P/Blanpain reached naked eye brightness (5–8 mag) in 1819, even though it is the comet with the smallest nucleus that has ever been physically characterised, and usually doesn't become brighter than 18 mag.
For some comets that have been observed at heliocentric distances large enough to distinguish between light reflected from the coma, and light from the nucleus itself, an absolute magnitude analogous to that used for asteroids has been calculated, allowing the sizes of their nuclei to be estimated.
Meteors
For a meteor, the standard distance for measurement of magnitudes is at an altitude of 100 km (62 mi) at the observer's zenith.
See also
Araucaria Project
Hertzsprung–Russell diagram – relates absolute magnitude or luminosity versus spectral color or surface temperature.
Jansky - the preferred unit for radio astronomy – linear in power/unit area
List of most luminous stars
Photographic magnitude
Surface brightness – the magnitude for extended objects
Zero point (photometry) – the typical calibration point for star flux
References
External links
Reference zero-magnitude fluxes
International Astronomical Union
Absolute Magnitude of a Star calculator
The Magnitude system
About stellar magnitudes
Obtain the magnitude of any star – SIMBAD
Converting magnitude of minor planets to diameter
Another table for converting asteroid magnitude to estimated diameter
Observational astronomy | Absolute magnitude | [
"Astronomy"
] | 4,008 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
1,979 | https://en.wikipedia.org/wiki/Alpha%20Centauri | Alpha Centauri (α Centauri, α Cen, or Alpha Cen) is a triple star system in the southern constellation of Centaurus. It consists of three stars: Rigil Kentaurus (α Centauri A), Toliman (α Centauri B), and Proxima Centauri (α Centauri C). Proxima Centauri is the closest star to the Sun at 4.2465 light-years (1.3020 pc).
Alpha Centauri A and B are Sun-like stars (class G and K, respectively) that together form the binary star system Alpha Centauri AB. To the naked eye, these two main components appear to be a single star with an apparent magnitude of −0.27. It is the brightest star in the constellation and the third-brightest in the night sky, outshone by only Sirius and Canopus.
Alpha Centauri A (Rigil Kentaurus) has 1.1 times the mass and 1.5 times the luminosity of the Sun, while Alpha Centauri B (Toliman) is smaller and cooler, at 0.9 solar masses and less than 0.5 solar luminosities. The pair orbit around a common centre with an orbital period of 79 years. Their elliptical orbit is eccentric, so that the distance between A and B varies from 35.6 astronomical units (AU), or about the distance between Pluto and the Sun, to 11.2 AU, or about the distance between Saturn and the Sun. One astronomical unit is the distance from Earth to the Sun, about 150 million kilometres.
Proxima Centauri, or Alpha Centauri C, is a small faint red dwarf (class M). Though not visible to the naked eye, Proxima Centauri is the closest star to the Sun at a distance of 4.24 light-years, slightly closer than Alpha Centauri AB. Currently, the distance between Proxima Centauri and Alpha Centauri AB is about 13,000 AU (0.21 light-years), equivalent to about 430 times the radius of Neptune's orbit.
Proxima Centauri has one confirmed planet: Proxima b, an Earth-sized planet in the habitable zone (though it is unlikely to be habitable); one candidate planet, Proxima d, a sub-Earth which orbits very close to the star; and the controversial Proxima c, a mini-Neptune about 1.5 astronomical units away. Alpha Centauri A may have a Neptune-sized planet in the habitable zone, though it is not yet known with certainty to be planetary in nature and could be an artifact of the discovery mechanism. Alpha Centauri B has no known planets: planet Bb, purportedly discovered in 2012, was later disproven, and no other planet has yet been confirmed.
Etymology and nomenclature
α Centauri (Latinised to Alpha Centauri) is the system's designation given by J. Bayer in 1603. It belongs to the constellation Centaurus, named after the half-human, half-horse creature in Greek mythology. Heracles accidentally wounded the centaur and placed him in the sky after his death. Alpha Centauri marks the right front hoof of the Centaur. The common name Rigil Kentaurus is a Latinisation of the Arabic translation Rijl al-Qinṭūrus, meaning "the Foot of the Centaur". Qinṭūrus is the Arabic transliteration of the Greek Kentaurus. The name is frequently abbreviated to Rigil Kent or even Rigil, though the latter name is better known for Rigel (Beta Orionis).
An alternative name found in European sources, Toliman, is an approximation of the Arabic aẓ-Ẓalīmān (in older transcription, aṭ-Ṭhalīmān), meaning 'the (two male) Ostriches', an appellation Zakariya al-Qazwini had applied to the pair of stars Lambda and Mu Sagittarii; it was often not clear on old star maps which name was intended to go with which star (or stars), and the referents changed over time. The name Toliman originates with Jacobus Golius' 1669 edition of Al-Farghani's Compendium. Tolimân is Golius' Latinisation of the Arabic name "the ostriches", the name of an asterism of which Alpha Centauri formed the main star.
Alpha Centauri C was discovered in 1915 by Robert T. A. Innes, who suggested that it be named Proxima Centaurus. The name Proxima Centauri later became more widely used and is now listed by the International Astronomical Union (IAU) as the approved proper name; it is frequently abbreviated to Proxima.
In 2016, the Working Group on Star Names of the IAU, having decided to attribute proper names to individual component stars rather than to multiple systems, approved the name Rigil Kentaurus as being restricted to Alpha Centauri A and the name Proxima Centauri for Alpha Centauri C. On 10 August 2018, the IAU approved the name Toliman for Alpha Centauri B.
Other names
During the 19th century, the northern amateur popularist E. H. Burritt used the now-obscure name Bungula. Its origin is not known, but it may have been coined from the Greek letter beta (β) and Latin ungula 'hoof', originally for Beta Centauri (the other hoof).
In Chinese astronomy, Nán Mén, meaning Southern Gate, refers to an asterism consisting of Alpha Centauri and Epsilon Centauri. Consequently, the Chinese name for Alpha Centauri itself is Nán Mén Èr, the Second Star of the Southern Gate.
To the Indigenous Boorong people of northwestern Victoria in Australia, Alpha Centauri and Beta Centauri are Bermbermgle, two brothers noted for their courage and destructiveness, who speared and killed Tchingal "The Emu" (the Coalsack Nebula). The form in Wotjobaluk is Bram-bram-bult.
Observation
To the naked eye, Alpha Centauri AB appears to be a single star, the brightest in the southern constellation of Centaurus. Their apparent angular separation varies over about 80 years between 2 and 22 arcseconds (the naked eye has a resolution of 60 arcsec), but through much of the orbit, both are easily resolved in binoculars or small telescopes. At −0.27 apparent magnitude (the combined magnitude of A and B), Alpha Centauri is a first-magnitude star and is fainter only than Sirius and Canopus. It is the outer star of The Pointers or The Southern Pointers, so called because the line through Beta Centauri (Hadar/Agena), some 4.5° west, points to the constellation Crux—the Southern Cross. The Pointers easily distinguish the true Southern Cross from the fainter asterism known as the False Cross.
South of about 29° South latitude, Alpha Centauri is circumpolar and never sets below the horizon. North of about 29° North latitude, Alpha Centauri never rises. Alpha Centauri lies close to the southern horizon when viewed from latitudes between 29° North and the equator (close to Hermosillo and Chihuahua City in Mexico; Galveston, Texas; Ocala, Florida; and Lanzarote, the Canary Islands of Spain), but only for a short time around its culmination. The star culminates each year at local midnight on 24 April and at local 9 p.m. on 8 June.
As seen from Earth, Proxima Centauri is 2.2° southwest of Alpha Centauri AB; this distance is about four times the angular diameter of the Moon. Proxima Centauri appears as a deep-red star of a typical apparent magnitude of 11.1 in a sparsely populated star field, requiring moderately sized telescopes to be seen. Listed as V645 Cen in the General Catalogue of Variable Stars, version 4.2, this UV Ceti star or "flare star" can unexpectedly brighten rapidly by as much as 0.6 magnitude at visual wavelengths, then fade after only a few minutes. Some amateur and professional astronomers regularly monitor it for outbursts using either optical or radio telescopes. In August 2015, the largest recorded flares of the star occurred, with the star becoming 8.3 times brighter than normal on 13 August, in the B band (blue light region).
Alpha Centauri may be inside the G-cloud of the Local Bubble, and its nearest known neighbouring system is the binary brown dwarf system Luhman 16, at about 3.6 light-years from it.
Observational history
Alpha Centauri is listed in the star catalog appended to Ptolemy's 2nd-century Almagest. He gave its ecliptic coordinates, but texts differ as to whether the ecliptic latitude reads or . (Presently the ecliptic latitude is , but it has decreased by a fraction of a degree since Ptolemy's time due to proper motion.) In Ptolemy's time, Alpha Centauri was visible from Alexandria, Egypt, at but, due to precession, its declination is now , and it can no longer be seen at that latitude. English explorer Robert Hues brought Alpha Centauri to the attention of European observers in his 1592 work Tractatus de Globis, along with Canopus and Achernar, noting:
The binary nature of Alpha Centauri AB was recognized in December 1689 by Jean Richaud, while observing a passing comet from his station in Puducherry. Alpha Centauri was only the third binary star to be discovered, preceded by Mizar AB and Acrux.
The large proper motion of Alpha Centauri AB was discovered by Manuel John Johnson, observing from Saint Helena, who informed Thomas Henderson at the Royal Observatory, Cape of Good Hope of it. The parallax of Alpha Centauri was subsequently determined by Henderson from many exacting positional observations of the AB system between April 1832 and May 1833. He withheld his results, however, because he suspected they were too large to be true, but eventually published them in 1839 after Bessel released his own accurately determined parallax for 61 Cygni in 1838. For this reason, Alpha Centauri is sometimes considered as the second star to have its distance measured because Henderson's work was not fully acknowledged at first. (The distance of Alpha Centauri from the Earth is now reckoned at 4.396 light-years or about 1.35 parsecs.)
Later, John Herschel made the first micrometrical observations in 1834. Since the early 20th century, measures have been made with photographic plates.
By 1926, William Stephen Finsen calculated the approximate orbit elements close to those now accepted for this system. All future positions are now sufficiently accurate for visual observers to determine the relative places of the stars from a binary star ephemeris. Others, like D. Pourbaix (2002), have regularly refined the precision of new published orbital elements.
Robert T. A. Innes discovered Proxima Centauri in 1915 by blinking photographic plates taken at different times during a proper motion survey. These showed large proper motion and parallax similar in both size and direction to those of Alpha Centauri AB, which suggested that Proxima Centauri is part of the Alpha Centauri system and slightly closer to Earth than Alpha Centauri AB. As such, Innes concluded that Proxima Centauri was the closest star to Earth yet discovered.
Kinematics
All components of display significant proper motion against the background sky. Over centuries, this causes their apparent positions to slowly change. Proper motion was unknown to ancient astronomers. Most assumed that the stars were permanently fixed on the celestial sphere, as stated in the works of the philosopher Aristotle. In 1718, Edmond Halley found that some stars had significantly moved from their ancient astrometric positions.
In the 1830s, Thomas Henderson discovered the true distance to Alpha Centauri by analysing his many astrometric mural circle observations. He then realised this system also likely had a high proper motion. In this case, the apparent stellar motion was found using Nicolas Louis de Lacaille's astrometric observations of 1751–1752, by the observed differences between the two measured positions in different epochs.
Calculated proper motion of the centre of mass for Alpha Centauri AB is about 3620 mas/y (milliarcseconds per year) toward the west and 694 mas/y toward the north, giving an overall motion of 3686 mas/y in a direction 11° north of west. The motion of the centre of mass is about 6.1 arcmin each century, or 1.02° each millennium. The speed in the western direction is and in the northerly direction . Using spectroscopy the mean radial velocity has been determined to be around towards the Solar System. This gives a speed with respect to the Sun of , very close to the peak in the distribution of speeds of nearby stars.
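The combined proper motion and its direction follow from the two components; a sketch using the values quoted above:

```python
import math

pm_west = 3620.0    # proper motion toward the west, in mas/yr
pm_north = 694.0    # proper motion toward the north, in mas/yr

total = math.hypot(pm_west, pm_north)                 # overall proper motion
angle = math.degrees(math.atan2(pm_north, pm_west))   # angle north of due west

print(round(total), round(angle, 1))   # ≈ 3686 mas/yr, ≈ 10.9 degrees north of west
```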
Since Alpha Centauri AB is almost exactly in the plane of the Milky Way as viewed from Earth, many stars appear behind it. In early May 2028, Alpha Centauri A will pass between the Earth and a distant red star, when there is a 45% probability that an Einstein ring will be observed. Other conjunctions will also occur in the coming decades, allowing accurate measurement of proper motions and possibly giving information on planets.
Predicted future changes
Based on the system's common proper motion and radial velocities, Alpha Centauri will continue to change its position in the sky significantly and will gradually brighten. For example, in about 6,200 CE, α Centauri's true motion will cause an extremely rare first-magnitude stellar conjunction with Beta Centauri, forming a brilliant optical double star in the southern sky. It will then pass just north of the Southern Cross or Crux, before moving northwest and up towards the present celestial equator and away from the galactic plane. By about 26,700 CE, in the present-day constellation of Hydra, Alpha Centauri will reach perihelion, its closest approach to the Sun, though later calculations suggest that this will occur in 27,000 AD. At its nearest approach, α Centauri will attain a maximum apparent magnitude of −0.86, comparable to the present-day magnitude of Canopus, but it will still not surpass that of Sirius, which will brighten incrementally over the next 60,000 years, and will continue to be the brightest star as seen from Earth (other than the Sun) for the next 210,000 years.
Stellar system
Alpha Centauri is a triple star system, with its two main stars, A and B, together comprising a binary component. The AB designation, or older A×B, denotes the mass centre of a main binary system relative to companion star(s) in a multiple star system. AB-C refers to the component of Proxima Centauri in relation to the central binary, being the distance between the centre of mass and the outlying companion. Because the distance between Proxima (C) and either of Alpha Centauri A or B is similar, the AB binary system is sometimes treated as a single gravitational object.
Orbital properties
The A and B components of Alpha Centauri have an orbital period of 79.762 years. Their orbit is moderately eccentric, as it has an eccentricity of almost 0.52; their closest approach or periastron is 11.2 AU, or about the distance between the Sun and Saturn; and their furthest separation or apastron is 35.6 AU, about the distance between the Sun and Pluto. The most recent periastron was in August 1955 and the next will occur in May 2035; the most recent apastron was in May 1995 and will next occur in 2075.
Viewed from Earth, the apparent orbit of A and B means that their separation and position angle (PA) are in continuous change throughout their projected orbit. Observed stellar positions in 2019 are separated by 4.92 arcsec through the PA of 337.1°, increasing to 5.49 arcsec through 345.3° in 2020. The closest recent approach was in February 2016, at 4.0 arcsec through the PA of 300°. The observed maximum separation of these stars is about 22 arcsec, while the minimum distance is 1.7 arcsec. The widest separation occurred during February 1976, and the next will be in January 2056.
Alpha Centauri C is about 13,000 AU (0.21 light-years) from Alpha Centauri AB, equivalent to about 5% of the distance between Alpha Centauri AB and the Sun. Until 2017, measurements of its small speed and its trajectory were of too little accuracy and duration in years to determine whether it is bound to Alpha Centauri AB or unrelated.
Radial velocity measurements made in 2017 were precise enough to show that Proxima Centauri and Alpha Centauri AB are gravitationally bound. The orbital period of Proxima Centauri is approximately 550,000 years, with an eccentricity of 0.5, much more eccentric than Mercury's. Proxima Centauri comes within about 4,300 AU of AB at periastron, and its apastron occurs at about 13,000 AU.
Physical properties
Asteroseismic studies, chromospheric activity, and stellar rotation (gyrochronology) are all consistent with the Alpha Centauri system being similar in age to, or slightly older than, the Sun. Asteroseismic analyses that incorporate tight observational constraints on the stellar parameters for the Alpha Centauri stars have yielded age estimates of Gyr, Gyr, 6.4 Gyr, and Gyr. Age estimates for the stars based on chromospheric activity (Calcium H & K emission) yield whereas gyrochronology yields Gyr. Stellar evolution theory implies both stars are slightly older than the Sun at 5 to 6 billion years, as derived by their mass and spectral characteristics.
From the orbital elements, the total mass of Alpha Centauri AB is about 2.0 solar masses – or twice that of the Sun. The average individual stellar masses are about 1.09 and 0.90 solar masses, respectively, though slightly different masses have also been quoted in recent years. Alpha Centauri A and B have absolute magnitudes of +4.38 and +5.71, respectively.
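As a rough consistency check (not a calculation from the source), Kepler's third law in solar units — the square of the period in years equals the cube of the semi-major axis in AU divided by the total mass in solar masses — recovers the quoted ~80-year period from the separations and total mass given above:

```python
periastron_au = 11.2
apastron_au = 35.6
semi_major_axis = (periastron_au + apastron_au) / 2   # ≈ 23.4 AU
total_mass_solar = 2.0                                # ≈ 2 solar masses

period_years = (semi_major_axis ** 3 / total_mass_solar) ** 0.5
print(round(period_years, 1))   # ≈ 80.0, close to the observed 79.762 years
```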
Alpha Centauri AB System
Alpha Centauri A
Alpha Centauri A, also known as Rigil Kentaurus, is the principal member, or primary, of the binary system. It is a solar-like main-sequence star with a similar yellowish colour, whose stellar classification is spectral type G2-V; it is about 10% more massive than the Sun, with a radius about 22% larger. When considered among the individual brightest stars in the night sky, it is the fourth-brightest at an apparent magnitude of +0.01, being slightly fainter than Arcturus at an apparent magnitude of −0.05.
The type of magnetic activity on Alpha Centauri A is comparable to that of the Sun, showing coronal variability due to star spots, as modulated by the rotation of the star. However, since 2005 the activity level has fallen into a deep minimum that might be similar to the Sun's historical Maunder Minimum. Alternatively, it may have a very long stellar activity cycle and is slowly recovering from a minimum phase.
Alpha Centauri B
Alpha Centauri B, also known as Toliman, is the secondary star of the binary system. It is a main-sequence star of spectral type K1-V, making it more of an orange colour than Alpha Centauri A; it has around 90% of the mass of the Sun and a 14% smaller diameter. Although it has a lower luminosity than A, Alpha Centauri B emits more energy in the X-ray band. Its light curve varies on a short time scale, and there has been at least one observed flare. It is more magnetically active than Alpha Centauri A, showing a cycle of about 8.2 years compared to 11 years for the Sun, and has about half the minimum-to-peak variation in coronal luminosity of the Sun. Alpha Centauri B has an apparent magnitude of +1.35, slightly dimmer than Mimosa.
Alpha Centauri C (Proxima Centauri)
Alpha Centauri C, better known as Proxima Centauri, is a small main-sequence red dwarf of spectral class M6-Ve. It has an absolute magnitude of +15.60, over 20,000 times fainter than the Sun. Its mass is calculated to be about 0.12 solar masses. It is the closest star to the Sun but is too faint to be visible to the naked eye.
Planetary system
The Alpha Centauri system as a whole has two confirmed planets, both of them around Proxima Centauri. While other planets have been claimed to exist around all of the stars, none of the discoveries have been confirmed.
Planets of Proxima Centauri
Proxima Centauri b is a terrestrial planet discovered in 2016 by astronomers at the European Southern Observatory (ESO). It has an estimated minimum mass of 1.17 Earth masses and orbits approximately 0.049 AU from Proxima Centauri, placing it in the star's habitable zone.
The discovery of Proxima Centauri c was formally published in 2020; the planet could be a super-Earth or mini-Neptune. It has a mass of roughly 7 Earth masses and orbits about from Proxima Centauri with a period of . In June 2020, a possible direct imaging detection of the planet hinted at the presence of a large ring system. However, a 2022 study disputed the existence of this planet.
A 2020 paper refining Proxima b's mass excludes the presence of extra companions with masses above at periods shorter than 50 days, but the authors detected a radial-velocity curve with a periodicity of 5.15 days, suggesting the presence of a planet with a mass of about . This planet, Proxima Centauri d, was detected in 2022.
Planets of Alpha Centauri A
In 2021, a candidate planet named Candidate 1 (or C1) was detected around Alpha Centauri A, thought to orbit at approximately with a period of about one year, and to have a mass between that of Neptune and one-half that of Saturn, though it may be a dust disk or an artifact. The possibility of C1 being a background star has been ruled out. If this candidate is confirmed, the temporary name C1 will most likely be replaced with the scientific designation Alpha Centauri Ab in accordance with current naming conventions.
GO Cycle 1 observations with the James Webb Space Telescope (JWST) were planned to search for planets around Alpha Centauri A, along with observations of Epsilon Muscae. The coronagraphic observations, which occurred on July 26 and 27, 2023, were unsuccessful, with follow-up observations planned for March 2024. Pre-launch estimates predicted that JWST would be able to find planets with a radius of 5 Earth radii at . Multiple observations every 3–6 months could push the limit down to 3 Earth radii. Post-launch estimates based on observations of HIP 65426 b find that JWST should be able to find planets even closer to Alpha Centauri A and could find a 5 Earth-radius planet at . Candidate 1 has an estimated radius between and orbits at . It is therefore likely within the reach of JWST observations.
Planets of Alpha Centauri B
The first claim of a planet around Alpha Centauri B was that of Alpha Centauri Bb in 2012, which was proposed to be an Earth-mass planet in a 3.2-day orbit. This was refuted in 2015 when the apparent planet was shown to be an artifact of the way the radial velocity data was processed.
A search for transits of planet Bb was conducted with the Hubble Space Telescope from 2013 to 2014. This search detected one potential transit-like event, which could be associated with a different planet with a radius around . This planet would most likely orbit Alpha Centauri B with an orbital period of 20.4 days or less, with only a 5% chance of it having a longer orbit. The median of the likely orbits is 12.4 days. Its orbit would likely have an eccentricity of 0.24 or less. It could have lakes of molten lava and would be far too close to Alpha Centauri B to harbour life. If confirmed, this planet might be called Alpha Centauri Bc. However, that name has not been used in the literature, as it is not a claimed discovery.
Hypothetical planets
Additional planets may exist in the Alpha Centauri system, either orbiting Alpha Centauri A or Alpha Centauri B individually, or in large orbits around Alpha Centauri AB. Because both stars are fairly similar to the Sun (for example, in age and metallicity), astronomers have been especially interested in making detailed searches for planets in the Alpha Centauri system. Several established planet-hunting teams have used various radial velocity or star transit methods in their searches around these two bright stars. All the observational studies have so far failed to find evidence for brown dwarfs or gas giants.
In 2009, computer simulations showed that a planet might have been able to form near the inner edge of Alpha Centauri B's habitable zone. Certain special assumptions, such as considering that the Alpha Centauri pair may have initially formed with a wider separation and later moved closer to each other (as might be possible if they formed in a dense star cluster), would permit an accretion-friendly environment farther from the star. Bodies around Alpha Centauri A would be able to orbit at slightly greater distances due to its stronger gravity. In addition, the lack of any brown dwarfs or gas giants in close orbits around Alpha Centauri makes the likelihood of terrestrial planets greater than it would be otherwise. A theoretical study indicates that a radial velocity analysis might detect a hypothetical planet of in Alpha Centauri B's habitable zone.
Radial velocity measurements of Alpha Centauri B made with the High Accuracy Radial Velocity Planet Searcher spectrograph were sufficiently sensitive to detect a planet within the habitable zone of the star (i.e. with an orbital period P = 200 days), but no planets were detected.
Current estimates place the probability of finding an Earth-like planet around Alpha Centauri at roughly 75%. The observational thresholds for planet detection in the habitable zones by the radial velocity method are currently (2017) estimated to be about for Alpha Centauri A, for Alpha Centauri B, and for Proxima Centauri.
Early computer-generated models of planetary formation predicted the existence of terrestrial planets around both Alpha Centauri A and B, but most recent numerical investigations have shown that the gravitational pull of the companion star renders the accretion of planets difficult. Despite these difficulties, given the similarities to the Sun in spectral types, star type, age and probable stability of the orbits, it has been suggested that this stellar system could hold one of the best possibilities for harbouring extraterrestrial life on a potential planet.
In the Solar System, it was once thought that Jupiter and Saturn were probably crucial in perturbing comets into the inner Solar System, providing the inner planets with a source of water and various other ices. However, since isotope measurements of the deuterium-to-hydrogen (D/H) ratio in comets Halley, Hyakutake, Hale–Bopp, 2002 T7, and Tuttle yield values approximately twice that of Earth's oceanic water, more recent models and research predict that less than 10% of Earth's water was supplied by comets. In the Alpha Centauri system, Proxima Centauri may have influenced the planetary disk as the system was forming, enriching the area around Alpha Centauri with volatile materials. This would be discounted if, for example, either Alpha Centauri A or B happened to have gas giants orbiting it, or if A and B themselves were able to perturb comets into each other's inner systems, as Jupiter and Saturn presumably have done in the Solar System. Such icy bodies probably also reside in the Oort clouds of other planetary systems. When they are influenced gravitationally by either the gas giants or disruptions by passing nearby stars, many of these icy bodies then travel star-wards. Such ideas also apply to the close approach of Alpha Centauri or other stars to the Solar System, when, in the distant future, the Oort Cloud might be disrupted enough to increase the number of active comets.
To be in the habitable zone, a planet around Alpha Centauri A would have an orbital radius of between about 1.2 and so as to have similar planetary temperatures and conditions for liquid water to exist. For the slightly less luminous and cooler Alpha Centauri B, the habitable zone is between about 0.7 and .
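These figures follow from the usual first-order scaling of the habitable zone with stellar luminosity: the distance at which a planet receives Earth-like flux grows as the square root of the luminosity. Taking roughly 1.5 solar luminosities for Alpha Centauri A and 0.5 for Alpha Centauri B as assumed illustrative values (neither figure is quoted in this section),
<math display="block">
r_{\mathrm{HZ}} \approx \sqrt{\frac{L_\ast}{L_\odot}}\ \mathrm{AU},
\qquad
r_A \approx \sqrt{1.5} \approx 1.2\ \mathrm{AU},
\qquad
r_B \approx \sqrt{0.5} \approx 0.7\ \mathrm{AU},
</math>
in agreement with the inner distances quoted above.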
With the goal of finding evidence of such planets, both Proxima Centauri and Alpha Centauri AB were among the listed "Tier-1" target stars for NASA's Space Interferometry Mission (SIM). Detecting planets as small as three Earth masses or smaller within two AU of a "Tier-1" target would have been possible with this new instrument. The SIM mission, however, was cancelled due to financial issues in 2010.
Circumstellar discs
Based on observations made between 2007 and 2012, a study found a slight excess of emission in the 24 μm (mid/far-infrared) band surrounding , which may be interpreted as evidence for a sparse circumstellar disc or dense interplanetary dust. The total mass was estimated to be between to the mass of the Moon, or 10–100 times the mass of the Solar System's zodiacal cloud. If such a disc existed around both stars, A's disc would likely be stable to and B's would likely be stable to . This would put A's disc entirely within the frost line, and a small part of B's outer disc just outside.
View from this system
The sky from Alpha Centauri would appear much as it does from the Earth, except that Centaurus's brightest star, being Alpha Centauri itself, would be absent from the constellation. The Sun would appear as a white star of apparent magnitude +0.5, roughly the same as the average brightness of Betelgeuse from Earth. It would lie at the point antipodal to Alpha Centauri's current right ascension and declination, in eastern Cassiopeia, easily outshining all the rest of the stars in the constellation. With the placement of the Sun east of the magnitude 3.4 star Epsilon Cassiopeiae, nearly in front of the Heart Nebula, the "W" line of stars of Cassiopeia would have a "/W" shape.
The positions of other nearby stars would change noticeably. Sirius, 9.2 light-years away from the system, would still be the brightest star in the night sky, at magnitude −1.2, but would be located in Orion, less than a degree away from Betelgeuse. Procyon, which would also be slightly farther away than it is from the Sun, would move into the middle of Gemini, outshining Pollux.
A planet around either Alpha Centauri A or B would see the other star as a very bright secondary. For example, an Earth-like planet orbiting Alpha Centauri A with a revolution period of 1.34 years would get Sun-like illumination from its primary, while Alpha Centauri B would appear 5.7–8.6 magnitudes dimmer (−21.0 to −18.2), 190–2,700 times dimmer than A but still 150–2,100 times brighter than the full Moon. Conversely, an Earth-like planet orbiting Alpha Centauri B with a revolution period of 0.63 years would get nearly Sun-like illumination from its primary, while Alpha Centauri A would appear 4.6–7.3 magnitudes dimmer (−22.1 to −19.4), 70 to 840 times dimmer than B but still 470–5,700 times brighter than the full Moon.
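The brightness ratios quoted here follow directly from the magnitude differences, since each magnitude corresponds to a factor of about 2.512 in flux. A minimal sketch of the conversion, using the magnitude differences given above:
<syntaxhighlight lang="python">
# Convert a difference in apparent magnitude to a flux (brightness) ratio:
#   ratio = 10 ** (0.4 * delta_m)
# The magnitude differences below are the ones quoted in the text.
for delta_m in (5.7, 8.6, 4.6, 7.3):
    ratio = 10 ** (0.4 * delta_m)
    print(f"{delta_m} mag fainter -> about {ratio:,.0f} times dimmer")
# 5.7 -> ~190, 8.6 -> ~2,750, 4.6 -> ~70, 7.3 -> ~830
</syntaxhighlight>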
Proxima Centauri would appear as one dim star among many, at magnitude 4.5 at its current distance and magnitude 2.6 at periastron.
Future exploration
Alpha Centauri is a natural first target for crewed or robotic interstellar exploration. With current spacecraft technologies, crossing the distance between the Sun and Alpha Centauri would take several millennia, though nuclear pulse propulsion or laser light-sail technology, as considered in the Breakthrough Starshot program, could potentially make the journey in as little as 20 years. An objective of such a mission would be to make a fly-by of, and possibly photograph, planets that might exist in the system. Proxima Centauri b, whose discovery was announced by the European Southern Observatory in August 2016, would be a target for the Starshot program.
In 2017, NASA released a mission concept that would send a spacecraft to Alpha Centauri in 2069, timed to coincide with the 100th anniversary of the first crewed lunar landing in 1969. Even at 10% of the speed of light (about 108 million km/h), which NASA experts say may be achievable, it would take the spacecraft about 44 years to reach the system, arriving around the year 2113, and a signal would then need another 4 years to reach Earth, arriving around 2117. The concept received no further funding or development.
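The dates in this concept follow from simple arithmetic on the distance and cruise speed. The sketch below assumes a distance to Alpha Centauri of about 4.37 light-years, a commonly quoted figure that is not restated in this paragraph:
<syntaxhighlight lang="python">
# Travel and signal times for the 2069 mission concept.
distance_ly = 4.37      # distance to Alpha Centauri in light-years (assumed value)
cruise_speed = 0.10     # cruise speed as a fraction of the speed of light

travel_years = distance_ly / cruise_speed   # ~44 years of travel
signal_years = distance_ly                  # light needs ~4.4 years to return

print(f"Arrival around {2069 + round(travel_years)}")                              # ~2113
print(f"Signal reaches Earth around {2069 + round(travel_years + signal_years)}")  # ~2117
</syntaxhighlight>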
Historical distance estimates
{| class="wikitable sortable mw-collapsible"
|+ Alpha Centauri AB historical distance estimates
|-
! rowspan="2" | Source
! rowspan="2" |Year
! rowspan="2" |Subject!! rowspan="2" | Parallax (mas) !! colspan="3" | Distance !! rowspan="2" | References
|-
!parsecs !! light-years !! petametres
|-
| H. Henderson || 1839 || AB || || || 2.81 ± 0.53 || ||
|-
| T. Henderson
|1842
|AB
|
| 1.10 ± 0.15
| 3.57 ± 0.5
|
|
|-
| Maclear
|1851
|AB
|
|
|
| 32.4 ± 2.5
|
|-
| Moesta
|1868
|AB
|
|
|
|
|
|-
| Gill & Elkin
|1885
|AB
|
|
|
|
|
|-
| Roberts
|1895
|AB
|
| 1.32 ± 0.2
| 4.29 ± 0.65
|
|
|-
| Woolley et al.
|1970
|AB
|
|
|
|
|
|-
| Gliese & Jahreiß
|1991
|AB
|
|
|
|
|
|-
| van Altena et al.
| 1995
| AB
|
|
|
|
|
|-
| Perryman et al.
| 1997
| AB
|
|
|
|
|-
| Söderhjelm
| 1999
| AB
|
|
|
|
|
|-
|rowspan="2"| van Leeuwen
|rowspan="2"| 2007
| A
|
|
|
|
|
|-
| B
|
|
|
| 37.5 ± 2.5
|
|-
| RECONS TOP100
|2012
|AB
|
|
|
|
|
|}
In culture
Alpha Centauri has been recognized and given cultural significance throughout history, particularly in the Southern Hemisphere. Polynesians have used Alpha Centauri in star navigation, calling it Kamailehope. In the Ngarrindjeri culture of Australia, Alpha Centauri and Beta Centauri represent two sharks chasing a stingray (the Southern Cross), and in Incan culture the two stars form the eyes of a llama-shaped dark constellation embedded in the band of stars that the visible Milky Way forms in the sky. In ancient Egypt it was also revered, and in China it is known as part of the South Gate asterism. Due to its proximity, the Alpha Centauri system has appeared in many works of fiction.
See also
Alpha Centauri in fiction
List of nearest stars
Project Longshot
Sagan Planet Walk
Notes
References
External links
Hypothetical planets or exploration
G-type main-sequence stars
K-type main-sequence stars
M-type main-sequence stars
Centauri, Alpha
Maunder Minimum
Triple star systems
Hypothetical planetary systems
Centaurus
Rigil Kentaurus
Centauri, Alpha
PD-60 05483
0559
128620 and 128621
071681 and 071683
5759 and 5760
Articles containing video clips
16891215
Astronomical objects known since antiquity | Alpha Centauri | [
"Astronomy"
] | 7,451 | [
"Maunder Minimum",
"Magnetism in astronomy",
"Centaurus",
"Constellations"
] |
1,997 | https://en.wikipedia.org/wiki/Algebraic%20geometry | Algebraic geometry is a branch of mathematics which uses abstract algebraic techniques, mainly from commutative algebra, to solve geometrical problems. Classically, it studies zeros of multivariate polynomials; the modern approach generalizes this in a few different aspects.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. These are plane algebraic curves. A point of the plane lies on an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest like singular points, inflection points and points at infinity. More advanced questions involve the topology of the curve and the relationship between curves defined by different equations.
Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. As a study of systems of polynomial equations in several variables, the subject of algebraic geometry begins with finding specific solutions via equation solving, and then proceeds to understand the intrinsic properties of the totality of solutions of a system of equations. This understanding requires both conceptual theory and computational technique.
In the 20th century, algebraic geometry split into several subareas.
The mainstream of algebraic geometry is devoted to the study of the complex points of the algebraic varieties and more generally to the points with coordinates in an algebraically closed field.
Real algebraic geometry is the study of the real algebraic varieties.
Diophantine geometry and, more generally, arithmetic geometry is the study of algebraic varieties over fields that are not algebraically closed and, specifically, over fields of interest in algebraic number theory, such as the field of rational numbers, number fields, finite fields, function fields, and p-adic fields.
A large part of singularity theory is devoted to the singularities of algebraic varieties.
Computational algebraic geometry is an area that has emerged at the intersection of algebraic geometry and computer algebra, with the rise of computers. It consists mainly of algorithm design and software development for the study of properties of explicitly given algebraic varieties.
Much of the development of the mainstream of algebraic geometry in the 20th century occurred within an abstract algebraic framework, with increasing emphasis being placed on "intrinsic" properties of algebraic varieties not dependent on any particular way of embedding the variety in an ambient coordinate space; this parallels developments in topology, differential and complex geometry. One key achievement of this abstract algebraic geometry is Grothendieck's scheme theory which allows one to use sheaf theory to study algebraic varieties in a way which is very similar to its use in the study of differential and analytic manifolds. This is obtained by extending the notion of point: In classical algebraic geometry, a point of an affine variety may be identified, through Hilbert's Nullstellensatz, with a maximal ideal of the coordinate ring, while the points of the corresponding affine scheme are all prime ideals of this ring. This means that a point of such a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points, and of algebraic number theory. Wiles' proof of the longstanding conjecture called Fermat's Last Theorem is an example of the power of this approach.
Basic notions
Zeros of simultaneous polynomials
In classical algebraic geometry, the main objects of interest are the vanishing sets of collections of polynomials, meaning the set of all points that simultaneously satisfy one or more polynomial equations. For instance, the two-dimensional sphere of radius 1 in three-dimensional Euclidean space R3 could be defined as the set of all points (x, y, z) with x² + y² + z² − 1 = 0.
A "slanted" circle in R3 can be defined as the set of all points which satisfy the two polynomial equations
Affine varieties
First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, but many of the same results are true if we assume only that k is algebraically closed. We consider the affine space of dimension n over k, denoted An(k) (or more simply An, when k is clear from the context). When one fixes a coordinate system, one may identify An(k) with kn. The purpose of not working with kn is to emphasize that one "forgets" the vector space structure that kn carries.
A function f : An → A1 is said to be polynomial (or regular) if it can be written as a polynomial, that is, if there is a polynomial p in k[x1,...,xn] such that f(M) = p(t1,...,tn) for every point M with coordinates (t1,...,tn) in An. The property of a function to be polynomial (or regular) does not depend on the choice of a coordinate system in An.
When a coordinate system is chosen, the regular functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k. Therefore, the set of the regular functions on An is a ring, which is denoted k[An].
We say that a polynomial vanishes at a point if evaluating it at that point gives zero. Let S be a set of polynomials in k[An]. The vanishing set of S (or vanishing locus or zero set) is the set V(S) of all points in An where every polynomial in S vanishes. Symbolically, V(S) = {(t1, ..., tn) | p(t1, ..., tn) = 0 for all p in S}.
A subset of An which is V(S), for some S, is called an algebraic set. The V stands for variety (a specific type of algebraic set to be defined below).
Given a subset U of An, can one recover the set of polynomials which generate it? If U is any subset of An, define I(U) to be the set of all polynomials whose vanishing set contains U. The I stands for ideal: if two polynomials f and g both vanish on U, then f+g vanishes on U, and if h is any polynomial, then hf vanishes on U, so I(U) is always an ideal of the polynomial ring k[An].
Two natural questions to ask are:
Given a subset U of An, when is U = V(I(U))?
Given a set S of polynomials, when is S = I(V(S))?
The answer to the first question is provided by introducing the Zariski topology, a topology on An whose closed sets are the algebraic sets, and which directly reflects the algebraic structure of k[An]. Then U = V(I(U)) if and only if U is an algebraic set or equivalently a Zariski-closed set. The answer to the second question is given by Hilbert's Nullstellensatz. In one of its forms, it says that I(V(S)) is the radical of the ideal generated by S. In more abstract language, there is a Galois connection, giving rise to two closure operators; they can be identified, and naturally play a basic role in the theory; the example is elaborated at Galois connection.
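A one-variable example illustrates the answer to the second question. Take S = {x²} in k[x], with k algebraically closed; then
<math display="block">
V(S)=\{0\},\qquad I(V(S))=(x)=\sqrt{(x^{2})},
</math>
so I(V(S)) is the radical of the ideal generated by S, even though the ideal (x²) itself does not contain x.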
For various reasons we may not always want to work with the entire ideal corresponding to an algebraic set U. Hilbert's basis theorem implies that ideals in k[An] are always finitely generated.
An algebraic set is called irreducible if it cannot be written as the union of two smaller algebraic sets. Any algebraic set is a finite union of irreducible algebraic sets and this decomposition is unique. Thus its elements are called the irreducible components of the algebraic set. An irreducible algebraic set is also called a variety. It turns out that an algebraic set is a variety if and only if it may be defined as the vanishing set of a prime ideal of the polynomial ring.
Some authors do not make a clear distinction between algebraic sets and varieties and use irreducible variety to make the distinction when needed.
Regular functions
Just as continuous functions are the natural maps on topological spaces and smooth functions are the natural maps on differentiable manifolds, there is a natural class of functions on an algebraic set, called regular functions or polynomial functions. A regular function on an algebraic set V contained in An is the restriction to V of a regular function on An. For an algebraic set defined on the field of the complex numbers, the regular functions are smooth and even analytic.
It may seem unnaturally restrictive to require that a regular function always extend to the ambient space, but it is very similar to the situation in a normal topological space, where the Tietze extension theorem guarantees that a continuous function on a closed subset always extends to the ambient topological space.
Just as with the regular functions on affine space, the regular functions on V form a ring, which we denote by k[V]. This ring is called the coordinate ring of V.
Since regular functions on V come from regular functions on An, there is a relationship between the coordinate rings. Specifically, if a regular function on V is the restriction of two functions f and g in k[An], then f − g is a polynomial function which is null on V and thus belongs to I(V). Thus k[V] may be identified with k[An]/I(V).
Morphism of affine varieties
Using regular functions from an affine variety to A1, we can define regular maps from one affine variety to another. First we will define a regular map from a variety into affine space: Let V be a variety contained in An. Choose m regular functions on V, and call them f1, ..., fm. We define a regular map f from V to Am by letting f(M) = (f1(M), ..., fm(M)) for every point M of V. In other words, each fi determines one coordinate of the range of f.
If V′ is a variety contained in Am, we say that f is a regular map from V to V′ if the range of f is contained in V′.
The definition of the regular maps apply also to algebraic sets.
The regular maps are also called morphisms, as they make the collection of all affine algebraic sets into a category, where the objects are the affine algebraic sets and the morphisms are the regular maps. The affine varieties form a subcategory of the category of the algebraic sets.
Given a regular map g from V to V′ and a regular function f of k[V′], then f ∘ g is a regular function in k[V]. The map f ↦ f ∘ g is a ring homomorphism from k[V′] to k[V]. Conversely, every ring homomorphism from k[V′] to k[V] defines a regular map from V to V′. This defines an equivalence of categories between the category of algebraic sets and the opposite category of the finitely generated reduced k-algebras. This equivalence is one of the starting points of scheme theory.
Rational function and birational equivalence
In contrast to the preceding sections, this section concerns only varieties and not algebraic sets. On the other hand, the definitions extend naturally to projective varieties (next section), as an affine variety and its projective completion have the same field of functions.
If V is an affine variety, its coordinate ring is an integral domain and has thus a field of fractions which is denoted k(V) and called the field of the rational functions on V or, shortly, the function field of V. Its elements are the restrictions to V of the rational functions over the affine space containing V. The domain of a rational function f is not V but the complement of the subvariety (a hypersurface) where the denominator of f vanishes.
As with regular maps, one may define a rational map from a variety V to a variety V'. Just as with regular maps, the rational maps from V to V' may be identified with the field homomorphisms from k(V') to k(V).
Two affine varieties are birationally equivalent if there are two rational functions between them which are inverse one to the other in the regions where both are defined. Equivalently, they are birationally equivalent if their function fields are isomorphic.
An affine variety is a rational variety if it is birationally equivalent to an affine space. This means that the variety admits a rational parameterization, that is, a parametrization with rational functions. For example, the circle of equation x² + y² − 1 = 0 is a rational curve, as it has the parametric equation x = (1 − t²)/(1 + t²), y = 2t/(1 + t²),
which may also be viewed as a rational map from the line to the circle.
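Substituting the standard rational parametrization x = (1 − t²)/(1 + t²), y = 2t/(1 + t²) into the equation of the circle confirms that the curve is rational:
<math display="block">
\left(\frac{1-t^{2}}{1+t^{2}}\right)^{2}+\left(\frac{2t}{1+t^{2}}\right)^{2}
=\frac{(1-t^{2})^{2}+4t^{2}}{(1+t^{2})^{2}}
=\frac{(1+t^{2})^{2}}{(1+t^{2})^{2}}=1.
</math>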
The problem of resolution of singularities is to know if every algebraic variety is birationally equivalent to a variety whose projective completion is nonsingular (see also smooth completion). It was solved in the affirmative in characteristic 0 by Heisuke Hironaka in 1964 and is yet unsolved in finite characteristic.
Projective variety
Just as the formulas for the roots of second, third, and fourth degree polynomials suggest extending real numbers to the more algebraically complete setting of the complex numbers, many properties of algebraic varieties suggest extending affine space to a more geometrically complete projective space. Whereas the complex numbers are obtained by adding the number i, a root of the polynomial x² + 1, projective space is obtained by adding in appropriate points "at infinity", points where parallel lines may meet.
To see how this might come about, consider the variety V(y − x2). If we draw it, we get a parabola. As x goes to positive infinity, the slope of the line from the origin to the point (x, x2) also goes to positive infinity. As x goes to negative infinity, the slope of the same line goes to negative infinity.
Compare this to the variety V(y − x3). This is a cubic curve. As x goes to positive infinity, the slope of the line from the origin to the point (x, x3) goes to positive infinity just as before. But unlike before, as x goes to negative infinity, the slope of the same line goes to positive infinity as well; the exact opposite of the parabola. So the behavior "at infinity" of V(y − x3) is different from the behavior "at infinity" of V(y − x2).
The consideration of the projective completion of the two curves, which is their prolongation "at infinity" in the projective plane, allows us to quantify this difference: the point at infinity of the parabola is a regular point, whose tangent is the line at infinity, while the point at infinity of the cubic curve is a cusp. Also, both curves are rational, as they are parameterized by x, and the Riemann-Roch theorem implies that the cubic curve must have a singularity, which must be at infinity, as all its points in the affine space are regular.
Thus many of the properties of algebraic varieties, including birational equivalence and all the topological properties, depend on the behavior "at infinity" and so it is natural to study the varieties in projective space. Furthermore, the introduction of projective techniques made many theorems in algebraic geometry simpler and sharper: For example, Bézout's theorem on the number of intersection points between two varieties can be stated in its sharpest form only in projective space. For these reasons, projective space plays a fundamental role in algebraic geometry.
Nowadays, the projective space Pn of dimension n is usually defined as the set of the lines passing through a point, considered as the origin, in the affine space of dimension n + 1, or equivalently as the set of the vector lines in a vector space of dimension n + 1. When a coordinate system has been chosen in the space of dimension n + 1, all the points of a line have the same set of coordinates, up to the multiplication by an element of k. This defines the homogeneous coordinates of a point of Pn as a sequence of n + 1 elements of the base field k, defined up to the multiplication by a nonzero element of k (the same for the whole sequence).
A polynomial in n + 1 variables vanishes at all points of a line passing through the origin if and only if it is homogeneous. In this case, one says that the polynomial vanishes at the corresponding point of Pn. This allows us to define a projective algebraic set in Pn as the set of points at which a finite set of homogeneous polynomials vanishes. Like for affine algebraic sets, there is a bijection between the projective algebraic sets and the reduced homogeneous ideals which define them. The projective varieties are the projective algebraic sets whose defining ideal is prime. In other words, a projective variety is a projective algebraic set whose homogeneous coordinate ring is an integral domain, the projective coordinate ring being defined as the quotient of the graded ring of the polynomials in n + 1 variables by the homogeneous (reduced) ideal defining the variety. Every projective algebraic set may be uniquely decomposed into a finite union of projective varieties.
The only regular functions which may be defined properly on a projective variety are the constant functions. Thus this notion is not used in projective situations. On the other hand, the field of the rational functions or function field is a useful notion, which, similarly to the affine case, is defined as the set of the quotients of two homogeneous elements of the same degree in the homogeneous coordinate ring.
Real algebraic geometry
Real algebraic geometry is the study of real algebraic varieties.
The fact that the field of the real numbers is an ordered field cannot be ignored in such a study. For example, the curve of equation x² + y² − a = 0 is a circle if a > 0, but has no real points if a < 0. Real algebraic geometry also investigates, more broadly, semi-algebraic sets, which are the solutions of systems of polynomial inequalities. For example, neither branch of the hyperbola of equation xy − 1 = 0 is a real algebraic variety. However, the branch in the first quadrant is a semi-algebraic set defined by xy − 1 = 0 and x > 0.
One open problem in real algebraic geometry is the following part of Hilbert's sixteenth problem: Decide which respective positions are possible for the ovals of a nonsingular plane curve of degree 8.
Computational algebraic geometry
One may date the origin of computational algebraic geometry to the meeting EUROSAM'79 (International Symposium on Symbolic and Algebraic Manipulation) held in Marseille, France, in June 1979. At this meeting,
Dennis S. Arnon showed that George E. Collins's Cylindrical algebraic decomposition (CAD) allows the computation of the topology of semi-algebraic sets,
Bruno Buchberger presented Gröbner bases and his algorithm to compute them,
Daniel Lazard presented a new algorithm for solving systems of homogeneous polynomial equations with a computational complexity which is essentially polynomial in the expected number of solutions and thus simply exponential in the number of unknowns. This algorithm is strongly related to Macaulay's multivariate resultant.
Since then, most results in this area are related to one or several of these items either by using or improving one of these algorithms, or by finding algorithms whose complexity is simply exponential in the number of the variables.
A body of mathematical theory complementary to symbolic methods called numerical algebraic geometry has been developed over the last several decades. The main computational method is homotopy continuation. This supports, for example, a model of floating point computation for solving problems of algebraic geometry.
Gröbner basis
A Gröbner basis is a system of generators of a polynomial ideal whose computation allows the deduction of many properties of the affine algebraic variety defined by the ideal.
Given an ideal I defining an algebraic set V:
V is empty (over an algebraically closed extension of the basis field), if and only if the Gröbner basis for any monomial ordering is reduced to {1}.
By means of the Hilbert series one may compute the dimension and the degree of V from any Gröbner basis of I for a monomial ordering refining the total degree.
If the dimension of V is 0, one may compute the points (finite in number) of V from any Gröbner basis of I (see Systems of polynomial equations).
A Gröbner basis computation allows one to remove from V all irreducible components which are contained in a given hypersurface.
A Gröbner basis computation allows one to compute the Zariski closure of the image of V by the projection on the first k coordinates, and the subset of the image where the projection is not proper.
More generally Gröbner basis computations allow one to compute the Zariski closure of the image and the critical points of a rational function of V into another affine variety.
Gröbner basis computations do not allow one to compute directly the primary decomposition of I nor the prime ideals defining the irreducible components of V, but most algorithms for this involve Gröbner basis computation. The algorithms which are not based on Gröbner bases use regular chains but may need Gröbner bases in some exceptional situations.
Gröbner bases are deemed to be difficult to compute. In fact they may contain, in the worst case, polynomials whose degree is doubly exponential in the number of variables and a number of polynomials which is also doubly exponential. However, this is only a worst-case complexity, and the complexity bound of Lazard's algorithm of 1979 may frequently apply. Faugère's F5 algorithm realizes this complexity, as it may be viewed as an improvement of Lazard's 1979 algorithm. It follows that the best implementations allow one to compute almost routinely with algebraic sets of degree more than 100. This means that, presently, the difficulty of computing a Gröbner basis is strongly related to the intrinsic difficulty of the problem.
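As a concrete illustration of how a Gröbner basis exposes the solutions of a zero-dimensional system (the third item in the list above), the following minimal sketch uses the SymPy computer algebra system (an illustrative choice of tool, not one prescribed by the theory) on the ideal generated by a circle and a line:
<syntaxhighlight lang="python">
# Minimal sketch: a lexicographic Groebner basis of a zero-dimensional ideal,
# computed with the open-source SymPy library (illustrative choice of tool).
from sympy import symbols, groebner

x, y = symbols('x y')

# Ideal generated by a circle and a line: x**2 + y**2 - 1 and x - y.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# With the lexicographic ordering the basis is triangular: its last element
# involves only y, so the finitely many solutions can be found by solving
# for y and back-substituting into the earlier elements.
print(G)
</syntaxhighlight>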
Cylindrical algebraic decomposition (CAD)
CAD is an algorithm which was introduced in 1973 by G. Collins to implement with an acceptable complexity the Tarski–Seidenberg theorem on quantifier elimination over the real numbers.
This theorem concerns the formulas of the first-order logic whose atomic formulas are polynomial equalities or inequalities between polynomials with real coefficients. These formulas are thus the formulas which may be constructed from the atomic formulas by the logical operators and (∧), or (∨), not (¬), for all (∀) and exists (∃). Tarski's theorem asserts that, from such a formula, one may compute an equivalent formula without quantifier (∀, ∃).
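A standard one-variable example, given here for illustration rather than taken from Collins's work, shows what eliminating a quantifier looks like: over the real numbers,
<math display="block">
\exists x\,(x^{2}+bx+c=0)\;\Longleftrightarrow\;b^{2}-4c\ge 0.
</math>
CAD provides a general procedure for producing such quantifier-free equivalents, at the computational cost described below.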
The complexity of CAD is doubly exponential in the number of variables. This means that CAD allows one, in theory, to solve every problem of real algebraic geometry which may be expressed by such a formula, that is, almost every problem concerning explicitly given varieties and semi-algebraic sets.
While Gröbner basis computation has doubly exponential complexity only in rare cases, CAD almost always has this high complexity. This implies that, unless most polynomials appearing in the input are linear, it may not solve problems with more than four variables.
Since 1973, most of the research on this subject has been devoted either to improving CAD or to finding alternative algorithms in special cases of general interest.
As an example of the state of the art, there are efficient algorithms to find at least one point in every connected component of a semi-algebraic set, and thus to test whether a semi-algebraic set is empty. On the other hand, CAD remains, in practice, the best algorithm for counting the number of connected components.
Asymptotic complexity vs. practical efficiency
The basic general algorithms of computational algebraic geometry have a doubly exponential worst-case complexity. More precisely, if d is the maximal degree of the input polynomials and n the number of variables, their complexity is at most d^(2^(cn)) for some constant c, and, for some inputs, the complexity is at least d^(2^(c′n)) for another constant c′.
During the last 20 years of the 20th century, various algorithms were introduced to solve specific subproblems with a better complexity. Most of these algorithms have a complexity of d^(O(n²)).
Among these algorithms, which solve a subproblem of the problems solved by Gröbner bases, one may cite testing whether an affine variety is empty and solving nonhomogeneous polynomial systems which have a finite number of solutions. Such algorithms are rarely implemented because, on most inputs, Faugère's F4 and F5 algorithms have a better practical efficiency and probably a similar or better complexity (probably because the evaluation of the complexity of Gröbner basis algorithms on a particular class of inputs is a difficult task which has been done only in a few special cases).
The main algorithms of real algebraic geometry which solve a problem solved by CAD are related to the topology of semi-algebraic sets. One may cite counting the number of connected components, testing whether two points are in the same component, or computing a Whitney stratification of a real algebraic set. They have a complexity of , but the constant involved in the O notation is so high that using them to solve any nontrivial problem effectively solved by CAD is impossible even if one could use all the existing computing power in the world. Therefore, these algorithms have never been implemented, and finding algorithms that combine good asymptotic complexity with good practical efficiency is an active area of research.
Abstract modern viewpoint
The modern approaches to algebraic geometry redefine and effectively extend the range of basic objects in various levels of generality to schemes, formal schemes, ind-schemes, algebraic spaces, algebraic stacks and so on. The need for this arises already from the useful ideas within theory of varieties, e.g. the formal functions of Zariski can be accommodated by introducing nilpotent elements in structure rings; considering spaces of loops and arcs, constructing quotients by group actions and developing formal grounds for natural intersection theory and deformation theory lead to some of the further extensions.
Most remarkably, in the early 1960s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra which are locally ringed spaces which form a category which is antiequivalent to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k, and the category of finitely generated reduced k-algebras. The gluing is along Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set theoretic sense is then replaced by a Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc; nowadays some other examples became prominent including Nisnevich topology. Sheaves can be furthermore generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks.
Sometimes other algebraic sites replace the category of affine schemes. For example, Nikolai Durov has introduced commutative algebraic monads as a generalization of local objects in a generalized algebraic geometry. Versions of a tropical geometry, of an absolute geometry over a field of one element and an algebraic analogue of Arakelov's geometry were realized in this setup.
Another formal generalization is possible to universal algebraic geometry in which every variety of algebras has its own algebraic geometry. The term variety of algebras should not be confused with algebraic variety.
The language of schemes, stacks and generalizations has proved to be a valuable way of dealing with geometric concepts and became cornerstones of modern algebraic geometry.
Algebraic stacks can be further generalized, and for many practical questions like deformation theory and intersection theory, this is often the most natural approach. One can extend the Grothendieck site of affine schemes to a higher categorical site of derived affine schemes, by replacing the commutative rings with an infinity category of differential graded commutative algebras, or of simplicial commutative rings, or a similar category with an appropriate variant of a Grothendieck topology. One can also replace presheaves of sets by presheaves of simplicial sets (or of infinity groupoids). Then, in the presence of an appropriate homotopic machinery, one can develop a notion of derived stack as such a presheaf on the infinity category of derived affine schemes which satisfies a certain infinite categorical version of the sheaf axiom (and, to be algebraic, inductively a sequence of representability conditions). Quillen model categories, Segal categories and quasicategories are some of the tools most often used to formalize this, yielding derived algebraic geometry, introduced by the school of Carlos Simpson, including André Hirschowitz, Bertrand Toën, Gabriele Vezzosi, Michel Vaquié and others, and developed further by Jacob Lurie, Bertrand Toën, and Gabriele Vezzosi. Another (noncommutative) version of derived algebraic geometry, using A-infinity categories, has been developed from the early 1990s by Maxim Kontsevich and followers.
History
Before the 16th century
Some of the roots of algebraic geometry date back to the work of the Hellenistic Greeks from the 5th century BC. The Delian problem, for instance, was to construct a length x so that the cube of side x contained the same volume as the rectangular box a2b for given sides a and b. Menaechmus considered the problem geometrically by intersecting the pair of plane conics ay = x2 and xy = ab. In the 3rd century BC, Archimedes and Apollonius systematically studied additional problems on conic sections using coordinates. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding ordinates using geometric methods involving parabolas and other curves. Medieval mathematicians, including Omar Khayyam, Leonardo of Pisa, Gersonides and Nicole Oresme, solved certain cubic and quadratic equations by purely algebraic means and then interpreted the results geometrically. The Persian mathematician Omar Khayyám (born 1048 AD) believed that there was a relationship between arithmetic, algebra and geometry. This view has been questioned by Jeffrey Oaks, who argues that the study of curves by means of equations originated with Descartes in the seventeenth century.
Renaissance
Such techniques of applying geometrical constructions to algebraic problems were also adopted by a number of Renaissance mathematicians such as Gerolamo Cardano and Niccolò Fontana "Tartaglia" on their studies of the cubic equation. The geometrical approach to construction problems, rather than the algebraic one, was favored by most 16th and 17th century mathematicians, notably Blaise Pascal who argued against the use of algebraic and analytical methods in geometry. The French mathematicians Franciscus Vieta and later René Descartes and Pierre de Fermat revolutionized the conventional way of thinking about construction problems through the introduction of coordinate geometry. They were interested primarily in the properties of algebraic curves, such as those defined by Diophantine equations (in the case of Fermat), and the algebraic reformulation of the classical Greek works on conics and cubics (in the case of Descartes).
During the same period, Blaise Pascal and Gérard Desargues approached geometry from a different perspective, developing the synthetic notions of projective geometry. Pascal and Desargues also studied curves, but from the purely geometrical point of view: the analog of the Greek ruler and compass construction. Ultimately, the analytic geometry of Descartes and Fermat won out, for it supplied the 18th century mathematicians with concrete quantitative tools needed to study physical problems using the new calculus of Newton and Leibniz. However, by the end of the 18th century, most of the algebraic character of coordinate geometry was subsumed by the calculus of infinitesimals of Lagrange and Euler.
19th and early 20th century
It took the simultaneous 19th century developments of non-Euclidean geometry and Abelian integrals in order to bring the old algebraic ideas back into the geometrical fold. The first of these new developments was seized upon by Edmond Laguerre and Arthur Cayley, who attempted to ascertain the generalized metric properties of projective space. Cayley introduced the idea of homogeneous polynomial forms, and more specifically quadratic forms, on projective space. Subsequently, Felix Klein studied projective geometry (along with other types of geometry) from the viewpoint that the geometry on a space is encoded in a certain class of transformations on the space. By the end of the 19th century, projective geometers were studying more general kinds of transformations on figures in projective space. Rather than the projective linear transformations which were normally regarded as giving the fundamental Kleinian geometry on projective space, they concerned themselves also with the higher degree birational transformations. This weaker notion of congruence would later lead members of the 20th century Italian school of algebraic geometry to classify algebraic surfaces up to birational isomorphism.
The second early 19th century development, that of Abelian integrals, would lead Bernhard Riemann to the development of Riemann surfaces.
In the same period began the algebraization of the algebraic geometry through commutative algebra. The prominent results in this direction are Hilbert's basis theorem and Hilbert's Nullstellensatz, which are the basis of the connection between algebraic geometry and commutative algebra, and Macaulay's multivariate resultant, which is the basis of elimination theory. Probably because of the size of the computation which is implied by multivariate resultants, elimination theory was forgotten during the middle of the 20th century until it was renewed by singularity theory and computational algebraic geometry.
20th century
B. L. van der Waerden, Oscar Zariski and André Weil developed a foundation for algebraic geometry based on contemporary commutative algebra, including valuation theory and the theory of ideals. One of the goals was to give a rigorous framework for proving the results of the Italian school of algebraic geometry. In particular, this school used systematically the notion of generic point without any precise definition, which was first given by these authors during the 1930s.
In the 1950s and 1960s, Jean-Pierre Serre and Alexander Grothendieck recast the foundations making use of sheaf theory. Later, from about 1960, and largely led by Grothendieck, the idea of schemes was worked out, in conjunction with a very refined apparatus of homological techniques. After a decade of rapid development the field stabilized in the 1970s, and new applications were made, both to number theory and to more classical geometric questions on algebraic varieties, singularities, moduli, and formal moduli.
An important class of varieties, not easily understood directly from their defining equations, are the abelian varieties, which are the projective varieties whose points form an abelian group. The prototypical examples are the elliptic curves, which have a rich theory. They were instrumental in the proof of Fermat's Last Theorem and are also used in elliptic-curve cryptography.
In parallel with the abstract trend of the algebraic geometry, which is concerned with general statements about varieties, methods for effective computation with concretely given varieties have also been developed, which led to the new area of computational algebraic geometry. One of the founding methods of this area is the theory of Gröbner bases, introduced by Bruno Buchberger in 1965. Another founding method, more specially devoted to real algebraic geometry, is the cylindrical algebraic decomposition, introduced by George E. Collins in 1973.
See also: derived algebraic geometry.
Analytic geometry
An analytic variety over the field of real or complex numbers is defined locally as the set of common solutions of several equations involving analytic functions. It is analogous to the concept of algebraic variety in that it carries a structure sheaf of analytic functions instead of regular functions. Any complex manifold is a complex analytic variety. Since analytic varieties may have singular points, not all complex analytic varieties are manifolds. Over a non-archimedean field analytic geometry is studied via rigid analytic spaces.
Modern analytic geometry over the field of complex numbers is closely related to complex algebraic geometry, as has been shown by Jean-Pierre Serre in his paper GAGA, whose name comes from the French Géométrie algébrique et géométrie analytique ("Algebraic geometry and analytic geometry"). The GAGA results over the field of complex numbers may be extended to rigid analytic spaces over non-archimedean fields.
Applications
Algebraic geometry now finds applications in statistics, control theory, robotics, error-correcting codes, phylogenetics and geometric modelling. There are also connections to string theory, game theory, graph matchings, solitons and integer programming.
See also
Glossary of classical algebraic geometry
Important publications in algebraic geometry
List of algebraic surfaces
Noncommutative algebraic geometry
Notes
References
Sources
Further reading
Some classic textbooks that predate schemes
Modern textbooks that do not use the language of schemes
Textbooks in computational algebraic geometry
Textbooks and references for schemes
External links
Foundations of Algebraic Geometry by Ravi Vakil, 808 pp.
Algebraic geometry entry on PlanetMath
English translation of the van der Waerden textbook
The Stacks Project, an open source textbook and reference work on algebraic stacks and algebraic geometry
Adjectives Project, an online database for searching examples of schemes and morphisms based on their properties | Algebraic geometry | [
"Mathematics"
] | 7,744 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
2,011 | https://en.wikipedia.org/wiki/Comparison%20of%20American%20and%20British%20English | The English language was introduced to the Americas by the arrival of the British, beginning in the late 16th and early 17th centuries. The language also spread to numerous other parts of the world as a result of British trade and settlement and the spread of the former British Empire, which, by 1921, included 470–570 million people, about a quarter of the world's population. In England, Wales, Ireland and especially parts of Scotland there are differing varieties of the English language, so the term 'British English' is an oversimplification. Likewise, spoken American English varies widely across the country. Written forms of British and American English as found in newspapers and textbooks vary little in their essential features, with only occasional noticeable differences.
Over the past 400 years, the forms of the language used in the Americas—especially in the United States—and that used in the United Kingdom have diverged in a few minor ways, leading to the versions now often referred to as American English and British English. Differences between the two include pronunciation, grammar, vocabulary (lexis), spelling, punctuation, idioms, and formatting of dates and numbers. However, the differences in written and most spoken grammar structure tend to be much fewer than in other aspects of the language in terms of mutual intelligibility. A few words have completely different meanings in the two versions or are even unknown or not used in one of the versions. One particular contribution towards integrating these differences came from Noah Webster, who wrote the first American dictionary (published 1828) with the intention of showing that people in the United States spoke a different dialect from those spoken in the UK, much like a regional accent.
This divergence between American English and British English has provided opportunities for humorous comment: George Bernard Shaw is said to have remarked that the United States and United Kingdom are "two countries divided by a common language", and Oscar Wilde wrote that "We have really everything in common with America nowadays, except, of course, the language" (The Canterville Ghost, 1888). Henry Sweet incorrectly predicted in 1877 that within a century American English, Australian English and British English would be mutually unintelligible (A Handbook of Phonetics). Perhaps increased worldwide communication through radio, television, and the Internet has tended to reduce regional variation. This can lead to some variations becoming extinct (for instance, the wireless being progressively superseded by the radio) or to the acceptance of wide variations as "perfectly good English" everywhere.
Although spoken American and British English are generally mutually intelligible, there are occasional differences which may cause embarrassment—for example, in American English a rubber is usually interpreted as a condom rather than an eraser.
Word derivation and compounds
Directional suffix -ward(s): British forwards, towards, rightwards, etc.; American forward, toward, rightward. In both varieties distribution varies somewhat: afterwards, towards, and backwards are not unusual in America; while in the United Kingdom upward and rightward are the more common options, as is forward, which is standard in phrasal verbs such as look forward to. The forms with -s may be used as adverbs (or preposition towards) but rarely as adjectives: in the UK, as in America, one says "an upward motion". The Oxford English Dictionary in 1897 suggested a semantic distinction for adverbs, with -wards having a more definite directional sense than -ward; subsequent authorities such as Fowler have disputed this contention.
American English (AmE) freely adds the suffix -s to day, night, evening, weekend, Monday, etc. to form adverbs denoting repeated or customary action: I used to stay out evenings; the library is closed on Saturdays. This usage has its roots in Old English but many of these constructions are now regarded as American (for example, the OED labels nights "now chiefly N. Amer. colloq." in constructions such as to sleep nights, but to work nights is standard in British English).
In British English (BrE), the agentive -er suffix is commonly attached to football to refer to one who plays the sport (also cricket; often netball; occasionally basketball and volleyball). AmE usually uses football player. Where the sport's name is usable as a verb, the suffixation is standard in both varieties: for example, golfer, bowler (in ten-pin bowling and in lawn bowls), and shooter. AmE appears sometimes to use the form baller as slang for a basketball player, as in the video game NBA Ballers. However, this is derived from slang use of to ball as a verb meaning to play basketball.
English writers everywhere occasionally make new compound words from common phrases; for example, health care is now being replaced by healthcare on both sides of the Atlantic. However, AmE has made certain words in this fashion that are still treated as phrases in BrE.
In compound nouns of the form <verb><noun>, sometimes AmE prefers the bare infinitive where BrE prefers the gerund. Examples include (AmE first): jump rope/skipping rope; racecar/racing car; rowboat/rowing boat; sailboat/sailing boat; file cabinet/filing cabinet; dial tone/dialling tone; drainboard/draining board.
Generally AmE has a tendency to drop inflectional suffixes, thus preferring clipped forms: compare cookbook v. cookery book; Smith, age 40 v. Smith, aged 40; skim milk v. skimmed milk; dollhouse v. dolls' house; barber shop v. barber's shop.
Singular attributives in one country may be plural in the other, and vice versa. For example, the UK has a drugs problem, while the United States has a drug problem (although the singular usage is also commonly heard in the UK); Americans read the sports section of a newspaper; the British are more likely to read the sport section. However, BrE maths is singular, like physics, just as AmE math is: both are abbreviations of mathematics.
Some British English words derive from French roots, while American English takes the corresponding words from other languages; for example, AmE eggplant and zucchini (the latter from Italian) are aubergine and courgette (both from French) in BrE.
Similarly, American English has occasionally replaced more traditional English words with their Spanish counterparts. This is especially common in regions historically affected by Spanish settlement (such as the American Southwest and Florida) as well as other areas that have since experienced strong Hispanic migration (such as urban areas). Examples of these include grocery markets' preference in the U.S. for Spanish names such as cilantro and manzanilla over coriander and camomile respectively.
Pronunciation
Several pronunciation patterns contrast American and British English accents. The following lists a few common ones.
Most American accents are rhotic, preserving the historical /r/ sound in all contexts, while most British accents of England and Wales are non-rhotic, preserving the sound only before vowels and dropping it in all other contexts; thus, farmer rhymes with llama for most British speakers but not for Americans. American accents tend to raise and tense the trap vowel /æ/ when it occurs before nasal consonants. British accents keep the vowel sounds of lot, palm and thought distinct, while American accents merge the lot and palm vowels, and about 50% of Americans additionally merge the thought vowel with the previous two, so that, for example, odd, façade, and thawed can all rhyme. Many regional and informal accents of England, but none in North America, exhibit H-dropping. Words like bitter and bidder are pronounced the same in North America, but not in England, owing to a phenomenon called flapping that affects /t/ and /d/ between vowels. British accents pronounce /t/ between vowels in other ways, including with a glottal stop or with an aspirated [t].
Vocabulary
The familiarity of speakers with words and phrases from different regions varies, and the difficulty of discerning an unfamiliar definition also depends on the context and the term. As expressions spread with telecommunications, they are often but not always understood as foreign to the speaker's dialect, and words from other dialects may carry connotations with regard to register, social status, origin, and intelligence.
Words and phrases with different meanings
Words such as bill and biscuit are used regularly in both AmE and BrE but can mean different things in each form. The word "bill" has several meanings, most of which are shared between AmE and BrE. However, in AmE "bill" often refers to a piece of paper money (as in a "dollar bill"), which in BrE is more commonly referred to as a note. In AmE it can also refer to the visor of a cap, though this is by no means common. In AmE a biscuit (ultimately from the French for "twice baked", as in the Italian biscotto) is a soft, bready product closest to what is known in BrE as a scone. A BrE biscuit, by contrast, is hard and usually sweet, and the term covers both dessert biscuits and AmE cookies (from the Dutch for "little cake").
As chronicled by Winston Churchill, the opposite meanings of the verb to table created a misunderstanding during a meeting of the Allied forces; in BrE to table an item on an agenda means to open it up for discussion whereas in AmE, it means to remove it from discussion, or at times, to suspend or delay discussion; e.g. Let's table that topic for later. Similarly, the word moot (and moot point) in BrE means 'remains open to debate' whereas in AmE, it means 'of no practical significance', irrelevant.
The word "football" in BrE refers to association football, also known in the US as soccer. In AmE, "football" means American football. The standard AmE term "soccer", a contraction of "association (football)", is actually of British origin, derived from the ratification of different codes of football in the 19th century, and was a fairly unremarkable usage (possibly marked for class) in BrE until later; in Britain it became perceived as an Americanism. In non-American and non-Canadian contexts, particularly in sports news from outside the United States and Canada, American (or US branches of foreign) news agencies and media companies also use "football" to mean "soccer", especially in direct quotes.
Similarly, the word "hockey" in BrE often refers to field hockey and in AmE, "hockey" usually means ice hockey.
Words with completely different meanings are relatively few; most of the time there are either (1) words with one or more shared meanings and one or more meanings unique to one variety (for example, bathroom and toilet) or (2) words the meanings of which are actually common to both BrE and AmE but that show differences in frequency, connotation or denotation (for example, smart, clever, mad).
Some differences in usage and meaning can cause confusion or embarrassment. For example, the word fanny is a slang word for vulva in BrE but means buttocks in AmE, hence the AmE phrase fanny pack is bum bag in BrE. In AmE the word pissed means being annoyed or angry whereas in BrE it is a coarse word for being drunk (in both varieties, pissed off means irritated).
Similarly, in AmE the word pants is the common word for the BrE trousers and (in AmE) knickers refers to a variety of half-length trousers (though most AmE users would use the term "shorts" rather than knickers), while the majority of BrE speakers would understand pants to mean underpants and knickers to mean female underpants.
Sometimes the confusion is more subtle. In AmE the word quite used as a qualifier is generally a reinforcement, though it is somewhat uncommon in actual colloquial American use today and carries an air of formality: for example, "I'm quite hungry" is a very polite way to say "I'm very hungry". In BrE quite (which is much more common in conversation) may have this meaning, as in "quite right" or "quite mad", but it more commonly means "somewhat", so that in BrE "I'm quite hungry" can mean "I'm somewhat hungry". This divergence of use can lead to misunderstanding.
Different terms in different dialects
Most speakers of American English are aware of some uniquely British terms. It is generally very easy to guess what some words, such as BrE "driving licence", mean, the AmE equivalent being "driver's license". However, many other British words, such as naff (slang but commonly used to mean "not very good"), are unheard of in American English.
Speakers of BrE usually find it easy to understand most common AmE terms, such as "sidewalk (pavement or footpath)", "gas (gasoline/petrol)", "counterclockwise (anticlockwise)" or "elevator (lift)", thanks in large part to considerable exposure to American popular culture and literature. Terms heard less often, especially when rare or absent in American popular culture, such as "copacetic (very satisfactory)", are unlikely to be understood by most BrE speakers.
Other examples:
In the UK the word whilst is commonly used as a conjunction (as an alternative to while, and especially prevalent in some dialects). Whilst tends to appear in non-temporal senses, as when used to point out a contrast. In AmE while is used in both contexts, and whilst is much less common. Other words with the -st ending are also found even in AmE as much as in BrE, despite being old-fashioned or an affectation (e.g., unbeknownst, midst). Historically, the word against falls into this category as well, and is standard in both varieties.
In the UK generally the use of fall to mean "autumn" is obsolete. Although found often from Elizabethan literature to Victorian literature, the seasonal use of fall remains easily understandable to BrE speakers only because it is so commonly used that way in the U.S.
In the UK the term period for a full stop is not used; in AmE the term full stop is rarely, if ever, used for the punctuation mark and is often not understood at all. For example, British Prime Minister Tony Blair said, "Terrorism is wrong, full stop", whereas in AmE the equivalent sentence is "Terrorism is wrong, period." The use of period as an interjection meaning "and nothing else; end of discussion" is beginning to be used in colloquial British English, though sometimes without conscious reference to punctuation.
In the US, the word line is used to refer to a line of people, vehicles, or other objects, while in the UK queue refers to that meaning. In the US, the word queue is most commonly used to refer to the computing sense of a data structure in which objects are added to one end and removed from the other. In the US, the equivalent terms to "queue up" and "wait in queue" are "line up" or "get in line" and "wait in line." The equivalent term to "jumping the queue" is "cutting in line."
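For readers less familiar with the computing sense of queue mentioned above, a minimal Python sketch (not part of the original article; the variable names are illustrative, and collections.deque is a standard-library type) shows the first-in, first-out behaviour the term describes:

from collections import deque

# A queue in the computing sense: items join at the back and leave from the front.
checkout = deque()
checkout.append("first customer")    # enqueue (BrE: join the queue; AmE: get in line)
checkout.append("second customer")
served = checkout.popleft()          # dequeue: the item that has waited longest leaves first
print(served)                        # -> first customer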
Holiday greetings
It is increasingly common for Americans to say "Happy holidays", referring to all, or at least multiple, winter (in the Northern hemisphere) or summer (in the Southern hemisphere) holidays (Christmas, Hanukkah, Kwanzaa, etc.) especially when one's religious observances are not known; the phrase is rarely heard in the UK. In the UK, the phrases "holiday season" and "holiday period" refer to the period in the summer when most people take time off from work, and travel; AmE does not use holiday in this sense, instead using vacation for recreational excursions.
In AmE, the prevailing Christmas greeting is "Merry Christmas", which is the traditional English Christmas greeting, as found in the English Christmas carol "We Wish You a Merry Christmas", and which appears several times in Charles Dickens' A Christmas Carol. In BrE, "Happy Christmas" is a common alternative to "Merry Christmas".
Idiosyncratic differences
Omission of "and" and "on"
Generally in British English, numbers with a value over one hundred have the word "and" inserted before the last two digits. For example, the number 115, when written in words or spoken aloud, would be "one hundred and fifteen" in British English. In American English numbers are typically said or written in words in the same way; however, the word "and" may also be omitted ("one hundred fifteen"), which is considered acceptable in AmE but would be considered grammatically incorrect in BrE.
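As an illustrative sketch only (not drawn from any style guide; the function name and its restriction to numbers below one thousand are assumptions made for brevity), the difference can be expressed in a few lines of Python:

def number_to_words(n, british=True):
    """Spell out 0-999, inserting "and" before the final tens/units in British style."""
    units = ["zero", "one", "two", "three", "four", "five", "six", "seven",
             "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
             "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
    tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
            "eighty", "ninety"]

    def below_hundred(m):
        if m < 20:
            return units[m]
        word = tens[m // 10]
        return word + "-" + units[m % 10] if m % 10 else word

    if n < 100:
        return below_hundred(n)
    hundreds, rest = divmod(n, 100)
    word = units[hundreds] + " hundred"
    if rest:
        word += (" and " if british else " ") + below_hundred(rest)
    return word

print(number_to_words(115, british=True))   # one hundred and fifteen
print(number_to_words(115, british=False))  # one hundred fifteen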
Likewise, in the US, the word "on" can be left out when referring to events occurring on any particular day of the week. The US construction "The Cowboys won the game Sunday" would have as its UK equivalent "Sheffield United won the match on Sunday."
Figures of speech
Both BrE and AmE use the expression "I couldn't care less", to mean that the speaker does not care at all. Some Americans use "I could care less" to mean the same thing. This variant is frequently derided as sloppy, as the literal meaning of the words is that the speaker does care to some extent.
In both areas, saying, "I don't mind" often means, "I'm not annoyed" (for example, by someone's smoking), while "I don't care" often means, "The matter is trivial or boring". However, in answering a question such as "Tea or coffee?", if either alternative is equally acceptable an American may answer, "I don't care", while a British person may answer, "I don't mind". Either can sound odd, confusing, or rude, to those accustomed to the other variant.
"To be all set in both BrE and AmE can mean "to be prepared or ready", though it appears to be more common in AmE. It can also have an additional meaning in AmE of "to be finished or done", for example, a customer at a restaurant telling a waiter "I'm all set. I'll take the check."
Equivalent idioms
A number of English idioms that have essentially the same meaning show lexical differences between the British and the American version; for instance, BrE sweep under the carpet corresponds to AmE sweep under the rug (in the US, a "carpet" typically refers to a fitted carpet rather than a rug), BrE touch wood to AmE knock on wood, and BrE a storm in a teacup to AmE a tempest in a teapot.
Social and cultural differences
The following lexical items reflect the two countries' separate social and cultural development.
Education
Primary and secondary school
The US has a more uniform nationwide system of terms than the UK, where terminology and structure vary among the constituent countries; even so, the division by grades in the US varies somewhat among the states and even among local school districts. For example, elementary school often includes kindergarten and may include sixth grade, with middle school including only two grades or extending to ninth grade.
In the UK, the US equivalent of a high school is often referred to as a "secondary school" regardless of whether it is state funded or private. US secondary education also includes middle school or junior high school, a two- or three-year transitional school between elementary school and high school. "Middle school" is sometimes used in the UK as a synonym for the younger junior school, covering the second half of the primary curriculum, currently years four to six in some areas. However, in Dorset (South England), it is used to describe the second school in the three-tier system, which normally runs from year 5 to year 8. In other regions, such as Evesham and the surrounding area in Worcestershire, the second tier runs from year 6 to year 8; in both systems pupils start secondary school in year 9. In Kirklees, West Yorkshire, in the villages of the Dearne Valley, there is a three-tier system: first schools from reception to year 5, middle school (Scissett/Kirkburton Middle School) from year 6 to year 8, and high school from year 9 to year 13.
A public school has opposite meanings in the two countries. In American English this is a government-owned institution open to all students, supported by public funding. The British English use of the term is in the context of "private" education: to be educated privately with a tutor. In England and Wales the term strictly refers to an ill-defined group of prestigious private independent schools funded by students' fees, although it is often more loosely used to refer to any independent school. Independent schools are also known as "private schools", and the latter is the term used in Scotland and Northern Ireland for all such fee-funded schools. Strictly, the term public school is not used in Scotland and Northern Ireland in the same sense as in England, but nevertheless Gordonstoun, the Scottish private school, is sometimes referred to as a public school, as are some other Scottish private schools. Government-funded schools in Scotland and Northern Ireland are properly referred to as "state schools" but are sometimes confusingly referred to as "public schools" (with the same meaning as in the US), and in the US, where most public schools are administered by local governments, a state school typically refers to a college or university run by one of the U.S. states.
Speakers in both the United States and the United Kingdom use several additional terms for specific types of secondary school. A US prep school or preparatory school is an independent school funded by tuition fees; the same term is used in the UK for a private school for pupils under 13, designed to prepare them for fee-paying public schools. In the US, Catholic schools cover costs through tuition and have affiliations with a religious institution, most often a Catholic church or diocese. In England, where the state-funded education system grew from parish schools arranged by the local established church, the Church of England (C of E, or CE), and many schools, especially primary schools (up to age 11) retain a church connection and are known as church schools, CE schools or CE (aided) schools. There are also faith schools associated with the Roman Catholic Church and other major faiths, with a mixture of funding arrangements. In Scotland, Catholic schools are generally operated as government-funded state schools for Catholic communities, particularly in large cities such as Glasgow.
In the US, a magnet school receives government funding and has special admission requirements: in some cases pupils gain admission through superior performance on admission tests, while other magnet schools admit students through a lottery. The UK has city academies, which are independent privately sponsored schools run with public funding and which can select up to 10% of pupils by aptitude. Moreover, in the UK 36 local education authorities retain selection by ability at 11. They maintain grammar schools (state funded secondary schools), which admit pupils according to performance in an examination (known as the 11+) and comprehensive schools that take pupils of all abilities. Grammar schools select the most academically able 10% to 23% of those who sit the exam. Students who fail the exam go to a secondary modern school, sometimes called a "high school", or increasingly an "academy". In areas where there are no grammar schools the comprehensives likewise may term themselves high schools or academies. Nationally only 6% of pupils attend grammar schools, mainly in four distinct counties. Some private schools are called "grammar schools", chiefly those that were grammar schools long before the advent of state education.
University
In the UK a university student is said to "study", to "read" or, informally, simply to "do" a subject. In the recent past the expression 'to read a subject' was more common at the older universities such as Oxford and Cambridge. In the US a student studies or majors in a subject (although a student's major, concentration or, less commonly, emphasis is also used in US colleges or universities to refer to the major subject of study). To major in something refers to the student's principal course of study; to study may refer to any class being taken.
At university level in BrE, each module is taught or facilitated by a lecturer or tutor; professor is the job-title of a senior academic (in AmE, at some universities, the equivalent of the BrE lecturer is instructor, especially when the teacher has a lesser degree or no university degree, though the usage may become confusing according to whether the subject being taught is considered technical or not; it is also different from adjunct instructor/professor). In AmE each class is generally taught by a professor (although some US tertiary educational institutions follow the BrE usage), while the position of lecturer is occasionally given to individuals hired on a temporary basis to teach one or more classes and who may or may not have a doctoral degree.
The word course in American use typically refers to the study of a restricted topic or individual subject (for example, "a course in Early Medieval England", "a course in integral calculus") over a limited period of time (such as a semester or term) and is equivalent to a module or sometimes unit at a British university. In the UK, a course of study or simply course is likely to refer to the entire curriculum, which may extend over several years and be made up of any number of modules, hence it is also practically synonymous to a degree programme. A few university-specific exceptions exist: for example, at Cambridge the word paper is used to refer to a module, while the whole course of study is called tripos.
A dissertation in AmE refers to the final written product of a doctoral student to meet the requirement of that curriculum. In BrE, the same word refers to the final written product of a student in an undergraduate or taught master's programme. A dissertation in the AmE sense would be a thesis in BrE, though dissertation is also used.
Another source of confusion is the different usage of the word college. (See a full international discussion of the various meanings at college.) In the US, it refers to a post-high school institution that grants either associate's or bachelor's degrees, and in the UK, it refers to any post-secondary institution that is not a university (including sixth form college after the name in secondary education for years 12 and 13, the sixth form) where intermediary courses such as A levels or NVQs can be taken and GCSE courses can be retaken. College may sometimes be used in the UK or in Commonwealth countries as part of the name of a secondary or high school (for example, Dubai College). In the case of the universities of Oxford, Cambridge, Aberdeen, London, Lancaster, Durham, Kent and York, all members are also members of a college which is part of the university, for example, one is a member of King's College, Cambridge and hence of the university.
In both the US and UK college can refer to some division within a university that comprises related academic departments such as the "college of business and economics" though in the UK "faculty" is more often used. Institutions in the US that offer two to four years of post-high school education often have the word college as part of their name, while those offering more advanced degrees are called a university. (There are exceptions: Boston College, Dartmouth College and the College of William & Mary are examples of colleges that offer advanced degrees, while Vincennes University is an unusual example of a "university" that offers only associate degrees in the vast majority of its academic programmes). American students who pursue a bachelor's degree (four years of higher education) or an associate degree (two years of higher education) are college students regardless of whether they attend a college or a university and refer to their educational institutions informally as colleges. A student who pursues a master's degree or a doctorate degree in the arts and sciences is in AmE a graduate student; in BrE a postgraduate student although graduate student is also sometimes used. Students of advanced professional programmes are known by their field (business student, law student, medical student). Some universities also have a residential college system, the details of which may vary but generally involve common living and dining spaces as well as college-planned activities. Nonetheless, when it comes to the level of education, AmE generally uses the word college (e.g., going to college) whereas BrE generally uses the word university (e.g., going to university) regardless of the institution's official designation/status in both countries.
In the context of higher education, the word school is used slightly differently in BrE and AmE. In BrE, except for the University of London, the word school is used to refer to an academic department in a university. In AmE, the word school is used to refer to a collection of related academic departments and is headed by a dean. When it refers to a division of a university, school is practically synonymous to a college.
"Professor" has different meanings in BrE and AmE. In BrE it is the highest academic rank, followed by reader, senior lecturer and lecturer. In AmE "professor" refers to academic staff of all ranks, with (full) professor (largely equivalent to the UK meaning) followed by associate professor and assistant professor.
"Tuition" has traditionally had separate meaning in each variation. In BrE it is the educational content transferred from teacher to student at a university. In AmE it is the money (the fees) paid to receive that education (BrE: tuition fees).
General terms
In both the US and the UK, a student takes an exam, but in BrE a student can also be said to sit an exam. When preparing for an exam students revise (BrE)/review (AmE) what they have studied; the BrE idiom to revise for has the equivalent to review for in AmE.
Examinations are supervised by invigilators in the UK and proctors (or (exam) supervisors) in the US (a proctor in the UK is an official responsible for student discipline at the University of Oxford or Cambridge). In the UK a teacher first sets and then administers an exam, while in the US, a teacher first writes, makes, prepares, etc. and then gives an exam. With the same basic meaning of the latter idea but with a more formal or official connotation, a teacher in the US may also administer or proctor an exam.
In BrE, students are awarded marks as credit for requirements (e.g., tests, projects) while in AmE, students are awarded points or "grades" for the same. Similarly, in BrE, a candidate's work is being marked, while in AmE it is said to be graded to determine what mark or grade is given.
There is additionally a difference between American and British usage in the word school. In British usage "school" by itself refers only to primary (elementary) and secondary (high) schools and to sixth forms attached to secondary schools—if one "goes to school", this type of institution is implied. By contrast an American student at a university may be "in/at school", "coming/going to school", etc. US and British law students and medical students both commonly speak in terms of going to "law school" and "med[ical] school", respectively. However, the word school is used in BrE in the context of higher education to describe a division grouping together several related subjects within a university, for example a "School of European Languages" containing departments for each language and also in the term "art school". It is also the name of some of the constituent colleges of the University of London, for example, School of Oriental and African Studies, London School of Economics.
Among high-school and college students in the United States, the words freshman (or the gender-neutral terms first year or sometimes freshie), sophomore, junior and senior refer to the first, second, third and fourth years respectively. It is important that the context of either high school or college first be established or else it must be stated directly (that is, She is a high-school freshman. He is a college junior.). Many institutions in both countries also use the term first-year as a gender-neutral replacement for freshman, although in the US this is recent usage, formerly referring only to those in the first year as a graduate student. One exception is the University of Virginia; since its founding in 1819 the terms "first-year", "second-year", "third-year", and "fourth-year" have been used to describe undergraduate university students. At the United States service academies, at least those operated by the federal government directly, a different terminology is used, namely "fourth class", "third class", "second class" and "first class" (the order of numbering being the reverse of the number of years in attendance). In the UK first-year university students are sometimes called freshers early in the academic year; however, there are no specific names for those in other years nor for school pupils; "freshers' week" or simply "freshers" is colloquially, but increasingly commonly, used to refer to the first few weeks of the academic year, typically when students get to know the university's campus, join extra-curricular clubs and associations, and go out at night to drink and visit night clubs. Graduate and professional students in the United States are known by their year of study, such as a "second-year medical student" or a "fifth-year doctoral candidate." Law students are often referred to as "1L", "2L" or "3L" rather than "nth-year law students"; similarly, medical students are frequently referred to as "M1", "M2", "M3" or "M4".
While anyone in the US who finishes studying at any educational institution by passing relevant examinations is said to graduate and to be a graduate, in the UK only degree and above level students can graduate. Student itself has a wider meaning in AmE, meaning any person of any age studying any subject at any level (including those not doing so at an educational institution, such as a "piano student" taking private lessons in a home), whereas in BrE it tends to be used for people studying at a post-secondary educational institution and the term pupil is more widely used for a young person at primary or secondary school, though "student" is increasingly used for secondary school pupils in the UK, particularly in the "sixth form" (years 12 and 13).
The names of individual institutions can be confusing. There are several high schools with the word "university" in their names in the United States that are not affiliated with any post-secondary institutions and cannot grant degrees, and there is one public high school, Central High School of Philadelphia, that does grant bachelor's degrees to the top 10% of graduating seniors. British secondary schools occasionally have the word "college" in their names.
When it comes to the admissions process, applicants are usually asked to solicit letters of reference or reference forms from referees in BrE. In AmE, these are called letters of recommendation or recommendation forms. Consequently, the writers of these letters are known as referees and recommenders, respectively by country. In AmE, the word referee is nearly always understood to refer to an umpire of a sporting match.
In the context of education, for AmE, the word staff mainly refers to school personnel who are neither administrators nor have teaching loads or academic responsibilities; personnel who have academic responsibilities are referred to as members of their institution's faculty. In BrE, the word staff refers to both academic and non-academic school personnel. As mentioned previously, the term faculty in BrE refers more to a collection of related academic departments.
Government and politics
In the UK, political candidates stand for election, while in the US, they run for office. There is virtually no crossover between BrE and AmE in the use of these terms. Additionally, the document which contains a party's positions/principles is referred to as a party platform in AmE, whereas it is commonly known as a party manifesto in BrE. (In AmE, using the term manifesto may connote that the party is an extremist or radical association). The term general election is used slightly differently in British and American English. In BrE, it refers exclusively to a nationwide parliamentary election and is differentiated from local elections (mayoral and council) and by-elections; whereas in AmE, it refers to a final election for any government position in the US, where the term is differentiated from the term primary (an election that determines a party's candidate for the position in question). Additionally, a by-election in BrE is called a special election in AmE.
In AmE, the term swing state, swing county, swing district is used to denote a jurisdiction/constituency where results are expected to be close but crucial to the overall outcome of the general election. In BrE, the term marginal constituency is more often used for the same and swing is more commonly used to refer to how much one party has gained (or lost) an advantage over another compared to the previous election.
In the UK, the term government only refers to what is commonly known in America as the executive branch or the particular administration.
A local government in the UK is generically referred to as the "council," whereas in the United States, a local government will be generically referred to as the "City" (or county, village, etc., depending on what kind of entity the government serves).
Business and finance
In financial statements, what is referred to in AmE as revenue or sales is known in BrE as turnover. In AmE, having "high turnover" in a business context would generally carry negative implications, though the precise meaning would differ by industry.
A bankrupt firm goes into administration or liquidation in BrE; in AmE it goes bankrupt, or files for Chapter 7 (liquidation) or Chapter 11 (reorganisation), both of which refer to the legal authority under which bankruptcy is commenced. An insolvent individual or partnership goes bankrupt in both BrE and AmE.
If a finance company takes possession of a mortgaged property from a debtor, it is called foreclosure in AmE and repossession in BrE. In some limited scenarios, repossession may be used in AmE, but it is much less common compared to foreclosure. One common exception in AmE is for automobiles, which are always said to be repossessed. Indeed, an agent who collects these cars for the bank is colloquially known in AmE as a repo man.
Employment and recruitment
In BrE, the term curriculum vitae (commonly abbreviated to CV) is used to describe the document prepared by applicants containing their credentials required for a job. In AmE, the term résumé is more commonly used, with CV primarily used in academic or research contexts, and is usually more comprehensive than a résumé.
Insurance
AmE distinguishes between coverage as a noun and cover as a verb; an American seeks to buy enough insurance coverage in order to adequately cover a particular risk. BrE uses the word "cover" for both the noun and verb forms.
Transport
AmE speakers refer to transportation and BrE speakers to transport. (Transportation in the UK has traditionally meant the punishment of criminals by deporting them to an overseas penal colony.) In AmE, the word transport is usually used only as a verb, seldom as a noun or adjective except in reference to certain special objects, such as a tape transport or a military transport (e.g., a troop transport, a kind of vehicle, not an act of transporting).
Road transport
Differences in terminology are especially obvious in the context of roads. The British term dual carriageway, in American parlance, would be divided highway or perhaps, simply highway. The central reservation on a motorway or dual carriageway in the UK would be the median or center divide on a freeway, expressway, highway or parkway in the US. The one-way lanes that make it possible to enter and leave such roads at an intermediate point without disrupting the flow of traffic are known as slip roads in the UK but in the US, they are typically known as ramps and both further distinguish between on-ramps or on-slips (for entering onto a highway/carriageway) and off-ramps or exit-slips (for leaving a highway/carriageway). When American engineers speak of slip roads, they are referring to a street that runs alongside the main road (separated by a berm) to allow off-the-highway access to the premises that are there; however, the term frontage road is more commonly used, as this term is the equivalent of service road in the UK. However, it is not uncommon for an American to use service road as well instead of frontage road.
In the UK, the term outside lane refers to the higher-speed overtaking lane (passing lane in the US) closest to the middle of the road, while inside lane refers to the lane closer to the edge of the road. In the US, outside lane is used only in the context of a turn, in which case it depends in which direction the road is turning (i.e., if the road bends right, the left lane is the "outside lane", but if the road bends left, it is the right lane). Both also refer to slow and fast lanes (even though all actual traffic speeds may be at or around the legal speed limit).
In the UK drink driving refers to driving after having consumed alcoholic beverages, while in the US, the term is drunk driving. The legal term in the US is driving while intoxicated (DWI) or driving under the influence (of alcohol) (DUI). The equivalent legal phrase in the UK is drunk in charge of a motor vehicle (DIC) or more commonly driving with excess alcohol.
In the UK, a hire car is the US equivalent of a rental car. The term "hire car" can be especially misleading for those in the US, where the term "hire" is generally only applied to the employment of people and the term "rent" is applied to the temporary custody of goods. To an American, "hire car" would imply that the car has been brought into the employment of a company as if it were a person, which would sound nonsensical.
In the UK, a saloon is a vehicle that is equivalent to the American sedan. This is particularly confusing to Americans, because in the US the term saloon is used in only one context: describing an old bar (UK pub) in the American West (a Western saloon). Coupé is used by both to refer to a two-door car, but is usually pronounced with two syllables in the UK (coo-pay) and one syllable in the US (coop).
In the UK, van may refer to a small lorry (UK), whereas in the US, van is only understood to be a very small, boxy truck (US) (such as a moving van) or a long passenger automobile with several rows of seats (such as a minivan). A large, long vehicle used for cargo transport would nearly always be called a truck in the US, though alternate terms such as eighteen-wheeler may be occasionally heard (regardless of the actual number of tires (UK tyres) on the truck).
In the UK, a silencer is the equivalent of the US muffler. In the US, the word silencer has only one meaning: an attachment on the barrel of a gun designed to decrease the volume of the gunshot to either ear-safe levels or at least lower levels depending on the caliber, although such devices are popularly believed to completely hide the sound of the gunshot.
Specific auto parts and transport terms have different names in the two dialects; for example, BrE bonnet, boot, windscreen and number plate correspond to AmE hood, trunk, windshield and license plate.
Rail transport
There are also differences in terminology in the context of rail transport. The best known is railway in the UK and railroad in North America, but there are several others. A railway station in the UK is a railroad station in the US, while train station is used in both; trains have drivers (often called engine drivers) in the UK, while in America trains are driven by engineers; trains have guards in the UK and conductors in the US, though the latter is also common in the UK; a place where two tracks meet is called a set of points in the UK and a switch in the US; and a place where a road crosses a railway line at ground level is called a level crossing in the UK and a grade crossing or railroad crossing in America. In the UK, the term sleeper is used for the devices that bear the weight of the rails and are known as ties or crossties in the United States. In a rail context, sleeper (more often, sleeper car) would be understood in the US as a rail car with sleeping quarters for its passengers. The British term platform in the sense "The train is at Platform 1" would be known in the US by the term track, and used in the phrase "The train is on Track 1". The American term for the British return journey is round trip. The British term brake van or guard's van is a caboose in the US. The American English phrase "All aboard" when boarding a train is rarely used in the UK, and when the train reaches its final stop, in the UK the phrase used by rail personnel is "All change" while in the US it is "All out", though such announcements are uncommon in both regions.
For sub-surface rail networks, while underground is commonly used in the UK, only the London Underground actually carries this name: the UK's only other such system, the smaller Glasgow Subway, was in fact the first to be called "subway". Nevertheless, both subway and metro are now more common in the US, varying by city: in Washington D.C., for example, metro is used, while in New York City subway is preferred. Another variation is the T in Boston.
Television
Traditionally, a show on British television would have referred to a light-entertainment programme (AmE program) with one or more performers and a participative audience, whereas in American television the term is used for any type of program. British English traditionally referred to other types of programme by their type, such as drama, serial etc., but the term show has now taken on the general American meaning. In American television the episodes of a program first broadcast in a particular year constitute a season, while the entire run of the program—which may span several seasons—is called a series. In British television, on the other hand, the word series may apply to the episodes of a programme in one particular year, for example, "the 1998 series of Grange Hill", as well as to the entire run. However, the entire run may occasionally be referred to as a "show".
The term telecast, meaning television broadcast and uncommon even in the US, is not used in British English. A television program(me) would be broadcast, aired or shown in both the UK and US.
Telecommunications
A long-distance call is a "trunk call" in British English, but is a "toll call" in American English, though neither term is well known among younger people. The distinction is a result of historical differences in the way local service was billed; the Bell System traditionally flat-rated local calls in all but a few markets, subsidising local service by charging higher rates, or tolls, for intercity calls, allowing local calls to appear to be free. British Telecom (and the British 'Post Office Telecommunications' before it) charged for all calls, local and long distance, so labelling one class of call as "toll" would have been meaningless.
Similarly, a toll-free number in America is a freephone number in the UK. The term "freefone" is a BT trademark.
Rivers
In British English, the name of a river is usually placed after the word (the River Thames), though there are a small number of exceptions, such as Wick River. This matches the naming of lakes (e.g. Lake Superior, Loch Ness) and mountains (e.g. Mont Blanc, Mount St. Helens), where the generic term also comes first. In American English, the name is placed before the word (the Hudson River).
Grammar
Subject-verb agreement
In American English (AmE), collective nouns are almost always singular in construction: the committee was unable to agree. However, when a speaker wishes to emphasize that the individuals are acting separately, a plural pronoun may be employed with a singular or plural verb: the team takes their seats, rather than the team takes its seats. Such a sentence would most likely be recast as the team members take their seats. Despite exceptions such as usage in The New York Times, the names of sports teams are usually treated as plurals even if the form of the name is singular.
In British English (BrE), collective nouns can take either singular (formal agreement) or plural (notional agreement) verb forms, according to whether the emphasis is on the body as a whole or on the individual members respectively; compare a committee was appointed with the committee were unable to agree. The term the Government always takes a plural verb in British civil service convention, perhaps to emphasize the principle of cabinet collective responsibility. Compare also the following lines of Elvis Costello's song "Oliver's Army": Oliver's Army is here to stay / Oliver's Army are on their way . Some of these nouns, for example staff, actually combine with plural verbs most of the time.
The difference occurs for all nouns of multitude, both general terms such as team and company and proper nouns (for example where a place name is used to refer to a sports team). For instance, where England refers to the national football team, BrE typically has England are winning, while AmE would have England is winning.
Proper nouns that are plural in form take a plural verb in both AmE and BrE; for example, The Beatles are a well-known band; The Diamondbacks are the champions, with one major exception: in American English, the United States is almost universally used with a singular verb. Although the construction the United States are was more common early in the history of the country, as the singular federal government exercised more authority and a singular national identity developed (especially following the American Civil War), it became standard to treat the United States as a singular noun.
Style
Use of that and which in restrictive and non-restrictive relative clauses
Generally, a non-restrictive relative clause (also called non-defining or supplementary) is one containing information that is supplementary, i.e. does not change the meaning of the rest of the sentence, while a restrictive relative clause (also called defining or integrated) contains information essential to the meaning of the sentence, effectively limiting the modified noun phrase to a subset that is defined by the relative clause.
An example of a restrictive clause is "The dog that bit the man was brown."
An example of a non-restrictive clause is "The dog, which bit the man, was brown."
In the former, "that bit the man" identifies which dog the statement is about.
In the latter, "which bit the man" provides supplementary information about a known dog.
A non-restrictive relative clause is typically set off by commas, whereas a restrictive relative clause is not, but this is not a rule that is universally observed. In speech, this is also reflected in the intonation.
Writers commonly use which to introduce a non-restrictive clause, and that to introduce a restrictive clause. That is rarely used to introduce a non-restrictive relative clause in prose. Which and that are both commonly used to introduce a restrictive clause; a study in 1977 reported that about 75% of occurrences of which were in restrictive clauses.
H. W. Fowler, in A Dictionary of Modern English Usage of 1926, followed others in suggesting that it would be preferable to use which as the non-restrictive (what he calls "non-defining") pronoun and that as the restrictive (what he calls defining) pronoun, but he also stated that this rule was observed neither by most writers nor by the best writers. He implied that his suggested usage was more common in American English. Fowler notes that his recommended usage presents problems, in particular that that must be the first word of the clause, which means, for instance, that which cannot be replaced by that when it immediately follows a preposition (e.g. "the basic unit from which matter is constructed") – though this would not prevent a stranded preposition (e.g. "the basic unit that matter is constructed from").
Style guides by American prescriptivists, such as Bryan Garner, typically insist, for stylistic reasons, that that be used for restrictive relative clauses and which be used for non-restrictive clauses, referring to the use of which in restrictive clauses as a "mistake". According to the 2015 edition of Fowler's Dictionary of Modern English Usage, in AmE which is "not generally used in restrictive clauses, and that fact is then interpreted as the absolute rule that only that may introduce a restrictive clause", whereas in BrE "either that or which may be used in restrictive clauses", but many British people "believe that that is obligatory".
Subjunctive
The subjunctive mood is more common in colloquial American English than in colloquial British English.
Writing
Spelling
Before the early 18th century there was no standard for English spelling. Different standards became noticeable after the publishing of influential dictionaries. For the most part current BrE spellings follow those of Samuel Johnson's Dictionary of the English Language (1755), while AmE spellings follow those of Noah Webster's An American Dictionary of the English Language (1828). In the United Kingdom, the influences of those who preferred the French spellings of certain words proved decisive. In many cases AmE spelling deviated from mainstream British spelling; on the other hand it has also often retained older forms. Many of the now characteristic AmE spellings were made popular, although often not created, by Noah Webster. Webster chose already-existing alternative spellings "on such grounds as simplicity, analogy or etymology". Webster did attempt to introduce some reformed spellings, as did the Simplified Spelling Board in the early 20th century, but most were not adopted. Later spelling changes in the UK had little effect on present-day US spelling, and vice versa.
Punctuation
Full stops and periods in abbreviations
There have been some trends of transatlantic difference in use of periods in some abbreviations. These are discussed at Abbreviation § Periods (full stops) and spaces. Unit symbols such as kg and Hz are never punctuated.
Parentheses/brackets
In British English, "( )" marks are often referred to as brackets, whereas "[ ]" are called square brackets and "{ }" are called curly brackets. In formal British English and in American English "( )" marks are parentheses (singular: parenthesis), "[ ]" are called brackets or square brackets, and "{ }" can be called either curly brackets or braces. Despite the different names, these marks are used in the same way in both varieties.
Quoting
British and American English differ in the preferred quotation mark style, including the placement of commas and periods. In American English, " and ' are called quotation marks, whereas in British English, " and ' are referred to as either inverted commas or speech marks. Additionally, in American English direct speech typically uses the double quote mark ( " ), whereas in British English it is common to use the inverted comma ( ' ).
Commas in headlines
American newspapers commonly use a comma as a shorthand for "and" in headlines. For example, The Washington Post had the headline "A TRUE CONSERVATIVE: For McCain, Bush Has Both Praise, Advice."
Numerical expressions
There are many differences in the writing and speaking of English numerals, most of which are matters of style, with the notable exception of different definitions for billion.
The two countries have different conventions for floor numbering. The UK uses a mixture of the metric system and Imperial units, while in the US, United States customary units are dominant in everyday life, with a few fields using the metric system.
Monetary amounts
Monetary amounts in the range of one to two major currency units are often spoken differently. In AmE one may say a dollar fifty or a pound eighty, whereas in BrE these amounts would be expressed one dollar fifty and one pound eighty. For amounts over a dollar an American will generally either drop denominations or give both dollars and cents, as in two-twenty or two dollars and twenty cents for $2.20. An American would not say two dollars twenty. On the other hand, in BrE, two-twenty or two pounds twenty would be most common.
It is more common to hear a British-English speaker say one thousand two hundred dollars than a thousand and two hundred dollars, although the latter construct is common in AmE. In British English, the "and" comes after the hundreds (one thousand, two hundred and thirty dollars). The term twelve hundred dollars, popular in AmE, is frequently used in BrE but only for exact multiples of 100 up to 1,900. Speakers of BrE very rarely hear amounts over 1,900 expressed in hundreds, for example, twenty-three hundred. In AmE it would not be unusual to refer to a high, uneven figure such as 2,307 as twenty-three hundred and seven.
In BrE, particularly in television or radio advertisements, integers can be pronounced individually in the expression of amounts. For example, on sale for £399 might be expressed on sale for three nine nine, though the full three hundred and ninety-nine pounds is at least as common. An American advertiser would almost always say on sale for three ninety-nine, with context distinguishing $399 from $3.99. In British English the latter pronunciation implies a value in pounds and pence, so three ninety-nine would be understood as £3.99.
In spoken BrE the word pound is sometimes colloquially used for the plural as well. For example, three pound forty and twenty pound a week are both heard in British English. Some other currencies do not change in the plural; yen and rand being examples. This is in addition to normal adjectival use, as in a twenty-pound-a-week pay-rise (US raise). The euro most often takes a regular plural -s in practice despite the EU dictum that it should remain invariable in formal contexts; the invariable usage is more common in Ireland, where it is the official currency.
In BrE the use of p instead of pence is common in spoken usage. Each of the following has equal legitimacy: 3 pounds 12 p; 3 pounds and 12 p; 3 pounds 12 pence; 3 pounds and 12 pence; as well as just 8 p or 8 pence. In everyday usage the amount is simply read as figures (£3.50 = three pounds fifty) as in AmE.
AmE uses words such as nickel, dime, and quarter for small coins. In BrE the usual usage is a 10-pence piece or a 10p piece or simply a 10p, for any coin below £1, pound coin and two-pound coin. BrE did have specific words for a number of coins before decimalisation. Formal coin names such as half crown (2/6) and florin (2/-), as well as slang or familiar names such as bob (1/-) and tanner (6d) for pre-decimalisation coins are still familiar to older BrE speakers but they are not used for modern coins. In older terms like two-bob bit (2/-) and thrupenny bit (3d), the word bit had common usage before decimalisation similar to that of piece today.
In order to make explicit the amount in words on a check (BrE cheque), Americans write three and 24/100 (using this solidus construction or with a horizontal division line): they do not need to write the word dollars as it is usually already printed on the check. On a cheque UK residents would write three pounds and 24 pence, three pounds ‒ 24, or three pounds ‒ 24p, since the currency unit is not preprinted. To make unauthorised amendment difficult, it is useful to have an expression terminator even when a whole number of dollars/pounds is in use: thus, Americans would write three and 00/100 or three and no/100 on a three-dollar check (so that it cannot easily be changed to, for example, three million), and UK residents would write three pounds only.
Dates
Dates are usually written differently in the short (numerical) form. Christmas Day 2000, for example, is 25/12/00 or 25.12.00 in the UK and 12/25/00 in the US, although the formats 25/12/2000, 25.12.2000, and 12/25/2000 are now more common than they were before Y2K. Occasionally other formats are encountered, such as the ISO 8601 2000-12-25, popular among programmers, scientists and others seeking to avoid ambiguity, and to make alphanumerical order coincide with chronological order. The difference in short-form date order can lead to misunderstanding, especially when using software or equipment that uses the foreign format. For example, 06/04/05 could mean either June 4, 2005 (if read as US format), 6 April 2005 (if seen as UK format) or even 2006 April 5 if taken to be an older ISO 8601-style format where 2-digit years were allowed.
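A brief Python sketch (the sample strings and format codes are illustrative choices, not part of the source; datetime.strptime is the standard-library parser) shows how the same short-form date is read under the US, UK and ISO 8601 conventions described above:

from datetime import datetime

ambiguous = "06/04/05"
as_us = datetime.strptime(ambiguous, "%m/%d/%y")       # US order month/day/year -> June 4, 2005
as_uk = datetime.strptime(ambiguous, "%d/%m/%y")       # UK order day/month/year -> 6 April 2005
as_iso = datetime.strptime("2000-12-25", "%Y-%m-%d")   # ISO 8601 is unambiguous (Christmas Day 2000)
print(as_us.date(), as_uk.date(), as_iso.date())       # 2005-06-04 2005-04-06 2000-12-25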
When using the name of the month rather than the number to write a date in the UK, the recent standard style is for the day to precede the month, e.g. 21 April. Month preceding date is almost invariably the style in the US, and was common in the UK until the late twentieth century. British usage normally changes the day from a cardinal to an ordinal, i.e. 21st instead of 21. In speech, "of" and "the" are used in the UK, as in "the 21st of April". In written language, the words "the" and "of" may be, and usually are, dropped, i.e. 21 April. The US would say this as "April 21st", and this form is still common in the UK. One of the few exceptions in American English is saying "the Fourth of July" as shorthand for United States Independence Day. In the US military the British forms are used, but the day is read cardinally, while among some speakers of New England and Southern American English varieties, and among people who come from those regions but live elsewhere, those forms are common, even in formal contexts.
Phrases such as the following are common in the UK but are generally unknown in the US: "A week today", "a week tomorrow", "a week (on) Tuesday" and "Tuesday week"; these all refer to a day which is more than a week into the future. ("A fortnight Friday" and "Friday fortnight" refer to a day two weeks after the coming Friday). "A week on Tuesday" and "a fortnight on Friday" could refer either to a day in the past ("it's a week on Tuesday, you need to get another one") or in the future ("see you a week on Tuesday"), depending on context. In the US the standard construction is "a week from today", "a week from tomorrow", etc. BrE speakers may also say "Thursday last" or "Thursday gone" where AmE would prefer "last Thursday". "I'll see you (on) Thursday coming" or "let's meet this coming Thursday" in BrE refer to a meeting later this week, while "not until Thursday next" would refer to next week. In BrE there is also common use of "Thursday after next" or "week after next", meaning two weeks in the future, and "Thursday before last" or "week before last", meaning two weeks in the past; these terms are not used for times further than two weeks away or gone, or when anchoring to today, tomorrow or yesterday, in which case a BrE speaker would say, for example, "five weeks on Tuesday" or "two weeks yesterday".
Time
The 24-hour clock (18:00, 18.00 or 1800) is considered normal in the UK and Europe in many applications including air, rail and bus timetables; it is largely unused in the US outside military, police, aviation and medical applications. As a result, many Americans refer to the 24-hour clock as military time. Some British English style guides recommend the full stop (.) when telling time, whereas American English uses colons (:) (i.e., 11:15 PM/pm/p.m. or 23:15 for AmE and 11.15 pm or 23.15 for BrE). Usually in military (and sometimes in police, aviation and medical) applications on both sides of the Atlantic, 0800 and 1800 are read as (oh/zero) eight hundred and eighteen hundred hours respectively. Even in the UK, hundred follows twenty, twenty-one, twenty-two and twenty-three when reading 2000, 2100, 2200 and 2300 in those applications.
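The separator conventions described above can be captured in a few lines. The function names below are illustrative rather than any established API; this is a minimal sketch assuming only the standard library:

```python
from datetime import time

def military(t: time) -> str:
    """24-hour form, as in timetables or military usage (e.g. 2315)."""
    return f"{t.hour:02d}{t.minute:02d}"

def us_style(t: time) -> str:
    """AmE convention: colon separator with AM/PM."""
    hour12 = t.hour % 12 or 12
    return f"{hour12}:{t.minute:02d} {'AM' if t.hour < 12 else 'PM'}"

def brit_style(t: time) -> str:
    """BrE style-guide convention: full stop separator with lower-case am/pm."""
    hour12 = t.hour % 12 or 12
    return f"{hour12}.{t.minute:02d} {'am' if t.hour < 12 else 'pm'}"

evening = time(23, 15)
print(military(evening), us_style(evening), brit_style(evening), sep=" | ")
# 2315 | 11:15 PM | 11.15 pm
```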
Fifteen minutes after the hour is called quarter past in British usage and a quarter after or, less commonly, a quarter past in American usage. Fifteen minutes before the hour is usually called quarter to in British usage and a quarter of, a quarter to or a quarter 'til in American usage; the form a quarter to is associated with parts of the Northern United States, while a quarter 'til or till is found chiefly in the Appalachian region. Thirty minutes after the hour is commonly called half past in both BrE and AmE; half after used to be more common in the US. In informal British speech, the preposition is sometimes omitted, so that 5:30 may be referred to as half five; this construction is entirely foreign to US speakers, who would possibly interpret half five as 4:30 (halfway to 5:00) rather than 5:30. The AmE formations top of the hour and bottom of the hour are not used in BrE. Forms such as eleven forty are common in both varieties. To be simple and direct in telling time, speakers avoid terms relating to fifteen or thirty minutes before or after the hour and instead say the time exactly, for example nine fifteen or ten forty-five.
Sports percentages
In sports statistics, certain percentages such as those for winning or win–loss records and saves in field or ice hockey and association football are almost always expressed as a decimal proportion to three places in AmE, and are usually read aloud as if they were whole numbers, e.g. (0).500 or five hundred, hence the phrase "games/matches over five hundred". In BrE the same figures are expressed as true percentages, obtained by multiplying the decimal by 100%, that is, 50% or "fifty per cent", giving "games/matches over 50%" or "...50 per cent". However, "games/matches over 50%" or "...50 percent" is also found in AmE, albeit sporadically, e.g., hitting percentages in volleyball.
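The two conventions are the same arithmetic presented differently, as a quick sketch shows (the function names and the win/game figures here are invented for illustration):

```python
def american_record(wins: int, games: int) -> str:
    """AmE convention: a three-place decimal proportion, e.g. '.500'."""
    return f"{wins / games:.3f}".lstrip("0")

def british_record(wins: int, games: int) -> str:
    """BrE convention: the same figure as a true percentage, e.g. '50.0%'."""
    return f"{wins / games * 100:.1f}%"

for wins, games in [(41, 82), (45, 82)]:
    print(f"{wins} wins in {games} games: "
          f"{american_record(wins, games)} / {british_record(wins, games)}")
# 41 wins in 82 games: .500 / 50.0%
# 45 wins in 82 games: .549 / 54.9%
```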
The American practice of expressing so-called percentages in sports statistics as decimals originated with baseball's batting averages, developed by English-born statistician and historian Henry Chadwick.
See also
American and British English grammatical differences
American and British English pronunciation differences
American and British English spelling differences
British and American keyboards
List of dialects of the English language
Lists of words having different meanings in American and British English
Explanatory notes
Citations
General and cited sources
Algeo, John (2006). British or American English?. Cambridge: Cambridge University Press.
Hargraves, Orin (2003). Mighty Fine Words and Smashing Expressions. Oxford: Oxford University Press.
McArthur, Tom (2002). The Oxford Guide to World English. Oxford: Oxford University Press.
Murphy, Lynne (2018). The Prodigal Tongue: The Love-Hate Relationship Between British and American English. London: Oneworld Publications.
Peters, Pam (2004). The Cambridge Guide to English Usage. Cambridge: Cambridge University Press.
Trudgill, Peter and Jean Hannah (2002). International English: A Guide to the Varieties of Standard English, 4th ed. London: Arnold.
Further reading
External links
Word substitution list, by the Ubuntu English (United Kingdom) Translators team
Linguistics Issues List of American, Canadian and British spelling differences
Map of US English dialects
The Septic's Companion: A British Slang Dictionary
American English, is it really different?
British English-American English Vocabulary Quiz
Language comparison between countries
Comparison of forms of English
Internationalization and localization
United Kingdom–United States relations | Comparison of American and British English | [
"Technology"
] | 14,315 | [
"Natural language and computing",
"Internationalization and localization"
] |
2,018 | https://en.wikipedia.org/wiki/A.%20J.%20Ayer | Sir Alfred Jules "Freddie" Ayer ( ; 29 October 1910 – 27 June 1989) was an English philosopher known for his promotion of logical positivism, particularly in his books Language, Truth, and Logic (1936) and The Problem of Knowledge (1956).
Ayer was educated at Eton College and the University of Oxford, after which he studied the philosophy of logical positivism at the University of Vienna. From 1933 to 1940 he lectured on philosophy at Christ Church, Oxford.
During the Second World War Ayer was a Special Operations Executive and MI6 agent.
Ayer was Grote Professor of the Philosophy of Mind and Logic at University College London from 1946 until 1959, after which he returned to Oxford to become Wykeham Professor of Logic at New College. He was president of the Aristotelian Society from 1951 to 1952 and knighted in 1970. He was known for his advocacy of humanism, and was the second president of the British Humanist Association (now known as Humanists UK).
Ayer was president of the Homosexual Law Reform Society for a time; he remarked, "as a notorious heterosexual I could never be accused of feathering my own nest."
Life
Ayer was born in St John's Wood, in north west London, to Jules Louis Cyprien Ayer and Reine (née Citroen), wealthy parents from continental Europe. His mother was from the Dutch-Jewish family that founded the Citroën car company in France; his father was a Swiss Calvinist financier who worked for the Rothschild family, including for their bank and as secretary to Alfred Rothschild.
Ayer was educated at Ascham St Vincent's School, a former boarding preparatory school for boys in the seaside town of Eastbourne in Sussex, where he started boarding at the relatively early age of seven for reasons to do with the First World War, and at Eton College, where he was a King's Scholar. At Eton Ayer first became known for his characteristic bravado and precocity. Though primarily interested in his intellectual pursuits, he was very keen on sports, particularly rugby, and reputedly played the Eton Wall Game very well. In the final examinations at Eton, Ayer came second in his year, and first in classics. In his final year, as a member of Eton's senior council, he unsuccessfully campaigned for the abolition of corporal punishment at the school. He won a classics scholarship to Christ Church, Oxford. He graduated with a BA with first-class honours.
After graduating from Oxford, Ayer spent a year in Vienna, returned to England and published his first book, Language, Truth and Logic, in 1936. This first exposition in English of logical positivism as newly developed by the Vienna Circle, made Ayer at age 26 the enfant terrible of British philosophy. As a newly famous intellectual, he played a prominent role in the Oxford by-election campaign of 1938. Ayer campaigned first for the Labour candidate Patrick Gordon Walker, and then for the joint Labour-Liberal "Independent Progressive" candidate Sandie Lindsay, who ran on an anti-appeasement platform against the Conservative candidate, Quintin Hogg, who ran as the appeasement candidate. The by-election, held on 27 October 1938, was quite close, with Hogg winning narrowly.
In the Second World War, Ayer served as an officer in the Welsh Guards, chiefly in intelligence (Special Operations Executive (SOE) and MI6). He was commissioned as a second lieutenant into the Welsh Guards from the Officer Cadet Training Unit on 21 September 1940.
After the war, Ayer briefly returned to the University of Oxford where he became a fellow and Dean of Wadham College. He then taught philosophy at University College London from 1946 until 1959, during which time he started to appear on radio and television. He was an extrovert and social mixer who liked dancing and attending clubs in London and New York. He was also obsessed with sport: he had played rugby for Eton, and was a noted cricketer and a keen supporter of Tottenham Hotspur football team, where he was for many years a season ticket holder. For an academic, Ayer was an unusually well-connected figure in his time, with close links to 'high society' and the establishment. Presiding over Oxford high-tables, he is often described as charming, but could also be intimidating.
Ayer was married four times to three women. His first marriage was from 1932 to 1941, to (Grace Isabel) Renée, with whom he had a son (allegedly the son of Ayer's friend and colleague Stuart Hampshire) and a daughter. Renée subsequently married Hampshire. In 1960, Ayer married Alberta Constance (Dee) Wells, with whom he had one son. That marriage was dissolved in 1983, and the same year, Ayer married Vanessa Salmon, the former wife of politician Nigel Lawson. She died in 1985, and in 1989 Ayer remarried Wells, who survived him. He also had a daughter with Hollywood columnist Sheilah Graham Westbrook.
In 1950, Ayer attended the founding meeting of the Congress for Cultural Freedom in West Berlin, though he later said he went only because of the offer of a "free trip". He gave a speech on why John Stuart Mill's conceptions of liberty and freedom were still valid in the 20th century. Together with the historian Hugh Trevor-Roper, Ayer fought against Arthur Koestler and Franz Borkenau, arguing that they were far too dogmatic and extreme in their anti-communism, in fact proposing illiberal measures in the defence of liberty. Adding to the tension was the location of the congress in West Berlin, together with the fact that the Korean War began on 25 June 1950, the fourth day of the congress, giving a feeling that the world was on the brink of war.
From 1959 to his retirement in 1978, Ayer held the Wykeham Chair, Professor of Logic at Oxford. He was knighted in 1970. After his retirement, Ayer taught or lectured several times in the United States, including as a visiting professor at Bard College in 1987. At a party that same year held by fashion designer Fernando Sanchez, Ayer confronted Mike Tyson, who was forcing himself upon the then little-known model Naomi Campbell. When Ayer demanded that Tyson stop, Tyson reportedly asked, "Do you know who the fuck I am? I'm the heavyweight champion of the world", to which Ayer replied, "And I am the former Wykeham Professor of Logic. We are both pre-eminent in our field. I suggest that we talk about this like rational men". Ayer and Tyson then began to talk, allowing Campbell to slip out. Ayer was also involved in politics, including anti-Vietnam War activism, supporting the Labour Party (and later the Social Democratic Party), chairing the Campaign Against Racial Discrimination in Sport, and serving as president of the Homosexual Law Reform Society.
In 1988, a year before his death, Ayer wrote an article titled "What I saw when I was dead", describing an unusual near-death experience after his heart stopped for four minutes as he choked on smoked salmon. Of the experience, he first said that it "slightly weakened my conviction that my genuine death ... will be the end of me, though I continue to hope that it will be." A few weeks later, he revised this, saying, "what I should have said is that my experiences have weakened, not my belief that there is no life after death, but my inflexible attitude towards that belief".
Ayer died on 27 June 1989. From 1980 to 1989 he lived at 51 York Street, Marylebone, where a memorial plaque was unveiled on 19 November 1995.
Philosophical ideas
In Language, Truth and Logic (1936), Ayer presents the verification principle as the only valid basis for philosophy. Unless logical or empirical verification is possible, statements like "God exists" or "charity is good" are not true or untrue but meaningless, and may thus be excluded or ignored. Religious language in particular is unverifiable and as such literally nonsense. He also criticises C. A. Mace's opinion that metaphysics is a form of intellectual poetry. The stance that a belief in God denotes no verifiable hypothesis is sometimes referred to as igtheism (for example, by Paul Kurtz). In later years, Ayer reiterated that he did not believe in God and began to call himself an atheist. He followed in the footsteps of Bertrand Russell by debating religion with the Jesuit scholar Frederick Copleston.
Ayer's version of emotivism divides "the ordinary system of ethics" into four classes:
"Propositions that express definitions of ethical terms, or judgements about the legitimacy or possibility of certain definitions"
"Propositions describing the phenomena of moral experience, and their causes"
"Exhortations to moral virtue"
"Actual ethical judgements"
He focuses on propositions of the first class (moral judgements), saying that those of the second class belong to science, those of the third are mere commands, and those of the fourth (which are considered normative ethics as opposed to meta-ethics) are too concrete for ethical philosophy.
Ayer argues that moral judgements cannot be translated into non-ethical, empirical terms and thus cannot be verified; in this he agrees with ethical intuitionists. But he differs from intuitionists by discarding appeals to intuition of non-empirical moral truths as "worthless" since the intuition of one person often contradicts that of another. Instead, Ayer concludes that ethical concepts are "mere pseudo-concepts":
Between 1945 and 1947, together with Russell and George Orwell, Ayer contributed a series of articles to Polemic, a short-lived British Magazine of Philosophy, Psychology, and Aesthetics edited by the ex-Communist Humphrey Slater.
Ayer was closely associated with the British humanist movement. He was an Honorary Associate of the Rationalist Press Association from 1947 until his death. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1963. In 1965, he became the first president of the Agnostics' Adoption Society and in the same year succeeded Julian Huxley as president of the British Humanist Association, a post he held until 1970. In 1968 he edited The Humanist Outlook, a collection of essays on the meaning of humanism. He was one of the signers of the Humanist Manifesto.
Works
Ayer is best known for popularising the verification principle, in particular through his presentation of it in Language, Truth, and Logic. The principle was at the time at the heart of the debates of the so-called Vienna Circle, which Ayer had visited as a young guest. Others, including the circle's leading light, Moritz Schlick, were already writing papers on the issue. Ayer's formulation was that a sentence can be meaningful only if it has verifiable empirical import; otherwise, it is either "analytical" if tautologous or "metaphysical" (i.e. meaningless, or "literally senseless"). He started to work on the book at the age of 23 and it was published when he was 26. Ayer's philosophical ideas were deeply influenced by those of the Vienna Circle and David Hume. His clear, vibrant and polemical exposition of them makes Language, Truth and Logic essential reading on the tenets of logical empiricism; the book is regarded as a classic of 20th-century analytic philosophy and is widely read in philosophy courses around the world. In it, Ayer also proposes that the distinction between a conscious man and an unconscious machine resolves itself into a distinction between "different types of perceptible behaviour", an argument that anticipates the Turing test published in 1950 to test a machine's capability to demonstrate intelligence.
Ayer wrote two books on the philosopher Bertrand Russell, Russell and Moore: The Analytic Heritage (1971) and Russell (1972). He also wrote an introductory book on the philosophy of David Hume and a short biography of Voltaire.
Ayer was a strong critic of the German philosopher Martin Heidegger. As a logical positivist, Ayer was in conflict with Heidegger's vast, overarching theories of existence. Ayer considered them completely unverifiable through empirical demonstration and logical analysis, and this sort of philosophy an unfortunate strain in modern thought. He considered Heidegger the worst example of such philosophy, which Ayer believed entirely useless. In Philosophy in the Twentieth Century, Ayer accuses Heidegger of "surprising ignorance" or "unscrupulous distortion" and "what can fairly be described as charlatanism."
In 1972–73, Ayer gave the Gifford Lectures at the University of St Andrews, later published as The Central Questions of Philosophy. In the book's preface, he defends his selection to hold the lectureship on the basis that Lord Gifford wished to promote "natural theology, in the widest sense of that term", and that non-believers are allowed to give the lectures if they are "able reverent men, true thinkers, sincere lovers of and earnest inquirers after truth". He still believed in the viewpoint he shared with the logical positivists: that large parts of what was traditionally called philosophyincluding metaphysics, theology and aestheticswere not matters that could be judged true or false, and that it was thus meaningless to discuss them.
In The Concept of a Person and Other Essays (1963), Ayer heavily criticised Wittgenstein's private language argument.
Ayer's sense-data theory in Foundations of Empirical Knowledge was famously criticised by fellow Oxonian J. L. Austin in Sense and Sensibilia, a landmark 1950s work of ordinary language philosophy. Ayer responded in the essay "Has Austin Refuted the Sense-datum Theory?", which can be found in his Metaphysics and Common Sense (1969).
Awards
Ayer was awarded a knighthood as Knight Bachelor in the London Gazette on 1 January 1970.
Collections
Ayer's biographer, Ben Rogers, deposited 7 boxes of research material accumulated through the writing process at University College London in 2007. The material was donated in collaboration with Ayer's family.
Selected publications
1936, Language, Truth, and Logic, London: Gollancz; 2nd ed., with new introduction (1946)
1936, "Causation and free will", The Aryan Path.
1940, The Foundations of Empirical Knowledge, London: Macmillan.
1954, Philosophical Essays, London: Macmillan. (Essays on freedom, phenomenalism, basic propositions, utilitarianism, other minds, the past, ontology.)
1957, "The conception of probability as a logical relation", in S. Korner, ed., Observation and Interpretation in the Philosophy of Physics, New York: Dover Publications.
1956, The Problem of Knowledge, London: Macmillan.
1957, "Logical Positivism - A Debate" (with F. C. Copleston) in: Edwards, Paul, Pap, Arthur (eds.), A Modern Introduction to Philosophy; readings from classical and contemporary sources
1963, The Concept of a Person and Other Essays, London: Macmillan. (Essays on truth, privacy and private languages, laws of nature, the concept of a person, probability.)
1967, "Has Austin Refuted the Sense-Datum Theory?" Synthese vol. XVIII, pp. 117–140. (Reprinted in Ayer 1969).
1968, The Origins of Pragmatism, London: Macmillan.
1969, Metaphysics and Common Sense, London: Macmillan. (Essays on knowledge, man as a subject for science, chance, philosophy and politics, existentialism, metaphysics, and a reply to Austin on sense-data theory [Ayer 1967].)
1971, Russell and Moore: The Analytical Heritage, London: Macmillan.
1972, Probability and Evidence, London: Macmillan.
1972, Russell, London: Fontana Modern Masters.
1973, The Central Questions of Philosophy, London: Weidenfeld.
1977, Part of My Life, London: Collins.
1979, "Replies", in G. F. Macdonald, ed., Perception and Identity: Essays Presented to A. J. Ayer, With His Replies, London: Macmillan; Ithaca, N.Y.: Cornell University Press.
1980, Hume, Oxford: Oxford University Press
1982, Philosophy in the Twentieth Century, London: Weidenfeld.
1984, Freedom and Morality and Other Essays, Oxford: Clarendon Press.
1984, More of My Life, London: Collins.
1986, Ludwig Wittgenstein, London: Penguin.
1986, Voltaire, New York: Random House.
1988, Thomas Paine, London: Secker & Warburg.
1990, The Meaning of Life and Other Essays, Weidenfeld & Nicolson.
1991, "A Defense of Empiricism" in: Griffiths, A. Phillips (ed.), A. J. Ayer: Memorial Essays (Royal Institute of Philosophy Supplements). Cambridge University Press.
1992, "Intellectual Autobiography" and Replies in: Lewis Edwin Hahn (ed.), The Philosophy of A.J. Ayer (The Library of Living Philosophers Volume XXI), Open Court Publishing Co.
For more complete publication details see "The Philosophical Works of A. J. Ayer" (1979) and "Bibliography of the writings of A.J. Ayer" (1992).
See also
A priori knowledge
List of British philosophers
References
Footnotes
Works cited
Ayer, A.J. (1989). "That undiscovered country", New Humanist, Vol. 104 (1), May, pp. 10–13.
Rogers, Ben (1999). A.J. Ayer: A Life. New York: Grove Press. . (Chapter one and a review by Hilary Spurling, The New York Times, 24 December 2000.)
Further reading
Jim Holt, "Positive Thinking" (review of Karl Sigmund, Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science, Basic Books, 449 pp.), The New York Review of Books, vol. LXIV, no. 20 (21 December 2017), pp. 74–76.
Ted Honderich, Ayer's Philosophy and its Greatness.
Anthony Quinton, Alfred Jules Ayer. Proceedings of the British Academy, 94 (1996), pp. 255–282.
Graham Macdonald, Alfred Jules Ayer, Stanford Encyclopedia of Philosophy, 7 May 2005.
External links
"Logical Positivism" (video) Men of Ideas interview with Bryan Magee (1978)
"Frege, Russell, and Modern Logic" (video) The Great Philosophers interview with Bryan Magee (1987)
Ayer's Elizabeth Rathbone Lecture on Philosophy & Politics
Ayer entry in the Stanford Encyclopedia of Philosophy
A.J. Ayer: Out of time by Alex Callinicos
Appearance on Desert Island Discs – 3 August 1984
Ayer (Rogers) Papers at University College London
1910 births
1989 deaths
20th-century atheists
20th-century English non-fiction writers
Academics of University College London
Alumni of Christ Church, Oxford
Analytic philosophers
Aristotelian philosophers
Atheism in the United Kingdom
Atheist philosophers
Bard College faculty
British Army personnel of World War II
British people of Dutch-Jewish descent
British people of Swiss descent
British critics of religions
British Special Operations Executive personnel
Empiricists
English atheists
English humanists
English logicians
English people of Dutch-Jewish descent
English people of Swiss descent
20th-century English philosophers
British epistemologists
Fellows of Christ Church, Oxford
Fellows of the American Academy of Arts and Sciences
Fellows of the British Academy
Jewish atheists
Jewish humanists
Jewish philosophers
Knights Bachelor
Linguistic turn
Logical positivism
Logicians
Ontologists
People educated at Eton College
People from St John's Wood
British philosophers of culture
British philosophers of education
Philosophers of history
British philosophers of language
British philosophers of logic
British philosophers of mind
British philosophers of religion
British philosophers of science
Philosophers of technology
Philosophy writers
English political philosophers
Presidents of the Aristotelian Society
Presidents of Humanists UK
Vienna Circle
Welsh Guards officers
Wykeham Professors of Logic
English LGBTQ rights activists | A. J. Ayer | [
"Mathematics"
] | 4,157 | [
"Mathematical logic",
"Logical positivism"
] |
2,021 | https://en.wikipedia.org/wiki/Atle%20Selberg | Atle Selberg (14 June 1917 – 6 August 2007) was a Norwegian mathematician known for his work in analytic number theory and the theory of automorphic forms, and in particular for bringing them into relation with spectral theory. He was awarded the Fields Medal in 1950 and an honorary Abel Prize in 2002.
Early years
Selberg was born in Langesund, Norway, the son of teacher Anna Kristina Selberg and mathematician Ole Michael Ludvigsen Selberg. Two of his three brothers, Sigmund and Henrik, were also mathematicians. His other brother, Arne, was a professor of engineering.
While he was still at school he was influenced by the work of Srinivasa Ramanujan and he found an exact analytical formula for the partition function as suggested by the works of Ramanujan; however, this result was first published by Hans Rademacher.
He studied at the University of Oslo and completed his PhD in 1943.
World War II
During World War II, Selberg worked in isolation due to the German occupation of Norway. After the war, his accomplishments became known, including a proof that a positive proportion of the zeros of the Riemann zeta function lie on the critical line Re(s) = 1/2.
During the war, he fought against the German invasion of Norway, and was imprisoned several times.
Post-war in Norway
After the war, he turned to sieve theory, a previously neglected topic which Selberg's work brought into prominence. In a 1947 paper he introduced the Selberg sieve, a method well adapted in particular to providing auxiliary upper bounds, and which contributed to Chen's theorem, among other important results.
In 1948 Selberg submitted two papers in Annals of Mathematics in which he proved by elementary means the theorems for primes in arithmetic progression and the density of primes. This challenged the widely held view of his time that certain theorems are only obtainable with the advanced methods of complex analysis. Both results were based on his work on the asymptotic formula

ϑ(x) log(x) + ∑_{p ≤ x} log(p) ϑ(x/p) = 2x log(x) + O(x)

where

ϑ(x) = ∑_{p ≤ x} log(p)

for primes p. He established this result by elementary means in March 1948, and by July of that year, Selberg and Paul Erdős each obtained elementary proofs of the prime number theorem, both using the asymptotic formula above as a starting point. Circumstances leading up to the proofs, as well as publication disagreements, led to a bitter dispute between the two mathematicians.
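As a purely numerical illustration of what the symmetry formula above says, the following Python sketch sieves the primes up to a modest, arbitrarily chosen bound and compares the left-hand side with the main term 2x log x; it is only a sanity check of the statement, not any part of Selberg's argument:

```python
import math
from bisect import bisect_right

def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

N = 10_000
primes = primes_up_to(N)
logs = [math.log(p) for p in primes]

# Prefix sums give the Chebyshev function theta(x) = sum of log p over primes p <= x.
prefix = [0.0]
for lp in logs:
    prefix.append(prefix[-1] + lp)

def theta(x: float) -> float:
    return prefix[bisect_right(primes, x)]

lhs = theta(N) * math.log(N) + sum(lp * theta(N / p) for p, lp in zip(primes, logs))
main_term = 2 * N * math.log(N)
print(f"x = {N}: LHS = {lhs:.0f}, 2x log x = {main_term:.0f}, difference = {lhs - main_term:.0f}")
# The difference stays of order x, consistent with the O(x) error term.
```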
For his fundamental accomplishments during the 1940s, Selberg received the 1950 Fields Medal.
Institute for Advanced Study
Selberg moved to the United States and worked as an associate professor at Syracuse University and later settled at the Institute for Advanced Study in Princeton, New Jersey in the 1950s, where he remained until his death. During the 1950s he worked on introducing spectral theory into number theory, culminating in his development of the Selberg trace formula, the most famous and influential of his results. In its simplest form, this establishes a duality between the lengths of closed geodesics on a compact Riemann surface and the eigenvalues of the Laplacian, which is analogous to the duality between the prime numbers and the zeros of the zeta function.
He generally worked alone. His only coauthor was Sarvadaman Chowla.
Selberg was awarded the 1986 Wolf Prize in Mathematics. He was also awarded an honorary Abel Prize in 2002, its founding year, before the awarding of the regular prizes began.
Selberg received many distinctions for his work, in addition to the Fields Medal, the Wolf Prize and the Gunnerus Medal. He was elected to the Norwegian Academy of Science and Letters, the Royal Danish Academy of Sciences and Letters and the American Academy of Arts and Sciences.
In 1972, he was awarded an honorary degree, doctor philos. honoris causa, at the Norwegian Institute of Technology, later part of Norwegian University of Science and Technology.
His first wife, Hedvig, died in 1995. With her, Selberg had two children: Ingrid Selberg (married to playwright Mustapha Matura) and Lars Selberg. In 2003 Atle Selberg married Betty Frances ("Mickey") Compton (born in 1929).
He died at home in Princeton, New Jersey on 6 August 2007 of heart failure. Upon his death he was survived by his widow, daughter, son, and four grandchildren.
Selected publications
Selberg's collected works were published in two volumes. The first volume contains 41 articles, and the second volume contains three additional articles, in addition to Selberg's lectures on sieves.
Description at M.I.T. Press Bookstore
Description at M.I.T. Press Bookstore
References
Further reading
Albers, Donald J. and Alexanderson, Gerald L. (2011), Fascinating Mathematical People: interviews and memoirs, "Atle Selberg", pp 254–73, Princeton University Press, .
Interview with Selberg
External links
Atle Selberg archive webpage
Obituary at Institute for Advanced Study
Obituary in The Times
Atle Selbergs private archive exists at NTNU University Library
1917 births
2007 deaths
20th-century American mathematicians
21st-century American mathematicians
Fields Medalists
Institute for Advanced Study faculty
Members of the Royal Danish Academy of Sciences and Letters
Members of the Norwegian Academy of Science and Letters
Norwegian emigrants to the United States
Norwegian mathematicians
Number theorists
People from Bamble
University of Oslo alumni
Wolf Prize in Mathematics laureates
Members of the Royal Swedish Academy of Sciences | Atle Selberg | [
"Mathematics"
] | 1,089 | [
"Number theorists",
"Number theory"
] |
2,024 | https://en.wikipedia.org/wiki/Amber%20Road | The Amber Road was an ancient trade route for the transfer of amber from coastal areas of the North Sea and the Baltic Sea to the Mediterranean Sea. Prehistoric trade routes between Northern and Southern Europe were defined by the amber trade.
As an important commodity, sometimes dubbed "the gold of the north", amber was transported from the North Sea and Baltic Sea coasts overland by way of the Vistula and Dnieper rivers to Italy, Greece, the Black Sea, Syria and Egypt over a period of thousands of years.
Antiquity
The oldest trade in amber started from Sicily. The Sicilian amber trade was directed to Greece, North Africa and Spain. Sicilian amber was also discovered in Mycenae by the archaeologist Heinrich Schliemann, and it appeared in sites in southern Spain and Portugal. Its distribution is similar to that of ivory, so it is possible that amber from Sicily reached the Iberian Peninsula through contacts with North Africa. After a decline in the consumption and trade of amber at the beginning of the Bronze Age, around 2000 BC, the influence of Baltic amber gradually took the place of Sicilian amber throughout the Iberian Peninsula from around 1000 BC. The new evidence comes from various archaeological and geological locations on the Iberian Peninsula.
From at least the 16th century BC, amber was moved from Northern Europe to the Mediterranean area. The breast ornament of the Egyptian Pharaoh Tutankhamen contains large Baltic amber beads. Schliemann found Baltic amber beads at Mycenae, as shown by spectroscopic investigation. The quantity of amber in the Royal Tomb of Qatna, Syria, is unparalleled among known second millennium BC sites in the Levant and the Ancient Near East. Amber was sent from the North Sea to the Temple of Apollo at Delphi as an offering. From the Black Sea, trade could continue to Asia along the Silk Road, another ancient trade route.
In Roman times, a main route ran south from the Baltic coast (modern Lithuania), the entire north–south length of modern-day Poland (likely through the Iron Age settlement of Biskupin), through the land of the Boii (modern Czech Republic and Slovakia) to the head of the Adriatic Sea (Aquileia by the modern Gulf of Venice). Other commodities were exported to the Romans along with amber, such as animal fur and skin, honey, and wax, in exchange for Roman glass, brass, gold, and non-ferrous metals such as tin and copper imported into the early Baltic region. As this road was a lucrative trade route connecting the Baltic Sea to the Mediterranean Sea, Roman military fortifications were constructed along the route to protect merchants and traders from Germanic raids.
The Old Prussian towns of Kaup and Truso on the Baltic were the starting points of the route to the south. In Scandinavia the amber road probably gave rise to the thriving Nordic Bronze Age culture, bringing influences from the Mediterranean Sea to the northernmost countries of Europe.
Kaliningrad Oblast is occasionally referred to in Russian by a name meaning "the amber region" (see Kaliningrad Regional Amber Museum).
Known roads by country
Estonia
The old coastal Amber Road route follows the E67 highway south from Reiu in Häädemeeste Parish, Pärnumaa, where it continues as local road 331 between the villages of Rannametsa and Ikla.
Poland
The shortest (and possibly oldest) road avoids alpine areas and led from the Baltic coastline (nowadays Lithuania and Poland), through Biskupin, Milicz, Wrocław, the Kłodzko Valley (less often through the Moravian Gate), crossed the Danube near Carnuntum in the Noricum province, headed southwest past Poetovio, Celeia, Emona, Nauportus, and reached Patavium and Aquileia at the Adriatic coast. One of the oldest directions of the last stage of the Amber Road to the south of the Danube, noted in the myth about the Argonauts, used the rivers Sava and Kupa, ending with a short continental road from Nauportus to Tarsatica in Rijeka on the coast of the Adriatic.
Germany
Several roads connected the North Sea and Baltic Sea, especially the city of Hamburg to the Brenner Pass, proceeding southwards to Brindisi (nowadays Italy) and Ambracia (nowadays Greece).
Switzerland
The Swiss region indicates a number of alpine roads, concentrating around the capital city Bern and probably originating from the banks of the Rhône and Rhine.
The Netherlands
A small section, including Baarn, Barneveld, Amersfoort and Amerongen, connected the North Sea with the Lower Rhine.
Belgium
A small section led southwards from Antwerp and Bruges to the towns Braine-l'Alleud and Braine-le-Comte, both originally named "Brennia-Brenna". The route continued by following the Meuse towards Bern in Switzerland.
Southern France and Spain
Routes connected amber-finding locations at Ambares (near Bordeaux), leading to Béarn and the Pyrenees. Routes connecting the amber-finding locations in northern Spain and in the Pyrenees formed a trading route to the Mediterranean Sea.
Mongolia
Sources of archaeological finds suggest that routes may also have connected Mongolia to eastern Europe during the Kitan/Liao Period.
Modern usage
There is a tourist route stretching along the Baltic coast from Kaliningrad to Latvia called "Amber Road".
"Amber Road" sites are:
Mizgiris Amber Gallery-Museum in Nida;
Amber Bay in Juodkrantė;
Lithuania Minor History Museum;
Amber collection place in Karklė, Lithuania;
Palanga Amber Museum in Palanga;
Open amber workshop in Palanga;
Amber museum in Gdańsk; and
Samogitian Alka in Šventoji.
Amber deposit from Partynice: an amber deposit found in Wrocław, dating from the 1st century BC. It is the world's largest archaeological find of amber, estimated at 1,240–1,760 kg, and is now held in the Archaeological Museum in Wrocław.
In Poland, the north–south motorway A1 is officially named Amber Highway.
EV9 The Amber Route is a long-distance cycling route between Gdańsk, Poland and Pula, Croatia which follows the course of the Amber Road.
The modern Baltic–Adriatic Corridor connects the two seas along routes that roughly follow the Amber Road.
References
External links
OWTRAD-scientific description of the amber road in Poland
Old World Traditional Trade Routes (OWTRAD) Project
Joannes Richter – "Die Bernsteinroute bei Backnang" (pdf file)
Amber
History of Europe
Transport in Prussia
Prehistoric Lithuania
Prehistoric Poland
Trade routes
Southern France | Amber Road | [
"Physics"
] | 1,337 | [
"Amorphous solids",
"Unsolved problems in physics",
"Amber"
] |
2,027 | https://en.wikipedia.org/wiki/Andrew%20Wiles | Sir Andrew John Wiles (born 11 April 1953) is an English mathematician and a Royal Society Research Professor at the University of Oxford, specialising in number theory. He is best known for proving Fermat's Last Theorem, for which he was awarded the 2016 Abel Prize and the 2017 Copley Medal and for which he was appointed a Knight Commander of the Order of the British Empire in 2000. In 2018, Wiles was appointed the first Regius Professor of Mathematics at Oxford. Wiles is also a 1997 MacArthur Fellow.
Wiles was born in Cambridge to theologian Maurice Frank Wiles and Patricia Wiles. While spending much of his childhood in Nigeria, Wiles developed an interest in mathematics and in Fermat's Last Theorem in particular. After moving to Oxford and graduating from there in 1974, he worked on unifying Galois representations, elliptic curves and modular forms, starting with Barry Mazur's generalizations of Iwasawa theory. In the early 1980s, Wiles spent a few years at the University of Cambridge before moving to Princeton University, where he worked on expanding out and applying Hilbert modular forms. In 1986, upon reading Ken Ribet's seminal work on Fermat's Last Theorem, Wiles set out to prove the modularity theorem for semistable elliptic curves, which implied Fermat's Last Theorem. By 1993, he had been able to convince a knowledgeable colleague that he had a proof of Fermat's Last Theorem, though a flaw was subsequently discovered. After an insight on 19 September 1994, Wiles and his student Richard Taylor were able to circumvent the flaw, and published the results in 1995, to widespread acclaim.
In proving Fermat's Last Theorem, Wiles developed new tools for mathematicians to begin unifying disparate ideas and theorems. His former student Taylor along with three other mathematicians were able to prove the full modularity theorem by 2000, using Wiles' work. Upon receiving the Abel Prize in 2016, Wiles reflected on his legacy, expressing his belief that he did not just prove Fermat's Last Theorem, but pushed the whole of mathematics as a field towards the Langlands program of unifying number theory.
Education and early life
Wiles was born on 11 April 1953 in Cambridge, England, the son of Maurice Frank Wiles (1923–2005) and Patricia Wiles (née Mowll). From 1952 to 1955, his father worked as the chaplain at Ridley Hall, Cambridge, and later became the Regius Professor of Divinity at the University of Oxford.
Wiles began his formal schooling in Nigeria, while living there as a very young boy with his parents. However, according to letters written by his parents, for at least the first several months after he was supposed to be attending classes, he refused to go. From that fact, Wiles himself concluded that in his earliest years, he was not enthusiastic about spending time in academic institutions. In an interview with Nadia Hasnaoui in 2021, he said he trusted the letters, yet he could not remember a time when he did not enjoy solving mathematical problems.
Wiles attended King's College School, Cambridge, and The Leys School, Cambridge. Wiles told WGBH-TV in 1999 that he came across Fermat's Last Theorem on his way home from school when he was 10 years old. He stopped at his local library where he found a book The Last Problem, by Eric Temple Bell, about the theorem. Fascinated by the existence of a theorem that was so easy to state that he, a ten-year-old, could understand it, but that no one had proven, he decided to be the first person to prove it. However, he soon realised that his knowledge was too limited, so he abandoned his childhood dream until it was brought back to his attention at the age of 33 by Ken Ribet's 1986 proof of the epsilon conjecture, which Gerhard Frey had previously linked to Fermat's equation.
Early career
In 1974, Wiles earned his bachelor's degree in mathematics at Merton College, Oxford. Wiles's graduate research was guided by John Coates, beginning in the summer of 1975. Together they worked on the arithmetic of elliptic curves with complex multiplication by the methods of Iwasawa theory. He further worked with Barry Mazur on the main conjecture of Iwasawa theory over the rational numbers, and soon afterward, he generalised this result to totally real fields.
In 1980, Wiles earned a PhD while at Clare College, Cambridge. After a stay at the Institute for Advanced Study in Princeton, New Jersey, in 1981, Wiles became a Professor of Mathematics at Princeton University.
In 1985–86, Wiles was a Guggenheim Fellow at the Institut des Hautes Études Scientifiques near Paris and at the École Normale Supérieure.
In 1989, Wiles was elected to the Royal Society. At that point according to his election certificate, he had been working "on the construction of ℓ-adic representations attached to Hilbert modular forms, and has applied these to prove the 'main conjecture' for cyclotomic extensions of totally real fields".
Proof of Fermat's Last Theorem
From 1988 to 1990, Wiles was a Royal Society Research Professor at the University of Oxford, and then he returned to Princeton.
From 1994 to 2009, Wiles was a Eugene Higgins Professor at Princeton.
Starting in mid-1986, based on successive progress of the previous few years of Gerhard Frey, Jean-Pierre Serre and Ken Ribet, it became clear that Fermat's Last Theorem (the statement that no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2) could be proven as a corollary of a limited form of the modularity theorem (unproven at the time and then known as the "Taniyama–Shimura–Weil conjecture"). The modularity theorem involved elliptic curves, which was also Wiles's own specialist area, and stated that all such curves have a modular form associated with them. These curves can be thought of as mathematical objects resembling solutions for a torus' surface, and if Fermat's Last Theorem were false and solutions existed, "a peculiar curve would result". A proof of the theorem therefore would involve showing that such a curve would not exist.
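For illustration only, a tiny brute-force search shows the contrast between the n = 2 case, where solutions (the Pythagorean triples) abound, and n = 3, where none turn up. The search bounds are arbitrary, and of course no finite search bears on the actual proof:

```python
from itertools import combinations_with_replacement

def counterexamples(n: int, limit: int):
    """Search for positive a <= b <= limit with a**n + b**n equal to some c**n."""
    nth_powers = {c ** n: c for c in range(1, 2 * limit + 1)}  # covers all possible c
    hits = []
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        c = nth_powers.get(a ** n + b ** n)
        if c is not None:
            hits.append((a, b, c))
    return hits

print("n = 2:", counterexamples(2, 20)[:3])   # Pythagorean triples exist, e.g. (3, 4, 5)
print("n = 3:", counterexamples(3, 200))      # nothing turns up, as the theorem asserts
```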
The conjecture was seen by contemporary mathematicians as important, but extraordinarily difficult or perhaps impossible to prove. For example, Wiles's ex-supervisor John Coates stated that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible", adding that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]."
Despite this, Wiles, with his from-childhood fascination with Fermat's Last Theorem, decided to undertake the challenge of proving the conjecture, at least to the extent needed for Frey's curve. He dedicated all of his research time to this problem for over six years in near-total secrecy, covering up his efforts by releasing prior work in small segments as separate papers and confiding only in his wife.
Wiles' research involved creating a proof by contradiction of Fermat's Last Theorem, which Ribet in his 1986 work had found to have an elliptic curve and thus an associated modular form if true. Starting by assuming that the theorem was incorrect, Wiles then contradicted the Taniyama–Shimura–Weil conjecture as formulated under that assumption, with Ribet's theorem (which stated that if n were a prime number, no such elliptic curve could have a modular form, so no odd prime counterexample to Fermat's equation could exist). Wiles also proved that the conjecture applied to the special case known as the semistable elliptic curves to which Fermat's equation was tied. In other words, Wiles had found that the Taniyama–Shimura–Weil conjecture was true in the case of Fermat's equation, and Ribet's finding (that the conjecture holding for semistable elliptic curves could mean Fermat's Last Theorem is true) prevailed, thus proving Fermat's Last Theorem.
In June 1993, he presented his proof to the public for the first time at a conference in Cambridge. Gina Kolata of The New York Times summed up the presentation as follows:
In August 1993, it was discovered that the proof contained a flaw in several areas, related to properties of the Selmer group and use of a tool called an Euler system. Wiles tried and failed for over a year to repair his proof. According to Wiles, the crucial idea for circumventing—rather than closing—this area came to him on 19 September 1994, when he was on the verge of giving up. The circumvention used Galois representations to replace elliptic curves, reduced the problem to a class number formula and solved it, among other matters, all using Victor Kolyvagin's ideas as a basis for fixing Matthias Flach's approach with Iwasawa theory. Together with his former student Richard Taylor, Wiles published a second paper which contained the circumvention and thus completed the proof. Both papers were published in May 1995 in a dedicated issue of the Annals of Mathematics.
Later career
In 2011, Wiles rejoined the University of Oxford as Royal Society Research Professor.
In May 2018, Wiles was appointed Regius Professor of Mathematics at Oxford, the first in the university's history.
Legacy
Wiles' work has been used in many fields of mathematics. Notably, in 1999, three of his former students, Richard Taylor, Brian Conrad, and Fred Diamond, working with Christophe Breuil, built upon Wiles' proof to prove the full modularity theorem. Wiles's doctoral students have also included Manjul Bhargava (2014 winner of the Fields Medal), Ehud de Shalit, Ritabrata Munshi (winner of the SSB Prize and ICTP Ramanujan Prize), Karl Rubin (son of Vera Rubin), Christopher Skinner, and Vinayak Vatsal (2007 winner of the Coxeter–James Prize).
In 2016, upon receiving the Abel Prize, Wiles said about his proof of Fermat's Last Theorem, "The methods that solved it opened up a new way of attacking one of the big webs of conjectures of contemporary mathematics called the Langlands Program, which as a grand vision tries to unify different branches of mathematics. It’s given us a new way to look at that".
Awards and honours
Wiles's proof of Fermat's Last Theorem has stood up to the scrutiny of the world's other mathematical experts. Wiles was interviewed for an episode of the BBC documentary series Horizon about Fermat's Last Theorem. This was broadcast as an episode of the PBS science television series Nova with the title "The Proof". His work and life are also described in great detail in Simon Singh's popular book Fermat's Last Theorem.
In 1988, Wiles was awarded the Junior Whitehead Prize of the London Mathematical Society. In 1989, he was elected a Fellow of the Royal Society (FRS).
In 1994, Wiles was elected member of the American Academy of Arts and Sciences. Upon completing his proof of Fermat's Last Theorem in 1995, he was awarded the Schock Prize, Fermat Prize, and Wolf Prize in Mathematics that year. Wiles was elected a Foreign Associate of the National Academy of Sciences and won an NAS Award in Mathematics from the National Academy of Sciences, the Royal Medal, and the Ostrowski Prize in 1996. He won the American Mathematical Society's Cole Prize, a MacArthur Fellowship, and the Wolfskehl Prize in 1997, and was elected member of the American Philosophical Society that year.
In 1998, Wiles was awarded a silver plaque from the International Mathematical Union recognising his achievements, in place of the Fields Medal, which is restricted to those under the age of 40 (Wiles was 41 when he proved the theorem in 1994). That same year, he was awarded the King Faisal Prize; in 1999 he received the Clay Research Award, and the asteroid 9999 Wiles was named after him that year.
In 2000, he was appointed Knight Commander of the Order of the British Empire. In 2004, Wiles won the Premio Pitagora. In 2005, he won the Shaw Prize.
The building at the University of Oxford housing the Mathematical Institute was named after Wiles in 2016. Later that year he won the Abel Prize. In 2017, Wiles won the Copley Medal. In 2019, he won the De Morgan Medal.
See also
André Weil
References
External links
Profile from Oxford
Profile from Princeton
1953 births
Living people
20th-century English mathematicians
21st-century English mathematicians
Abel Prize laureates
Alumni of Clare College, Cambridge
Alumni of King's College, Cambridge
Alumni of Merton College, Oxford
Clay Research Award recipients
Fellows of Merton College, Oxford
Fellows of the Royal Society
Fermat's Last Theorem
Foreign associates of the National Academy of Sciences
Institute for Advanced Study visiting scholars
Knights Commander of the Order of the British Empire
MacArthur Fellows
Members of the American Philosophical Society
Members of the French Academy of Sciences
British number theorists
People educated at The Leys School
People from Cambridge
Princeton University faculty
Recipients of the Copley Medal
Regius Professors of Mathematics (University of Oxford)
Rolf Schock Prize laureates
Royal Medal winners
Trustees of the Institute for Advanced Study
Whitehead Prize winners
Wolf Prize in Mathematics laureates | Andrew Wiles | [
"Mathematics"
] | 2,816 | [
"Theorems in number theory",
"Fermat's Last Theorem"
] |
2,039 | https://en.wikipedia.org/wiki/Avionics | Avionics (a portmanteau of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.
History
The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics".
Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy. They required a two-seat aircraft with a second crewman who operated a telegraph key to spell out messages in Morse code. During World War I, AM voice two way radio sets were made possible in 1917 (see TM (triode)) by the development of the triode vacuum tube, which were simple enough that the pilot in a single seat aircraft could use it while flying.
Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology, particularly the magnetron vacuum tube, with its U.S. ally in the famous Tizard Mission significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics.
The civilian market has also seen a growth in the cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of controlling aircraft safely in these highly restrictive airspaces have been invented.
Modern avionics
Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. The Joint Planning and Development Office put forth a roadmap for avionics in six areas:
Published Routes and Procedures – Improved navigation and routing
Negotiated Trajectories – Adding data communications to create preferred routes dynamically
Delegated Separation – Enhanced situational awareness in the air and on the ground
Low Visibility/Ceiling Approach/Departure – Allowing operations with weather constraints with less ground infrastructure
Surface Operations – To increase safety in approach and departure
ATM Efficiencies – Improving the air traffic management (ATM) process
Market
The Aircraft Electronics Association reports $1.73 billion in avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America, and forward-fit represented 42.3% while 57.7% were retrofits, as the U.S. deadline of January 1, 2020 for mandatory ADS-B Out approached.
Aircraft avionics
The cockpit of an aircraft is a typical location for avionics equipment; in larger aircraft, avionics bays under the cockpit or in a movable nosecone serve the same purpose. The equipment includes control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28-volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 115 volts and 400 Hz. There are several major vendors of flight avionics, including The Boeing Company, Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo), Shadin Avionics, and Avidyne Corporation.
International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC.
Avionics Installation
Avionics installation is a critical aspect of modern aviation, ensuring that aircraft are equipped with the necessary electronic systems for safe and efficient operation. These systems encompass a wide range of functions, including communication, navigation, monitoring, flight control, and weather detection. Avionics installations are performed on all types of aircraft, from small general aviation planes to large commercial jets and military aircraft.
Installation Process
The installation of avionics requires a combination of technical expertise, precision, and adherence to stringent regulatory standards. The process typically involves:
Planning and Design: Before installation, the avionics shop works closely with the aircraft owner to determine the required systems based on the aircraft type, intended use, and regulatory requirements. Custom instrument panels are often designed to accommodate the new systems.
Wiring and Integration: Avionics systems are integrated into the aircraft’s electrical and control systems, with wiring often requiring laser marking for durability and identification. Shops use detailed schematics to ensure correct installation.
Testing and Calibration: After installation, each system must be thoroughly tested and calibrated to ensure proper function. This includes ground testing, flight testing, and system alignment with regulatory standards such as those set by the FAA.
Certification: Once the systems are installed and tested, the avionics shop completes the necessary certifications. In the U.S., this often involves compliance with FAA Part 91.411 and 91.413 for IFR (Instrument Flight Rules) operations, as well as RVSM (Reduced Vertical Separation Minimum) certification.
Regulatory Standards
Avionics installation is governed by strict regulatory frameworks to ensure the safety and reliability of aircraft systems. In the United States, the Federal Aviation Administration (FAA) sets the standards for avionics installations. These include guidelines for:
System Performance: Avionics systems must meet performance benchmarks as defined by the FAA, ensuring they function correctly in all phases of flight.
Certification: Shops performing installations must be FAA-certified, and their technicians often hold certifications such as the General Radiotelephone Operator License (GROL).
Inspections: Aircraft equipped with newly installed avionics systems must undergo rigorous inspections before being cleared for flight, including both ground and flight tests.
Advancements in Avionics Technology
The field of avionics has seen rapid technological advancements in recent years, leading to more integrated and automated systems. Key trends include:
Glass Cockpits: Traditional analog gauges are being replaced by fully integrated glass cockpit displays, providing pilots with a centralized view of all flight parameters.
NextGen Technologies: ADS-B and satellite-based navigation are part of the FAA’s NextGen initiative, aimed at modernizing air traffic control and improving the efficiency of the national airspace.
Autonomous Systems: Advances in artificial intelligence and machine learning are paving the way for more autonomous aircraft systems, enhancing safety and reducing pilot workload.
Communications
Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms.
The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication.
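For illustration, the two channel rasters can be enumerated directly from the band limits quoted above. The Python sketch below is illustrative only (the constant and function names are invented for this example), but running it shows why the 8.33 kHz raster roughly triples the number of assignable channels in the same band.
    AIRBAND_START_MHZ = 118.000   # band limits as quoted above
    AIRBAND_END_MHZ = 136.975
    def channels(spacing_khz):
        """Yield nominal channel centre frequencies in MHz for a given spacing."""
        step = spacing_khz / 1000.0              # convert kHz to MHz
        f = AIRBAND_START_MHZ
        while f <= AIRBAND_END_MHZ + 1e-9:       # small tolerance for float drift
            yield round(f, 5)
            f += step
    print(len(list(channels(25))), "channels on the 25 kHz raster")
    print(len(list(channels(25 / 3))), "channels on the 8.33 kHz raster (25/3 kHz exactly)")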
Navigation
Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation systems (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Some navigation systems, such as GPS, calculate the position automatically and display it to the flight crew on moving map displays. Older ground-based navigation systems such as VOR or LORAN require a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems perform this calculation automatically.
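As a minimal illustration of what "plotting the intersection of signals" involves, the Python sketch below intersects two VOR radials on a flat-earth approximation; real navigation computers use geodetic calculations, and the station positions, bearings, and function names here are invented for the example.
    import math
    def fix_from_radials(p1, brg1_deg, p2, brg2_deg):
        """Intersect two radials. p1 and p2 are (east_km, north_km) station positions;
        bearings are degrees clockwise from true north, measured station-to-aircraft."""
        d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
        d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
        # Solve p1 + t*d1 = p2 + s*d2 for t with Cramer's rule on a 2x2 system.
        det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
        if abs(det) < 1e-9:
            raise ValueError("radials are parallel; no unique fix")
        rx, ry = p2[0] - p1[0], p2[1] - p1[1]
        t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])
    # Station A at the origin, station B 100 km due east; radials 045 and 315.
    print(fix_from_radials((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))  # about (50, 50)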
Monitoring
The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode-ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. In the 1970s, the average aircraft had more than 100 cockpit instruments and controls.
Glass cockpits started to come into being with the Gulfstream G‑IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed.
Aircraft flight-control system
Aircraft have means of automatically controlling flight. Autopilot was first invented by Lawrence Sperry during World War I to fly bomber planes steady enough to hit accurate targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems in order to reduce pilot error and workload at landing or takeoff.
The first simple commercial auto-pilots were used to control heading and altitude and had limited authority on things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic) has increased safety. As with displays and instruments, critical devices that were electro-mechanical had a finite life. With safety critical systems, the software is very strictly tested.
Fuel Systems
Fuel Quantity Indication System (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers, and level sensors, the FQIS computer calculates the mass of fuel remaining on board (a simplified version of this calculation is sketched after the list below).
Fuel Control and Monitoring System (FCMS) reports fuel remaining on board in a similar manner, but, by controlling pumps and valves, also manages fuel transfers around various tanks. Its functions include:
Refuelling control, to upload a certain total mass of fuel and distribute it automatically.
Transfers during flight to the tanks that feed the engines, e.g. from fuselage to wing tanks.
Centre of gravity control, transferring fuel from the tail (trim) tanks forward to the wings as fuel is expended.
Maintaining fuel in the wing tips (to alleviate wing bending due to lift in flight) and transferring it to the main tanks after landing.
Controlling fuel jettison during an emergency to reduce the aircraft weight.
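A deliberately simplified sketch of the fuel-mass calculation mentioned above: real FQIS computers fuse many probes per tank and model tank geometry and aircraft attitude, so the Python below is illustrative only, and the density and expansion constants are rough textbook values rather than figures from any particular system.
    def fuel_mass_kg(volume_litres, density_kg_per_l_at_15c, fuel_temp_c,
                     expansion_per_c=0.00095):
        """Estimate fuel mass from gauged volume and temperature-corrected density."""
        # Density falls as the fuel warms; apply a simple linear correction.
        density = density_kg_per_l_at_15c * (1.0 - expansion_per_c * (fuel_temp_c - 15.0))
        return volume_litres * density
    # Example: 12,000 L of Jet A-1 (roughly 0.804 kg/L at 15 C) gauged at 25 C.
    print(round(fuel_mass_kg(12_000, 0.804, 25.0), 1), "kg")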
Collision-avoidance systems
To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution.
To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. A major weakness of GPWS is the lack of "look-ahead" information: the radar altimeter only measures the height above the terrain directly below the aircraft. To overcome this weakness, modern aircraft use a terrain awareness warning system (TAWS), which compares the projected flight path against a terrain database; a simplified look-ahead check is sketched below.
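The look-ahead idea can be reduced to a one-dimensional toy, as in the Python sketch below; this is not how any certified TAWS is implemented, and the thresholds, sample spacing, and names are invented for the example.
    def taws_caution(altitude_ft, vertical_speed_fpm, ground_speed_kt,
                     terrain_profile_ft, sample_spacing_nm=1.0,
                     min_clearance_ft=500, look_ahead_min=2.0):
        """Return True if projected terrain clearance falls below the minimum
        within the look-ahead time; terrain_profile_ft[i] is the terrain
        elevation i samples ahead along the track."""
        for i, terrain_ft in enumerate(terrain_profile_ft):
            minutes = (i * sample_spacing_nm) / ground_speed_kt * 60.0
            if minutes > look_ahead_min:
                break
            projected_alt_ft = altitude_ft + vertical_speed_fpm * minutes
            if projected_alt_ft - terrain_ft < min_clearance_ft:
                return True
        return False
    # Level flight at 3,000 ft and 300 kt towards rising terrain.
    print(taws_caution(3_000, 0, 300, [1_000, 1_800, 2_400, 2_900, 3_200]))  # True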
Flight recorders
Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident.
Weather systems
Weather systems such as weather radar (typically ARINC 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) and lightning activity (as sensed by a lightning detector) both indicate strong convective activity and the severe turbulence that accompanies it, and weather systems allow pilots to deviate around these areas.
Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation.
Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. In-plane weather avionics are especially popular in Africa, India, and other regions where air travel is a growing market but ground support is not as well developed.
Aircraft management systems
There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement.
The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners.
Mission or tactical avionics
Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is used for whatever tactical purpose is required. As with aircraft management, the bigger sensor platforms (like the E-3D, JSTARS, ASTOR, Nimrod MRA4, Merlin HM Mk 1) have mission-management computers.
Police and EMS aircraft also carry sophisticated tactical sensors.
Military communications
While aircraft communications provide the backbone for safe flight, tactical systems are designed to withstand the rigors of the battlefield. UHF, VHF tactical (30–88 MHz), and SatCom systems, combined with ECCM methods and cryptography, secure the communications. Data links such as Link 11, 16, 22, and BOWMAN, JTRS, and even TETRA provide the means of transmitting data such as images and targeting information.
Radar
Airborne radar was one of the first tactical sensors. Because altitude extends a radar's range, airborne radar technologies have received significant attention. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), weather radar (ARINC 708), and ground tracking/proximity radar.
The military uses radar in fast jets to help pilots fly at low levels. While the civil market has had weather radar for a while, there are strict rules about using it to navigate the aircraft.
Sonar
Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines.
Electro-optics
Electro-optic systems include devices such as the head-up display (HUD), forward looking infrared (FLIR), infrared search and track, and other passive infrared sensors. These are all used to provide imagery and information to the flight crew. This imagery is used for everything from search and rescue to navigational aids and target acquisition.
ESM/DAS
Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it.
Aircraft networks
The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include:
Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft
Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft
ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft (a sketch of its word format follows this list)
ARINC 664: See ADN above
ARINC 629: Commercial Aircraft (Boeing 777)
ARINC 708: Weather Radar for Commercial Aircraft
ARINC 717: Flight Data Recorder for Commercial Aircraft
ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350)
Commercial Standard Digital Bus
IEEE 1394b: Military Aircraft
MIL-STD-1553: Military Aircraft
MIL-STD-1760: Military Aircraft
TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace
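Because ARINC 429 is the most widely used of these buses, the Python sketch below shows how the fields of a single 32-bit ARINC 429 word are conventionally laid out (label in bits 1–8, SDI in 9–10, data in 11–29, sign/status matrix in 30–31, odd parity in bit 32). Interpretation of the data field depends on the label's equipment definition, which is not modelled here, and the example word is made up for illustration.
    def decode_arinc429(word):
        """Split a 32-bit ARINC 429 word into its conventional fields."""
        label = word & 0xFF                  # bits 1-8, label (customarily read in octal)
        sdi = (word >> 8) & 0x3              # bits 9-10, source/destination identifier
        data = (word >> 10) & 0x7FFFF        # bits 11-29, 19-bit data field
        ssm = (word >> 29) & 0x3             # bits 30-31, sign/status matrix
        parity_bit = (word >> 31) & 0x1      # bit 32, parity
        parity_ok = bin(word).count("1") % 2 == 1   # whole word should have odd parity
        return {"label_octal": oct(label), "sdi": sdi, "data": data,
                "ssm": ssm, "parity_bit": parity_bit, "parity_ok": parity_ok}
    print(decode_arinc429(0x800002C8))  # example word invented for illustration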
See also
Astrionics, similar, for spacecraft
Acronyms and abbreviations in avionics
Avionics software
Emergency locator beacon
Emergency position-indicating radiobeacon station
Integrated modular avionics
Notes
Further reading
Avionics: Development and Implementation by Cary R. Spitzer (Hardcover – December 15, 2006)
Principles of Avionics, 4th Edition by Albert Helfrick, Len Buckwalter, and Avionics Communications Inc. (Paperback – July 1, 2007)
Avionics Training: Systems, Installation, and Troubleshooting by Len Buckwalter (Paperback – June 30, 2005)
Avionics Made Simple, by Mouhamed Abdulla, Ph.D.; Jaroslav V. Svoboda, Ph.D. and Luis Rodrigues, Ph.D. (Coursepack – Dec. 2005).
External links
Avionics in Commercial Aircraft
Aircraft Electronics Association (AEA)
Pilot's Guide to Avionics
The Avionic Systems Standardisation Committee
Space Shuttle Avionics
Aviation Today Avionics magazine
RAES Avionics homepage
Aircraft instruments
Spacecraft components
Electronic engineering | Avionics | [
"Technology",
"Engineering"
] | 3,946 | [
"Computer engineering",
"Avionics",
"Measuring instruments",
"Electronic engineering",
"Aircraft instruments",
"Electrical engineering"
] |
2,042 | https://en.wikipedia.org/wiki/Alexander%20Grothendieck | Alexander Grothendieck, later Alexandre Grothendieck in French (28 March 1928 – 13 November 2014), was a German-born French mathematician who became the leading figure in the creation of modern algebraic geometry. His research extended the scope of the field and added elements of commutative algebra, homological algebra, sheaf theory, and category theory to its foundations, while his so-called "relative" perspective led to revolutionary advances in many areas of pure mathematics. He is considered by many to be the greatest mathematician of the twentieth century.
Grothendieck began his productive and public career as a mathematician in 1949. In 1958, he was appointed a research professor at the Institut des hautes études scientifiques (IHÉS) and remained there until 1970, when, driven by personal and political convictions, he left following a dispute over military funding. He received the Fields Medal in 1966 for advances in algebraic geometry, homological algebra, and K-theory. He later became professor at the University of Montpellier and, while still producing relevant mathematical work, he withdrew from the mathematical community and devoted himself to political and religious pursuits (first Buddhism and later, a more Catholic Christian vision). In 1991, he moved to the French village of Lasserre in the Pyrenees, where he lived in seclusion, still working on mathematics and his philosophical and religious thoughts until his death in 2014.
Life
Family and childhood
Grothendieck was born in Berlin to anarchist parents. His father, Alexander "Sascha" Schapiro (also known as Alexander Tanaroff), had Hasidic Jewish roots and had been imprisoned in Russia before moving to Germany in 1922, while his mother, Johanna "Hanka" Grothendieck, came from a Protestant German family in Hamburg and worked as a journalist. As teenagers, both of his parents had broken away from their early backgrounds. At the time of his birth, Grothendieck's mother was married to the journalist Johannes Raddatz and initially, his birth name was recorded as "Alexander Raddatz." That marriage was dissolved in 1929 and Schapiro acknowledged his paternity, but never married Hanka Grothendieck. Grothendieck had a maternal sibling, his half sister Maidi.
Grothendieck lived with his parents in Berlin until the end of 1933, when his father moved to Paris to evade Nazism. His mother followed soon thereafter. Grothendieck was left in the care of Wilhelm Heydorn, a Lutheran pastor and teacher in Hamburg. According to Winfried Scharlau, during this time, his parents took part in the Spanish Civil War as non-combatant auxiliaries. However, others state that Schapiro fought in the anarchist militia.
World War II
In May 1939, Grothendieck was put on a train in Hamburg for France. Shortly afterward his father was interned in Le Vernet. He and his mother were then interned in various camps from 1940 to 1942 as "undesirable dangerous foreigners." The first camp was the Rieucros Camp, where his mother contracted the tuberculosis that would eventually cause her death in 1957. While there, Grothendieck managed to attend the local school, at Mende. Once, he managed to escape from the camp, intending to assassinate Hitler. Later, his mother Hanka was transferred to the Gurs internment camp for the remainder of World War II. Grothendieck was permitted to live separated from his mother.
In the village of Le Chambon-sur-Lignon, he was sheltered and hidden in local boarding houses or pensions, although he occasionally had to seek refuge in the woods during Nazi raids, surviving at times without food or water for several days.
His father was arrested under the Vichy anti-Jewish legislation and sent to the Drancy internment camp, and was then handed over by the French Vichy government to the Germans, who murdered him at the Auschwitz concentration camp in 1942.
In Le Chambon, Grothendieck attended the Collège Cévenol (now known as the Le Collège-Lycée Cévenol International), a unique secondary school founded in 1938 by local Protestant pacifists and anti-war activists. Many of the refugee children hidden in Le Chambon attended Collège Cévenol, and it was at this school that Grothendieck apparently first became fascinated with mathematics.
In 1990, for risking their lives to rescue Jews, the entire village was recognized as "Righteous Among the Nations".
Studies and contact with research mathematics
After the war, the young Grothendieck studied mathematics in France, initially at the University of Montpellier where at first he did not perform well, failing such classes as astronomy. Working on his own, he rediscovered the Lebesgue measure. After three years of increasingly independent studies there, he went to continue his studies in Paris in 1948.
Initially, Grothendieck attended Henri Cartan's Seminar at the École Normale Supérieure, but he lacked the necessary background to follow the high-powered seminar. On the advice of Cartan and André Weil, he moved to the University of Nancy where two leading experts were working on Grothendieck's area of interest, topological vector spaces: Jean Dieudonné and Laurent Schwartz. The latter had recently won a Fields Medal. Dieudonné and Schwartz showed the new student their latest paper La dualité dans les espaces (F) et (LF); it ended with a list of 14 open questions, relevant for locally convex spaces. Grothendieck introduced new mathematical methods that enabled him to solve all of these problems within a few months.
In Nancy, he wrote his dissertation under those two professors on functional analysis, from 1950 to 1953. At this time he was a leading expert in the theory of topological vector spaces. In 1953 he moved to the University of São Paulo in Brazil, where he immigrated by means of a Nansen passport, given that he had refused to take French nationality (as that would have entailed military service against his convictions). He stayed in São Paulo (apart from a lengthy visit in France from October 1953 to March 1954) until the end of 1954. His published work from the time spent in Brazil is still in the theory of topological vector spaces; it is there that he completed his last major work on that topic (on "metric" theory of Banach spaces).
Grothendieck moved to Lawrence, Kansas at the beginning of 1955, and there he set his old subject aside in order to work in algebraic topology and homological algebra, and increasingly in algebraic geometry. It was in Lawrence that Grothendieck developed his theory of abelian categories and the reformulation of sheaf cohomology based on them, leading to the very influential "Tôhoku paper".
In 1957 he was invited to visit Harvard University by Oscar Zariski, but the offer fell through when he refused to sign a pledge promising not to work to overthrow the United States government—a refusal which, he was warned, threatened to land him in prison. The prospect of prison did not worry him, so long as he could have access to books.
Comparing Grothendieck during his Nancy years to the -trained students at that time (Pierre Samuel, Roger Godement, René Thom, Jacques Dixmier, Jean Cerf, Yvonne Bruhat, Jean-Pierre Serre, and Bernard Malgrange), Leila Schneps said:
His first works on topological vector spaces in 1953 have been successfully applied to physics and computer science, culminating in a relation between Grothendieck inequality and the Einstein–Podolsky–Rosen paradox in quantum physics.
IHÉS years
In 1958, Grothendieck was installed at the Institut des hautes études scientifiques (IHÉS), a new privately funded research institute that, in effect, had been created for Jean Dieudonné and Grothendieck. Grothendieck attracted attention by an intense and highly productive activity of seminars there (de facto working groups drafting into foundational work some of the ablest French and other mathematicians of the younger generation). Grothendieck practically ceased publication of papers through the conventional, learned journal route. However, he was able to play a dominant role in mathematics for approximately a decade, gathering a strong school.
Officially during this time, he had as students Michel Demazure (who worked on SGA3, on group schemes), Monique Hakim (relative schemes and classifying topos), Luc Illusie (cotangent complex), Michel Raynaud, Michèle Raynaud, Jean-Louis Verdier (co-founder of the derived category theory), and Pierre Deligne. Collaborators on the SGA projects also included Michael Artin (étale cohomology) and Nick Katz (monodromy theory and Lefschetz pencils). Jean Giraud worked out torsor theory extensions of nonabelian cohomology there as well. Many others such as David Mumford, Robin Hartshorne, Barry Mazur and C.P. Ramanujam were also involved.
"Golden Age"
Alexander Grothendieck's work during what is described as the "Golden Age" period at the IHÉS established several unifying themes in algebraic geometry, number theory, topology, category theory, and complex analysis. His first (pre-IHÉS) discovery in algebraic geometry was the Grothendieck–Hirzebruch–Riemann–Roch theorem, a generalisation of the Hirzebruch–Riemann–Roch theorem proved algebraically; in this context he also introduced K-theory. Then, following the programme he outlined in his talk at the 1958 International Congress of Mathematicians, he introduced the theory of schemes, developing it in detail in his Éléments de géométrie algébrique (EGA) and providing the new more flexible and general foundations for algebraic geometry that has been adopted in the field since that time. He went on to introduce the étale cohomology theory of schemes, providing the key tools for proving the Weil conjectures, as well as crystalline cohomology and algebraic de Rham cohomology to complement it. Closely linked to these cohomology theories, he originated topos theory as a generalisation of topology (relevant also in categorical logic). He also provided, by means of a categorical Galois theory, an algebraic definition of fundamental groups of schemes giving birth to the now famous étale fundamental group and he then conjectured the existence of a further generalization of it, which is now known as the fundamental group scheme. As a framework for his coherent duality theory, he also introduced derived categories, which were further developed by Verdier.
The results of his work on these and other topics were published in the EGA and in less polished form in the notes of the Séminaire de géométrie algébrique (SGA) that he directed at the IHÉS.
Political activism
Grothendieck's political views were radical and pacifistic. He strongly opposed both United States intervention in Vietnam and Soviet military expansionism. To protest against the Vietnam War, he gave lectures on category theory in the forests surrounding Hanoi while the city was being bombed. In 1966, he had declined to attend the International Congress of Mathematicians (ICM) in Moscow, where he was to receive the Fields Medal. He retired from scientific life around 1970 after he had found out that IHÉS was partly funded by the military. He returned to academia a few years later as a professor at the University of Montpellier.
While the issue of military funding was perhaps the most obvious explanation for Grothendieck's departure from the IHÉS, those who knew him say that the causes of the rupture ran more deeply. Pierre Cartier, a visiteur de longue durée ("long-term guest") at the IHÉS, wrote a piece about Grothendieck for a special volume published on the occasion of the IHÉS's fortieth anniversary. In that publication, Cartier notes that as the son of an antimilitary anarchist and one who grew up among the disenfranchised, Grothendieck always had a deep compassion for the poor and the downtrodden. As Cartier puts it, Grothendieck came to find Bures-sur-Yvette as "une cage dorée" ("a gilded cage"). While Grothendieck was at the IHÉS, opposition to the Vietnam War was heating up, and Cartier suggests that this also reinforced Grothendieck's distaste at having become a mandarin of the scientific world. In addition, after several years at the IHÉS, Grothendieck seemed to cast about for new intellectual interests. By the late 1960s, he had started to become interested in scientific areas outside mathematics. David Ruelle, a physicist who joined the IHÉS faculty in 1964, said that Grothendieck came to talk to him a few times about physics. Biology interested Grothendieck much more than physics, and he organized some seminars on biological topics.
In 1970, Grothendieck, with two other mathematicians, Claude Chevalley and Pierre Samuel, created a political group entitled Survivre—the name later changed to Survivre et vivre. The group published a bulletin and was dedicated to antimilitary and ecological issues. It also developed strong criticism of the indiscriminate use of science and technology. Grothendieck devoted the next three years to this group and served as the main editor of its bulletin.
Although Grothendieck continued with mathematical enquiries, his standard mathematical career mostly ended when he left the IHÉS. After leaving the IHÉS, Grothendieck became a temporary professor at Collège de France for two years. He then became a professor at the University of Montpellier, where he became increasingly estranged from the mathematical community. He formally retired in 1988, a few years after having accepted a research position at the CNRS.
Manuscripts written in the 1980s
While not publishing mathematical research in conventional ways during the 1980s, he produced several influential manuscripts with limited distribution, with both mathematical and biographical content.
Produced during 1980 and 1981, La Longue Marche à travers la théorie de Galois (The Long March Through Galois Theory) is a 1600-page handwritten manuscript containing many of the ideas that led to the Esquisse d'un programme. It also includes a study of Teichmüller theory.
In 1983, stimulated by correspondence with Ronald Brown and Tim Porter at Bangor University, Grothendieck wrote a 600-page manuscript entitled Pursuing Stacks. It began with a letter addressed to Daniel Quillen. This letter and successive parts were distributed from Bangor (see External links below). Within these, in an informal, diary-like manner, Grothendieck explained and developed his ideas on the relationship between algebraic homotopy theory and algebraic geometry and prospects for a noncommutative theory of stacks. The manuscript, which is being edited for publication by G. Maltsiniotis, later led to another of his monumental works, Les Dérivateurs. Written in 1991, this latter opus of approximately 2000 pages, further developed the homotopical ideas begun in Pursuing Stacks. Much of this work anticipated the subsequent development during the mid-1990s of the motivic homotopy theory of Fabien Morel and Vladimir Voevodsky.
In 1984, Grothendieck wrote the proposal Esquisse d'un Programme ("Sketch of a Programme") for a position at the Centre National de la Recherche Scientifique (CNRS). It describes new ideas for studying the moduli space of complex curves. Although Grothendieck never published his work in this area, the proposal inspired other mathematicians to work in the area by becoming the source of dessin d'enfant theory and anabelian geometry. Later, it was published in two volumes entitled Geometric Galois Actions (Cambridge University Press, 1997).
During this period, Grothendieck also gave his consent to publishing some of his drafts for EGA on Bertini-type theorems (EGA V, published in Ulam Quarterly in 1992–1993 and later made available on the Grothendieck Circle web site in 2004).
In the extensive autobiographical work, Récoltes et Semailles ('Harvests and Sowings', 1986), Grothendieck describes his approach to mathematics and his experiences in the mathematical community, a community that initially accepted him in an open and welcoming manner, but which he progressively perceived to be governed by competition and status. He complains about what he saw as the "burial" of his work and betrayal by his former students and colleagues after he had left the community. Récoltes et Semailles was finally published in 2022 by Gallimard and, thanks to French science historian Alain Herreman, is also available on the Internet. An English translation by Leila Schneps will be published by MIT Press in 2025. A partial English translation can be found on the Internet. A Japanese translation of the whole book in four volumes was completed by Tsuji Yuichi (1938–2002), a friend of Grothendieck from the Survivre period. The first three volumes (corresponding to Parts 0 to III of the book) were published between 1989 and 1993, while the fourth volume (Part IV) was completed and, although unpublished, circulates as a typed manuscript. Grothendieck helped with the translation and wrote a preface for it, in which he called Tsuji his "first true collaborator". Parts of Récoltes et Semailles have been translated into Spanish and into Russian; the Russian translation was published in Moscow.
In 1988, Grothendieck declined the Crafoord Prize with an open letter to the media. He wrote that he and other established mathematicians had no need for additional financial support and criticized what he saw as the declining ethics of the scientific community that was characterized by outright scientific theft that he believed had become commonplace and tolerated. The letter also expressed his belief that totally unforeseen events before the end of the century would lead to an unprecedented collapse of civilization. Grothendieck added however that his views were "in no way meant as a criticism of the Royal Academy's aims in the administration of its funds" and he added, "I regret the inconvenience that my refusal to accept the Crafoord prize may have caused you and the Royal Academy."
La Clef des Songes, a 315-page manuscript written in 1987, is Grothendieck's account of how his consideration of the source of dreams led him to conclude that a deity exists. As part of the notes to this manuscript, Grothendieck described the life and the work of 18 "mutants", people whom he admired as visionaries far ahead of their time and heralding a new age. The only mathematician on his list was Bernhard Riemann. Influenced by the Catholic mystic Marthe Robin who was claimed to have survived on the Holy Eucharist alone, Grothendieck almost starved himself to death in 1988. His growing preoccupation with spiritual matters was also evident in a letter entitled Lettre de la Bonne Nouvelle sent to 250 friends in January 1990. In it, he described his encounters with a deity and announced that a "New Age" would commence on 14 October 1996.
The Grothendieck Festschrift, published in 1990, was a three-volume collection of research papers to mark his sixtieth birthday in 1988.
More than 20,000 pages of Grothendieck's mathematical and other writings are held at the University of Montpellier and remain unpublished. They have been digitized for preservation and are freely available in open access through the Institut Montpelliérain Alexander Grothendieck portal.
Retirement into reclusion and death
In 1991, Grothendieck moved to a new address that he did not share with his previous contacts in the mathematical community. Very few people visited him afterward. Local villagers helped sustain him with a more varied diet after he tried to live on a staple of dandelion soup. At some point, Leila Schneps and Pierre Lochak located him, then carried on a brief correspondence. Thus they became among "the last members of the mathematical establishment to come into contact with him". After his death, it was revealed that he lived alone in a house in Lasserre, Ariège, a small village at the foot of the Pyrenees.
In January 2010, Grothendieck wrote the letter entitled "Déclaration d'intention de non-publication" to Luc Illusie, claiming that all materials published in his absence had been published without his permission. He asked that none of his work be reproduced in whole or in part and that copies of this work be removed from libraries. He characterized a website devoted to his work as "an abomination". His dictate may have been reversed in 2010.
In September 2014, almost totally deaf and blind, he asked a neighbour to buy him a revolver so he could kill himself. On 13 November 2014, aged 86, Grothendieck died in the hospital of Saint-Lizier or Saint-Girons, Ariège.
Citizenship
Grothendieck was born in Weimar Germany. In 1938, aged ten, he moved to France as a refugee. Records of his nationality were destroyed in the fall of Nazi Germany in 1945 and he did not apply for French citizenship after the war. Thus, he became a stateless person for at least the majority of his working life and he traveled on a Nansen passport. Part of his reluctance to hold French nationality is attributed to not wishing to serve in the French military, particularly due to the Algerian War (1954–62). He eventually applied for French citizenship in the early 1980s, after he was well past the age that would have required him to do military service.
Family
Grothendieck was very close to his mother, to whom he dedicated his dissertation. She died in 1957 from tuberculosis that she contracted in camps for displaced persons.
He had five children: a son with his landlady during his time in Nancy; three children, Johanna (1959), Alexander (1961), and Mathieu (1965) with his wife Mireille Dufour; and one child with Justine Skalba, with whom he lived in a commune in the early 1970s.
Mathematical work
Grothendieck's early mathematical work was in functional analysis. Between 1949 and 1953 he worked on his doctoral thesis in this subject at Nancy, supervised by Jean Dieudonné and Laurent Schwartz. His key contributions include topological tensor products of topological vector spaces, the theory of nuclear spaces as foundational for Schwartz distributions, and the application of Lp spaces in studying linear maps between topological vector spaces. In a few years, he had become a leading authority on this area of functional analysis—to the extent that Dieudonné compares his impact in this field to that of Banach.
It is, however, in algebraic geometry and related fields where Grothendieck did his most important and influential work. From approximately 1955 he started to work on sheaf theory and homological algebra, producing the influential "Tôhoku paper" (Sur quelques points d'algèbre homologique, published in the Tohoku Mathematical Journal in 1957) where he introduced abelian categories and applied their theory to show that sheaf cohomology may be defined as certain derived functors in this context.
Homological methods and sheaf theory had already been introduced in algebraic geometry by Jean-Pierre Serre and others, after sheaves had been defined by Jean Leray. Grothendieck took them to a higher level of abstraction and turned them into a key organising principle of his theory. He shifted attention from the study of individual varieties to his relative point of view (pairs of varieties related by a morphism), allowing a broad generalization of many classical theorems. The first major application was the relative version of Serre's theorem showing that the cohomology of a coherent sheaf on a complete variety is finite-dimensional; Grothendieck's theorem shows that the higher direct images of coherent sheaves under a proper map are coherent; this reduces to Serre's theorem over a one-point space.
In 1956, he applied the same thinking to the Riemann–Roch theorem, which recently had been generalized to any dimension by Hirzebruch. The Grothendieck–Riemann–Roch theorem was announced by Grothendieck at the initial Mathematische Arbeitstagung in Bonn, in 1957. It appeared in print in a paper written by Armand Borel with Serre. This result was his first work in algebraic geometry. Grothendieck went on to plan and execute a programme for rebuilding the foundations of algebraic geometry, which at the time were in a state of flux and under discussion in Claude Chevalley's seminar. He outlined his programme in his talk at the 1958 International Congress of Mathematicians.
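In one common modern formulation, the theorem states that for a proper morphism f : X → Y of smooth quasi-projective varieties and a coherent sheaf ℱ on X,
    \operatorname{ch}\bigl(f_{!}\mathcal{F}\bigr)\,\operatorname{td}(T_Y) \;=\; f_{*}\bigl(\operatorname{ch}(\mathcal{F})\,\operatorname{td}(T_X)\bigr) \quad \text{in } A(Y)\otimes\mathbb{Q},
where ch denotes the Chern character, td the Todd class, f_! the alternating sum of higher direct images, and f_* the proper pushforward on Chow groups; the classical Hirzebruch–Riemann–Roch theorem is recovered when Y is a point.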
His foundational work on algebraic geometry is at a higher level of abstraction than all prior versions. He adapted the use of non-closed generic points, which led to the theory of schemes. Grothendieck also pioneered the systematic use of nilpotents. As 'functions' these can take only the value 0, but they carry infinitesimal information, in purely algebraic settings. His theory of schemes has become established as the best universal foundation for this field, because of its expressiveness as well as its technical depth. In that setting one can use birational geometry, techniques from number theory, Galois theory, commutative algebra, and close analogues of the methods of algebraic topology, all in an integrated way.
Grothendieck is noted for his mastery of abstract approaches to mathematics and his perfectionism in matters of formulation and presentation. Relatively little of his work after 1960 was published by the conventional route of the learned journal, circulating initially in duplicated volumes of seminar notes; his influence was to a considerable extent personal. His influence spilled over into many other branches of mathematics, for example the contemporary theory of D-modules. Although lauded as "the Einstein of mathematics", his work also provoked adverse reactions, with many mathematicians seeking out more concrete areas and problems.
EGA, SGA, FGA
The bulk of Grothendieck's published work is collected in the monumental, yet incomplete, Éléments de géométrie algébrique (EGA) and Séminaire de géométrie algébrique (SGA). The collection Fondements de la Géometrie Algébrique (FGA), which gathers together talks given in the Séminaire Bourbaki, also contains important material.
Grothendieck's work includes the invention of the étale and l-adic cohomology theories, which explain an observation made by André Weil that argued for a connection between the topological characteristics of a variety and its diophantine (number theoretic) properties. For example, the number of solutions of an equation over a finite field reflects the topological nature of its solutions over the complex numbers. Weil had realized that to prove such a connection, one needed a new cohomology theory, but neither he nor any other expert saw how to accomplish this until such a theory was expressed by Grothendieck.
This program culminated in the proofs of the Weil conjectures, the last of which was settled by Grothendieck's student Pierre Deligne in the early 1970s after Grothendieck had largely withdrawn from mathematics.
Major mathematical contributions
In Grothendieck's retrospective Récoltes et Semailles, he identified twelve of his contributions that he believed qualified as "great ideas". In chronological order, they are:
Topological tensor products and nuclear spaces
"Continuous" and "discrete" duality (derived categories, "six operations")
Yoga of the Grothendieck–Riemann–Roch theorem (K-theory, relation with intersection theory)
Schemes
Topoi
Étale cohomology and l-adic cohomology
Motives and the motivic Galois group (Grothendieck ⊗-categories)
Crystals and crystalline cohomology, yoga of "de Rham coefficients", "Hodge coefficients"...
"Topological algebra": ∞-stacks, derivators; cohomological formalism of topoi as inspiration for a new homotopical algebra
Tame topology
Yoga of anabelian algebraic geometry, Galois–Teichmüller theory
"Schematic" or "arithmetic" point of view for regular polyhedra and regular configurations of all kinds
Here the term yoga denotes a kind of "meta-theory" that may be used heuristically; Michel Raynaud writes the other terms "Ariadne's thread" and "philosophy" as effective equivalents.
Grothendieck wrote that, of these themes, the largest in scope was topoi, as they synthesized algebraic geometry, topology, and arithmetic. The theme that had been most extensively developed was schemes, which were the framework "par excellence" for eight of the other themes (all but 1, 5, and 12). Grothendieck wrote that the first and last themes, topological tensor products and regular configurations, were of more modest size than the others. Topological tensor products had played the role of a tool rather than of a source of inspiration for further developments; but he expected that regular configurations could not be exhausted within the lifetime of a mathematician who devoted themselves to the subject. He believed that the deepest themes were motives, anabelian geometry, and Galois–Teichmüller theory.
Influence
Grothendieck is considered by many to be the greatest mathematician of the twentieth century. In an obituary David Mumford and John Tate wrote:
Although mathematics became more and more abstract and general throughout the 20th century, it was Alexander Grothendieck who was the greatest master of this trend. His unique skill was to eliminate all unnecessary hypotheses and burrow into an area so deeply that its inner patterns on the most abstract level revealed themselves–and then, like a magician, show how the solution of old problems fell out in straightforward ways now that their real nature had been revealed.
By the 1970s, Grothendieck's work was seen as influential not only in algebraic geometry and the allied fields of sheaf theory and homological algebra, but also in logic, in the field of categorical logic.
According to mathematician Ravi Vakil, "Whole fields of mathematics speak the language that he set up. We live in this big structure that he built. We take it for granted—the architect is gone". In the same article, Colin McLarty said, "Lots of people today live in Grothendieck's house, unaware that it's Grothendieck's house."
Geometry
Grothendieck approached algebraic geometry by clarifying the foundations of the field, and by developing mathematical tools intended to prove a number of notable conjectures. Algebraic geometry has traditionally meant the understanding of geometric objects, such as algebraic curves and surfaces, through the study of the algebraic equations for those objects. Properties of algebraic equations are in turn studied using the techniques of ring theory. In this approach, the properties of a geometric object are related to the properties of an associated ring. The space (e.g., real, complex, or projective) in which the object is defined, is extrinsic to the object, while the ring is intrinsic.
Grothendieck laid a new foundation for algebraic geometry by making intrinsic spaces ("spectra") and associated rings the primary objects of study. To that end, he developed the theory of schemes that informally can be thought of as topological spaces on which a commutative ring is associated to every open subset of the space. Schemes have become the basic objects of study for practitioners of modern algebraic geometry. Their use as a foundation allowed geometry to absorb technical advances from other fields.
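Concretely, the affine building block of this theory attaches to any commutative ring A its spectrum,
    \operatorname{Spec} A \;=\; \{\, \mathfrak{p} \subset A : \mathfrak{p} \text{ is a prime ideal} \,\}, \qquad \mathcal{O}_{\operatorname{Spec} A}\bigl(D(f)\bigr) \;=\; A_{f} \quad (f \in A),
where the basic open sets D(f) = { 𝔭 : f ∉ 𝔭 } generate the Zariski topology and A_f is the localization of A at f; a general scheme is then obtained by gluing such affine pieces along compatible open subsets.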
His generalization of the classical Riemann–Roch theorem related topological properties of complex algebraic curves to their algebraic structure and now bears his name, being called "the Grothendieck–Hirzebruch–Riemann–Roch theorem". The tools he developed to prove this theorem started the study of algebraic and topological K-theory, which explores the topological properties of objects by associating them with rings. After coming into direct contact with Grothendieck's ideas at the Bonn Arbeitstagung, Michael Atiyah and Friedrich Hirzebruch founded topological K-theory.
Cohomology theories
Grothendieck's construction of new cohomology theories, which use algebraic techniques to study topological objects, has influenced the development of algebraic number theory, algebraic topology, and representation theory. As part of this project, his creation of topos theory, a category-theoretic generalization of point-set topology, has influenced the fields of set theory and mathematical logic.
The Weil conjectures were formulated in the later 1940s as a set of mathematical problems in arithmetic geometry. They describe properties of analytic invariants, called local zeta functions, of the number of points on an algebraic curve or variety of higher dimension. Grothendieck's discovery of the ℓ-adic étale cohomology, the first example of a Weil cohomology theory, opened the way for a proof of the Weil conjectures, ultimately completed in the 1970s by his student Pierre Deligne. Grothendieck's large-scale approach has been called a "visionary program". The ℓ-adic cohomology then became a fundamental tool for number theorists, with applications to the Langlands program.
Grothendieck's conjectural theory of motives was intended to be the "ℓ-adic" theory but without the choice of "ℓ", a prime number. It did not provide the intended route to the Weil conjectures, but has been behind modern developments in algebraic K-theory, motivic homotopy theory, and motivic integration. This theory, Daniel Quillen's work, and Grothendieck's theory of Chern classes, are considered the background to the theory of algebraic cobordism, another algebraic analogue of topological ideas.
Category theory
Grothendieck's emphasis on the role of universal properties across varied mathematical structures brought category theory into the mainstream as an organizing principle for mathematics in general. Among its uses, category theory creates a common language for describing similar structures and techniques seen in many different mathematical systems. His notion of abelian category is now the basic object of study in homological algebra. The emergence of a separate mathematical discipline of category theory has been attributed to Grothendieck's influence, although unintentional.
In popular culture
Colonel Lágrimas (Colonel Tears in English), a novel by Puerto Rican–Costa Rican writer Carlos Fonseca, is about Grothendieck.
The Benjamín Labatut book When We Cease to Understand the World dedicates one chapter to the work and life of Grothendieck, introducing his story by reference to the Japanese mathematician Shinichi Mochizuki. The book is a lightly fictionalized account of the world of scientific inquiry and was a finalist for the National Book Award.
In Cormac McCarthy's The Passenger and its sequel Stella Maris, a main character is a student of Grothendieck's.
The "Istituto Grothendieck" has been created in his honor.
Publications
See also
∞-groupoid
λ-ring
AB5 category
Abelian category
Accessible category
Algebraic geometry
Algebraic stack
Barsotti–Tate group
Chern class
Descent (mathematics)
Dévissage
Dunford–Pettis property
Excellent ring
Formally smooth map
Fundamental group scheme
K-theory
Hilbert scheme
Homotopy hypothesis
List of things named after Alexander Grothendieck
Nakai conjecture
Nuclear operator
Nuclear space
Parafactorial local ring
Projective tensor product
Quasi-finite morphism
Quot scheme
Scheme (mathematics)
Section conjecture
Semistable abelian variety
Sheaf cohomology
Stack (mathematics)
Standard conjectures on algebraic cycles
Sketch of a program
Tannakian formalism
Theorem of absolute purity
Theorem on formal functions
Ultrabornological space
Weil conjectures
Vector bundles on algebraic curves
Zariski's main theorem
Notes
References
External links
Centre for Grothendieckian Studies (CSG) is a research centre of the Grothendieck Institute, with a dedicated mission to honour the memory of Alexander Grothendieck.
Séminaire Grothendieck is a peripatetic seminar on Grothendieck's views, not just on mathematics
Grothendieck Circle, collection of mathematical and biographical information, photos, links to his writings
The origins of 'Pursuing Stacks': This is an account of how 'Pursuing Stacks' was written in response to a correspondence in English with Ronnie Brown and Tim Porter at Bangor, which continued until 1991. See also Alexander Grothendieck: some recollections.
Récoltes et Semailles
"Récoltes et Semailles", Spanish translation
"La Clef des Songes", French originals and Spanish translations
English summary of "La Clef des Songes"
Video of a lecture with photos from Grothendieck's life, given by Winfried Scharlau at IHES in 2009
Can one explain schemes to biologists —biographical sketch of Grothendieck by David Mumford & John Tate
Archives Grothendieck
"Who Is Alexander Grothendieck?, Winfried Scharlau, Notices of the AMS 55(8), 2008.
"Alexander Grothendieck: A Country Known Only by Name, Pierre Cartier, Notices of the AMS 62(4), 2015.
Alexandre Grothendieck 1928–2014, Part 1, Notices of the AMS 63(3), 2016.
Les-archives-insaisissables-d-alexandre-grothendieck
Kutateladze S.S. Rebellious Genius: In Memory of Alexander Grothendieck
Alexandre-Grothendieck-une-mathematique-en-cathedrale-gothique
1928 births
2014 deaths
20th-century French mathematicians
Algebraic geometers
Algebraists
Emigrants from Nazi Germany to France
Fields Medalists
French pacifists
Functional analysts
German people of Russian-Jewish descent
Nancy-Université alumni
Nicolas Bourbaki
Operator theorists
Scientists from Berlin
Stateless people | Alexander Grothendieck | [
"Mathematics"
] | 8,107 | [
"Algebra",
"Algebraists"
] |
2,061 | https://en.wikipedia.org/wiki/Automatic%20number%20announcement%20circuit | An automatic number announcement circuit (ANAC) is a component of a central office of a telephone company that provides a service to installation and service technicians to determine the telephone number of a telephone line. The facility has a telephone number that may be called to listen to an automatic announcement that includes the caller's telephone number. The ANAC facility is useful primarily during the installation of landline telephones to quickly identify one of multiple wire pairs in a bundle or at a termination point.
Operation
By connecting a test telephone set, a technician calls the local telephone number of the automatic number announcement service. This call is connected to equipment at the central office that uses automatic equipment to announce the telephone number of the line calling in. The main purpose of this system is to allow telephone company technicians to identify the telephone line they are connected to.
Automatic number announcement systems are based on automatic number identification. Because they are intended for use by phone company technicians, ANAC systems bypass customer features such as unlisted numbers, caller ID blocking, and outgoing call blocking. Installers of multi-line business services, where outgoing calls from all lines display the company's main number on call display, can use ANAC to identify a specific line in the system, even if CID displays every line as "line one".
Most ANAC systems are provider-specific in each wire center, while others are regional or state-/province- or area-code-wide. No official lists of ANAC numbers are published, as telephone companies guard against abuse that would interfere with availability for installers.
Exchange prefixes for testing
The North American Numbering Plan reserves the exchange (central office) prefixes 958 and 959 for plant testing purposes. Code 959 with three or four additional digits is dedicated to accessing office test lines in local exchange carrier and interexchange carrier central offices. The specifications define several test features for line conditions, such as quiet line and busy line, and test tones transmitted to callers. Telephone numbers are assigned for ringback (to test the ringer when installing telephone sets), milliwatt tone (a number that simply answers with a continuous test tone), and loop around (which connects a call to another inbound call to the same or another test number).
ANAC services are typically installed in the 958 range, which is intended for communications between central offices. In some area codes, multiple additional prefixes may be reserved for test purposes. Many area codes reserved 999; 320 was also formerly reserved in Bell Canada territory.
Other carrier-specific North American test numbers include 555-XXXX numbers (such as 555–0311 on Rogers Communications in Canada) or vertical service codes, such as *99 on Cablevision/Optimum Voice in the United States. MCI Inc. (United States) provides ANI information by dialing 800-444-4444.
Telephone numbers
Because plant testing telephone numbers are carrier-specific, there is no comprehensive list of telephone numbers for ANAC services. In some communities, test numbers change relatively often. In others, a major incumbent carrier might assign a single number which provides test functions on its network across an entire numbering plan area, throughout an entire province or state, or system-wide.
Some telecommunication carriers maintain toll-free numbers for ANAC facilities. Some national toll-free numbers provide automatic number identification by speaking the telephone number of the caller, but these are not intended for use in identifying the customer's own phone number. They are used by agents in call centers to confirm the telephone number a customer is calling from, so that the customer's account information can be displayed as a "screen pop" for the next available customer service representative.
See also
Plant test number
Ringback number
References
Telephone numbers
Telephony signals | Automatic number announcement circuit | [
"Mathematics"
] | 762 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
2,080 | https://en.wikipedia.org/wiki/A%20Fire%20Upon%20the%20Deep | A Fire Upon the Deep is a 1992 science fiction novel by American writer Vernor Vinge. It is a space opera involving superhuman intelligences, aliens, variable physics, space battles, love, betrayal, genocide, and a communication medium resembling Usenet. A Fire Upon the Deep won the Hugo Award in 1993, sharing it with Doomsday Book by Connie Willis.
Besides the normal print book editions, the novel was also included on a CD-ROM sold by ClariNet Communications along with the other nominees for the 1993 Hugo awards. The CD-ROM edition included numerous annotations by Vinge on his thoughts and intentions about different parts of the book, and was later released as a standalone e-book. It has a loose prequel, A Deepness in the Sky, from 1999, and a direct sequel, The Children of the Sky, from 2012.
Setting
The novel is set in various locations in the Milky Way. The galaxy is divided into four concentric volumes called the "Zones of Thought"; it is not clear to the novel's characters whether this is a natural phenomenon or an artificially produced one, but it seems to roughly correspond with galactic-scale stellar density, and a Beyond region is mentioned in the Sculptor Galaxy as well. The Zones reflect fundamental differences in basic physical laws, and one of the main consequences is their effect on intelligence, both biological and artificial. Artificial intelligence and automation are most directly affected, in that advanced hardware and software from the Beyond or the Transcend will work less and less well as a ship "descends" towards the Unthinking Depths. But even biological intelligence is affected to a lesser degree. The four zones are spoken of in terms of "low" to "high" as follows:
The Unthinking Depths are the innermost zone, surrounding the Galactic Center. In it, only minimal forms of intelligence, biological or otherwise, are possible. This means that any ship straying into the Depths will be stranded, effectively permanently. Even if the crew did not die immediately—and some forms of life native to "higher" Zones would likely do so—they would be rendered incapable of even human intelligence, leaving them unable to operate their ship in any meaningful way.
Surrounding the Depths is the Slow Zone. The Earth (called "Old Earth") is located in this Zone, and humanity is said to have originated there, although Earth plays no significant role in the story. Biological intelligence is possible in "the Slowness", but not true, sentient, artificial intelligence. Automation is not intelligent enough to calculate the jumps required for faster-than-light travel in the Slow Zone, but a ship may escape by performing an immediate reverse jump back to where it arrived from if the Slowness is detected, and navigation systems watch for this and store the information required for a return to the start point during each jump. All ships which find themselves in the Slow Zone are restricted to sub-light speeds if an immediate reverse jump back out is impossible. Faster-than-light communication is impossible into or out of the Slow Zone. As the boundaries of the Zones are unknown and subject to change, accidental entry to the Slow Zone is a major interstellar navigational hazard at the "Bottom" of the Beyond. Starships which operate near the Beyond/Slow Zone border often have an auxiliary Bussard ramjet drive, so that, if they accidentally stray into the Slow Zone (thus disabling any FTL drive), they will at least have a backup (sub-light) drive to push them back "up" to the Beyond. Such ships also tend to include "coldsleep" equipment, as it is likely that any such return will still take many subjective lifetimes for most species.
The next outermost layer is the Beyond, within which artificial intelligence, faster-than-light travel, and faster-than-light communication are possible. A few human civilizations exist in the Beyond, all descended from a single ethnic Norwegian group which managed to travel from the Slow Zone to the Beyond (presumably at sub-light speeds) and thence spread using FTL travel. The original settlement of this group is known as Nyjora; other human settlements in the Beyond include Straumli Realm and Sjandra Kei. In the Beyond, FTL travel is accomplished by making many small "jumps" across intervening space, and the efficiency of the drive increases the farther a ship travels from the galactic core. This reflects increases in both drive efficiency and the ship's automation's increased capacity as one moves outward, enabling the computation of longer and longer jumps. The Beyond is not a homogeneous zone—many references are made to, e.g., the "High Beyond" or the "Bottom of the Beyond", depending on distance to the galactic core. These terms seem to refer to differences in the Zone itself, not just relative distance from the Core, but there are no obvious Zone boundaries within the Beyond the way there are between the Slow Zone and the Beyond, or between the Beyond and the Transcend. Whereas a ship that crosses from the Beyond to the Slow Zone or vice versa will experience a dramatic change in its capabilities, a ship in the Beyond which moves farther from the Core will experience a gradual increase in efficiency (assuming it has the technology to make use of it) until another major shift at the boundary to the Transcend. The Beyond is populated by a very large number of interstellar and intergalactic civilizations which are linked by a faster-than-light communication network, "the Net", sometimes cynically called the "Net of a Million Lies". The Net does connect with the Transcend, on the off-chance that one of the "Powers" that live there deigns to communicate, but has no connections with the Slow Zone, as FTL communication is impossible into or out of that Zone. In the novel, the Net is depicted as working much like the Usenet network in the early 1990s, with transcripts of messages containing header and footer information as one would find in such forums.
The outermost layer, containing the galactic halo, is the Transcend, within which incomprehensible, superintelligent beings dwell. When a "Beyonder" civilization reaches the point of technological singularity, it is said to "Transcend", becoming a "Power". Such Powers always seem to relocate to the Transcend, seemingly necessarily, where they become engaged in affairs which remain entirely mysterious to those that remain in the Beyond.
One of the characters in the book, Ravna, uses this analogy to explain the relation between the zones:
Plot
An expedition from Straumli Realm, a young human civilization in the high Beyond, investigates a newly discovered five-billion-year-old data archive in the low Transcend that offers the possibility of unimaginable riches. The expedition's facility, High Lab, is gradually and secretly compromised by an initially dormant superintelligence within the archive later known as the Blight. However, shortly before the Blight's final "flowering", two self-aware entities created similarly to the Blight plot to aid the humans before the Blight can gain its full powers.
Finally recognizing the danger, the researchers at High Lab attempt to flee in two ships, one carrying the adults and the second carrying the children in "coldsleep boxes". The Blight discovers that the first ship lists a data storage device in its cargo manifest; assuming it contains information that could harm it, the Blight destroys the ship. The second ship escapes.
The ship lands on a distant planet with a medieval-level civilization of dog-like creatures, dubbed "Tines", who live in packs as group minds. Upon landing, however, the two surviving adults, husband and wife, are ambushed and killed by Tine fanatics known as Flenserists, in whose realm they have landed. The Flenserists capture a young boy named Jefri Olsndot and his wounded sister, Johanna. Johanna is rescued by a Tine named Pilgrim who witnessed the ambush and taken to a neighboring kingdom ruled by a brilliant Tine named Woodcarver. Steel, the Flenserists' leader, tells Jefri that Johanna and their parents were killed by Woodcarver and exploits him in order to develop advanced technology (such as cannon and radio communication), while Johanna and the knowledge stored in her "dataset" device help Woodcarver rapidly develop as well. A highly placed Flenserist spy keeps Steel informed of Woodcarver's progress.
A distress signal from the sleeper ship eventually reaches "Relay", a major information/service provider for the galactic communications network. A benign transcendent being named "the Old One" contacts Relay, seeking information about the Blight and the humans who released it, and reconstitutes a human man named Pham Nuwen from the wreckage of a spaceship to act as its agent, using his doubt of his own memory's veracity to keep him under its control. Ravna Bergsndot, the only human Relay employee, traces the sleeper ship's signal to the Tines' world and persuades her employer to investigate what it took from High Lab, contracting the merchant vessel Out of Band II, owned by two sentient plant "Skroderiders", Blueshell and Greenstalk, to transport her and Pham there.
Before the mission is launched, the Blight launches a surprise attack on Relay and kills Old One. As Old One dies, it downloads what information it can into Pham to defeat the Blight, and Pham, Ravna and the Skroderiders barely escape Relay's destruction in the Out of Band II.
The Blight expands, taking over races and "rewriting" their people to become its agents, murdering several other Powers, and seizing other archives in the Beyond, looking for what was taken. It finally realizes where the danger truly lies and sends a hastily assembled fleet in pursuit.
The humans arrive at the Tines' homeworld first and ally with Woodcarver to defeat the Flenserists and rescue Jefri. Pham initiates Countermeasure, which extends the Slow Zone outward by thousands of light years, enveloping and killing the Blight at the cost of wrecking thousands of civilizations and causing trillions of deaths. The humans are stranded on the Tines world, now in the depths of the Slow Zone. Activating Countermeasure is fatal to Pham, but before he dies, the ghost of Old One within his mind reveals to him that, although his body is a reconstruction, his memories are real. (Vinge expands on Pham's backstory in A Deepness in the Sky.)
Intelligent species
Aprahanti
A race of humanoids with colorful butterfly-like wings who attempt to use the chaos wrought by the Blight to reestablish their waning hegemony. Despite their attractive, delicate appearance, the Aprahanti are an extremely fearsome and vicious species.
Blight
An ancient, malevolent super-intelligent entity which strives to constantly expand and can easily manipulate electronics and organic beings.
Dirokimes
An older race which originally inhabited Sjandra Kei before the arrival of humanity. They work with the humans.
Humans
All humans in the novel (except Pham) are descended from Nyjoran stock. Their ancestors were "Tuvo-Norsk" asteroid miners from Old Earth's solar system, which is noted as being on the other side of the galaxy in the Slow Zone. (Nyjora sounds similar to New Norwegian "New Earth".) One of the major human habitations is Sjandra Kei, three systems comprising roughly 28 billion individuals. Their main language is Samnorsk, the Norwegian term for a hypothetical unification of the Bokmål and Nynorsk forms of the language. (Vinge indicates in the book's dedication that several key ideas in it came to him while at a conference in Tromsø, Norway.)
Skroders/Riders/Skroderiders
A race of plantlike beings with fronds that serve as arms. The Riders have little native capacity for short-term memory. They are one of the longest-existing species; five billion years ago, someone gave them six-wheeled mechanical constructs ("skrodes") to move around and to provide short-term memory that made it easier for them to retain information well enough to become long-term memory in the vegetable "rider". It is later revealed that their "benefactor" is the Blight, and it is able to easily corrupt and remotely operate the Riders via their skrodes.
Tines
A race of group minds: each person is a "pack" of 4–8 doglike members, which communicate within the pack using very short-range ultrasonic waves from drumlike organs called "tympana".
Each "soul" can survive and evolve by adding members to replace those who die, potentially for hundreds of years, as Woodcarver does.
Related works
Vinge first used the concepts of "Zones of Thought" in the 1988 novella The Blabber, which occurs after Fire. Vinge's novel A Deepness in the Sky (1999) is a prequel to A Fire Upon the Deep, set 20,000 years earlier and featuring Pham Nuwen. Vinge's The Children of the Sky, a near-term sequel to A Fire Upon the Deep set ten years later, was released in October 2011.
Vinge's former wife, Joan D. Vinge, has also written stories in the Zones of Thought universe, based on his notes. These include "The Outcasts of Heaven Belt", "Legacy", and (as of 2008) a planned novel featuring Pham Nuwen.
Title
Vinge's original title for the novel was "Among the Tines"; its final title was suggested by his editors.
Awards and nominations
A Fire Upon the Deep shared the 1993 Hugo Award for Best Novel with Doomsday Book. The book was nominated for the Nebula Award for Best Novel of 1992, the 1993 John W. Campbell Memorial Award for Best Science Fiction Novel, and the 1993 Locus Award for Best Science Fiction Novel.
Critical reactions
Jo Walton wrote: "Any one of the ideas in A Fire Upon the Deep would have kept an ordinary writer going for years. For me it's the book that does everything right, the example of what science fiction does when it works. ... A Fire Upon the Deep remains a favourite and a delight to re-read, absorbing even when I know exactly what's coming."
References
External links
A Fire Upon the Deep at Worlds Without End
The book with Vinge's commentaries
1992 American novels
Hugo Award for Best Novel–winning works
Transhumanist books
Usenet
Novels by Vernor Vinge
1992 science fiction novels
Tor Books books
Fiction about artificial intelligence
Fiction about malware
Fiction about nanotechnology
Fiction about consciousness transfer
Apocalyptic fiction
Novels about technological singularity | A Fire Upon the Deep | [
"Materials_science"
] | 3,068 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |
2,082 | https://en.wikipedia.org/wiki/Aeronautics | Aeronautics is the science or art involved with the study, design, and manufacturing of air flight-capable machines, and the techniques of operating aircraft and rockets within the atmosphere.
While the term originally referred solely to operating the aircraft, it has since been expanded to include technology, business, and other aspects related to aircraft.
The term "aviation" is sometimes used interchangeably with aeronautics, although "aeronautics" includes lighter-than-air craft such as airships, and includes ballistic vehicles while "aviation" technically does not.
A significant part of aeronautical science is a branch of dynamics called aerodynamics, which deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft.
History
Early ideas
Attempts to fly without any real aeronautical understanding have been made from the earliest times, typically by constructing wings and jumping from a tower with crippling or lethal results.
Wiser investigators sought to gain some rational understanding through the study of bird flight. Medieval Islamic Golden Age scientists such as Abbas ibn Firnas also made such studies. The founders of modern aeronautics, Leonardo da Vinci in the Renaissance and Cayley in 1799, both began their investigations with studies of bird flight.
Man-carrying kites are believed to have been used extensively in ancient China. In 1282 the Italian explorer Marco Polo described the Chinese techniques then current. The Chinese also constructed small hot air balloons, or lanterns, and rotary-wing toys.
An early European to provide any scientific discussion of flight was Roger Bacon, who described principles of operation for the lighter-than-air balloon and the flapping-wing ornithopter, which he envisaged would be constructed in the future. The lifting medium for his balloon would be an "aether" whose composition he did not know.
In the late fifteenth century, Leonardo da Vinci followed up his study of birds with designs for some of the earliest flying machines, including the flapping-wing ornithopter and the rotating-wing helicopter. Although his designs were rational, they were not based on particularly good science. Many of his designs, such as a four-person screw-type helicopter, have severe flaws. He did at least understand that "An object offers as much resistance to the air as the air does to the object." (Newton would not publish the Third law of motion until 1687.) His analysis led to the realisation that manpower alone was not sufficient for sustained flight, and his later designs included a mechanical power source such as a spring. Da Vinci's work was lost after his death and did not reappear until it had been overtaken by the work of George Cayley.
Balloon flight
The modern era of lighter-than-air flight began early in the 17th century with Galileo's experiments in which he showed that air has weight. Around 1650 Cyrano de Bergerac wrote some fantasy novels in which he described the principle of ascent using a substance (dew) he supposed to be lighter than air, and descending by releasing a controlled amount of the substance. Francesco Lana de Terzi measured the pressure of air at sea level and in 1670 proposed the first scientifically credible lifting medium in the form of hollow metal spheres from which all the air had been pumped out. These would be lighter than the displaced air and able to lift an airship. His proposed methods of controlling height are still in use today; by carrying ballast which may be dropped overboard to gain height, and by venting the lifting containers to lose height. In practice de Terzi's spheres would have collapsed under air pressure, and further developments had to wait for more practicable lifting gases.
From the mid-18th century the Montgolfier brothers in France began experimenting with balloons. Their balloons were made of paper, and early experiments using steam as the lifting gas were short-lived due to its effect on the paper as it condensed. Mistaking smoke for a kind of steam, they began filling their balloons with hot smoky air which they called "electric smoke" and, despite not fully understanding the principles at work, made some successful launches and in 1783 were invited to give a demonstration to the French Académie des Sciences.
Meanwhile, the discovery of hydrogen led Joseph Black to propose its use as a lifting gas, though practical demonstration awaited a gas-tight balloon material. On hearing of the Montgolfier Brothers' invitation, the French Academy member Jacques Charles offered a similar demonstration of a hydrogen balloon. Charles and two craftsmen, the Robert brothers, developed a gas-tight material of rubberised silk for the envelope. The hydrogen gas was to be generated by chemical reaction during the filling process.
The Montgolfier designs had several shortcomings, not least the need for dry weather and a tendency for sparks from the fire to set light to the paper balloon. The manned design had a gallery around the base of the balloon rather than the hanging basket of the first, unmanned design, which brought the paper closer to the fire. On their free flight, De Rozier and d'Arlandes took buckets of water and sponges to douse these fires as they arose. On the other hand, the manned design of Charles was essentially modern. As a result of these exploits, the hot air balloon became known as the Montgolfière type and the gas balloon the Charlière.
Charles and the Robert brothers' next balloon, La Caroline, was a Charlière that followed Jean Baptiste Meusnier's proposals for an elongated dirigible balloon, and was notable for having an outer envelope with the gas contained in a second, inner ballonet. On 19 September 1784, it completed the first flight of over 100 km, between Paris and Beuvry, despite the man-powered propulsive devices proving useless.
In an attempt the next year to provide both endurance and controllability, de Rozier developed a balloon having both hot air and hydrogen gas bags, a design which was soon named after him as the Rozière. The principle was to use the hydrogen section for constant lift and to navigate vertically by heating and allowing to cool the hot air section, in order to catch the most favourable wind at whatever altitude it was blowing. The balloon envelope was made of goldbeater's skin. The first flight ended in disaster and the approach has seldom been used since.
Cayley and the foundation of modern aeronautics
Sir George Cayley (1773–1857) is widely acknowledged as the founder of modern aeronautics. He was first called the "father of the aeroplane" in 1846 and Henson called him the "father of aerial navigation." He was the first true scientific aerial investigator to publish his work, which included for the first time the underlying principles and forces of flight.
In 1809 he began the publication of a landmark three-part treatise titled "On Aerial Navigation" (1809–1810). In it he wrote the first scientific statement of the problem, "The whole problem is confined within these limits, viz. to make a surface support a given weight by the application of power to the resistance of air." He identified the four vector forces that influence an aircraft: thrust, lift, drag and weight and distinguished stability and control in his designs.
He developed the modern conventional form of the fixed-wing aeroplane having a stabilising tail with both horizontal and vertical surfaces, flying gliders both unmanned and manned.
He introduced the use of the whirling arm test rig to investigate the aerodynamics of flight, using it to discover the benefits of the curved or cambered aerofoil over the flat wing he had used for his first glider. He also identified and described the importance of dihedral, diagonal bracing and drag reduction, and contributed to the understanding and design of ornithopters and parachutes.
Another significant invention was the tension-spoked wheel, which he devised in order to create a light, strong wheel for aircraft undercarriage.
The 19th century: Otto Lilienthal and the first human flights
During the 19th century Cayley's ideas were refined, proved and expanded on, culminating in the works of Otto Lilienthal.
Lilienthal was a German engineer and businessman who became known as the "flying man". He was the first person to make well-documented, repeated, successful flights with gliders, therefore making the idea of "heavier than air" a reality. Newspapers and magazines published photographs of Lilienthal gliding, favourably influencing public and scientific opinion about the possibility of flying machines becoming practical.
His work led to the development of the concept of the modern wing. His flight attempts in Berlin in 1891 are seen as the beginning of human flight, and the "Lilienthal Normalsegelapparat" is considered to be the first aeroplane in series production, making the Maschinenfabrik Otto Lilienthal in Berlin the first aeroplane production company in the world.
Otto Lilienthal is often referred to as either the "father of aviation" or "father of flight".
Other important investigators included Horatio Phillips.
Branches
Aeronautics may be divided into three main branches, Aviation, Aeronautical science and Aeronautical engineering.
Aviation
Aviation is the art or practice of aeronautics. Historically aviation meant only heavier-than-air flight, but nowadays it includes flying in balloons and airships.
Aeronautical engineering
Aeronautical engineering covers the design and construction of aircraft, including how they are powered, how they are used and how they are controlled for safe operation.
A major part of aeronautical engineering is aerodynamics, the science of passing through the air.
With the increasing activity in space flight, aeronautics and astronautics are nowadays often combined as aerospace engineering.
Aerodynamics
The science of aerodynamics deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft.
The study of aerodynamics falls broadly into three areas:
Incompressible flow occurs where the air simply moves to avoid objects, typically at subsonic speeds below that of sound (Mach 1).
Compressible flow occurs where shock waves appear at points where the air becomes compressed, typically at speeds above Mach 1.
Transonic flow occurs in the intermediate speed range around Mach 1, where the airflow over an object may be locally subsonic at one point and locally supersonic at another.
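As a rough illustration only, the following short Python sketch classifies a flow regime from the free-stream Mach number. The cutoffs of about Mach 0.8 and Mach 1.2 for the transonic range are assumed rule-of-thumb values, since the text above only locates the transonic regime "around Mach 1"; real regime boundaries also depend on the geometry and flow conditions.

    def flow_regime(mach: float) -> str:
        # Illustrative thresholds only; the transonic band of roughly
        # Mach 0.8 to 1.2 is an assumption, not a sharp physical boundary.
        if mach < 0.8:
            return "incompressible (subsonic)"
        elif mach <= 1.2:
            return "transonic (mixed subsonic/supersonic)"
        else:
            return "compressible (supersonic, shock waves)"

    print(flow_regime(0.3))   # incompressible (subsonic)
    print(flow_regime(1.0))   # transonic (mixed subsonic/supersonic)
    print(flow_regime(2.5))   # compressible (supersonic, shock waves)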
Rocketry
A rocket or rocket vehicle is a missile, spacecraft, aircraft or other vehicle which obtains thrust from a rocket engine. In all rockets, the exhaust is formed entirely from propellants carried within the rocket before use. Rocket engines work by action and reaction. Rocket engines push rockets forwards simply by throwing their exhaust backwards extremely fast.
Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology of the Space Age, including setting foot on the Moon.
Rockets are used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While comparatively inefficient for low speed use, they are very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency.
Chemical rockets are the most common type of rocket and they typically create their exhaust by the combustion of rocket propellant. Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks.
See also
References
Citations
Sources
Wilson, E. B. (1920) Aeronautics: A Class Text, via Internet Archive
External links
Aeronautics
Aviation Terminology
Jeppesen The AVIATION DICTIONARY for pilots and aviation technicians
DTIC ADA032206: Chinese-English Aviation and Space Dictionary
Courses
Research
Articles containing video clips | Aeronautics | [
"Physics"
] | 2,378 | [
"Spacetime",
"Space",
"Aerospace"
] |
2,112 | https://en.wikipedia.org/wiki/Associative%20algebra | In mathematics, an associative algebra A over a commutative ring (often a field) K is a ring A together with a ring homomorphism from K into the center of A. This is thus an algebraic structure with an addition, a multiplication, and a scalar multiplication (the multiplication by the image of the ring homomorphism of an element of K). The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a module or vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over K. A standard first example of a K-algebra is a ring of square matrices over a commutative ring K, with the usual matrix multiplication.
A commutative algebra is an associative algebra for which the multiplication is commutative, or, equivalently, an associative algebra that is also a commutative ring.
In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital.
Every ring is an associative algebra over its center and over the integers.
Definition
Let R be a commutative ring (so R could be a field). An associative R-algebra A (or more simply, an R-algebra A) is a ring A
that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies
r · (xy) = (r · x)y = x(r · y)
for all r in R and x, y in the algebra. (This definition implies that the algebra, being a ring, is unital, since rings are supposed to have a multiplicative identity.)
Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is r · x = f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by f(r) = r · 1, where 1 is the multiplicative identity of A. (See also below).
Every ring is an associative Z-algebra, where Z denotes the ring of the integers.
A is an associative algebra that is also a commutative ring.
As a monoid object in the category of modules
The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules.
Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map A × A → A) corresponds to a unique R-linear map
m : A ⊗R A → A.
The associativity then refers to the identity
m ∘ (id ⊗ m) = m ∘ (m ⊗ id).
From ring homomorphisms
An associative algebra amounts to a ring homomorphism whose image lies in the center. Indeed, starting with a ring A and a ring homomorphism whose image lies in the center of A, we can make A an R-algebra by defining
for all and . If A is an R-algebra, taking , the same formula in turn defines a ring homomorphism whose image lies in the center.
If a ring is commutative then it equals its center, so that a commutative R-algebra can be defined simply as a commutative ring A together with a commutative ring homomorphism .
The ring homomorphism η appearing in the above is often called a structure map. In the commutative case, one can consider the category whose objects are ring homomorphisms R → A for a fixed R, i.e., commutative R-algebras, and whose morphisms are ring homomorphisms A → A′ that are under R, i.e., such that the composite R → A → A′ equals the structure map R → A′ (in other words, the coslice category of the category of commutative rings under R). The prime spectrum functor Spec then determines an anti-equivalence of this category to the category of affine schemes over Spec R.
How to weaken the commutativity assumption is a subject matter of noncommutative algebraic geometry and, more recently, of derived algebraic geometry. See also: Generic matrix ring.
Algebra homomorphisms
A homomorphism between two R-algebras is an R-linear ring homomorphism. Explicitly, φ : A1 → A2 is an associative algebra homomorphism if
φ(r · x) = r · φ(x),
φ(x + y) = φ(x) + φ(y),
φ(xy) = φ(x)φ(y),
φ(1) = 1,
for all r in R and all x, y in A1.
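For instance (an illustrative example in the commutative case, not tied to any particular construction above), evaluation of polynomials at a fixed element a of a commutative ring R is an R-algebra homomorphism,

    \mathrm{ev}_a : R[x] \to R, \qquad p(x) \mapsto p(a),

since it is R-linear, sends products to products, and sends 1 to 1.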
The class of all R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg.
The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings.
Examples
The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics.
Algebra
Any ring A can be considered as a Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore, rings and Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent.
Any ring of characteristic n is a (Z/nZ)-algebra in the same way.
Given an R-module M, the endomorphism ring of M, denoted EndR(M), is an R-algebra by defining (r · φ)(x) = r · φ(x).
Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely-generated, free R-module.
In particular, the square n-by-n matrices with entries from the field K form an associative algebra over K.
The complex numbers form a 2-dimensional commutative algebra over the real numbers.
The quaternions form a 4-dimensional associative algebra over the reals (but not an algebra over the complex numbers, since the complex numbers are not in the center of the quaternions).
Every polynomial ring R[x1, ..., xn] is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set {x1, ..., xn}.
The free R-algebra on a set E is an algebra of "polynomials" with coefficients in R and noncommuting indeterminates taken from the set E.
The tensor algebra of an R-module is naturally an associative R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure).
Given a module M over a commutative ring R, the direct sum of modules R ⊕ M has a structure of an R-algebra by thinking of M as consisting of infinitesimal elements; i.e., the multiplication is given as (a, x)(b, y) = (ab, ay + bx). The notion is sometimes called the algebra of dual numbers; a short computational sketch follows this list.
A quasi-free algebra, introduced by Cuntz and Quillen, is a sort of generalization of a free algebra and a semisimple algebra over an algebraically closed field.
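As an informal illustration of the dual-number multiplication mentioned above (the class and variable names here are ad hoc), the following Python sketch implements the product (a, x)(b, y) = (ab, ay + bx) in the case M = R, i.e., the dual numbers a + xε with ε² = 0.

    from dataclasses import dataclass

    @dataclass
    class Dual:
        a: float  # "finite" part
        x: float  # infinitesimal part (coefficient of ε, with ε² = 0)

        def __add__(self, other):
            return Dual(self.a + other.a, self.x + other.x)

        def __mul__(self, other):
            # (a + xε)(b + yε) = ab + (ay + bx)ε, since the ε² term vanishes.
            return Dual(self.a * other.a, self.a * other.x + other.a * self.x)

    # (1 + 2ε)(3 + 4ε) = 3 + 10ε
    print(Dual(1, 2) * Dual(3, 4))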
Representation theory
The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra.
If G is a group and R is a commutative ring, the set of all functions from G to R with finite support form an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups.
If G is an algebraic group (e.g., semisimple complex Lie group), then the coordinate ring of G is the Hopf algebra A corresponding to G. Many structures of G translate to those of A.
A quiver algebra (or a path algebra) of a directed graph is the free associative algebra over a field generated by the paths in the graph.
Analysis
Given any Banach space X, the continuous linear operators form an associative algebra (using composition of operators as multiplication); this is a Banach algebra.
Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex associative algebra; here the functions are added and multiplied pointwise.
The set of semimartingales defined on the filtered probability space forms a ring under stochastic integration.
The Weyl algebra
An Azumaya algebra
Geometry and combinatorics
The Clifford algebras, which are useful in geometry and physics.
Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics.
The partition algebra and its subalgebras, including the Brauer algebra and the Temperley-Lieb algebra.
A differential graded algebra is an associative algebra together with a grading and a differential. For example, the de Rham algebra Ω(M) = ⊕p Ωp(M), where Ωp(M) consists of differential p-forms on a manifold M, is a differential graded algebra.
Mathematical physics
A Poisson algebra is a commutative associative algebra over a field together with a structure of a Lie algebra so that the Lie bracket {,} satisfies the Leibniz rule; i.e., {x, yz} = {x, y}z + y{x, z}.
Given a Poisson algebra , consider the vector space of formal power series over . If has a structure of an associative algebra with multiplication such that, for ,
then is called a deformation quantization of .
A quantized enveloping algebra. The dual of such an algebra turns out to be an associative algebra (see ) and is, philosophically speaking, the (quantized) coordinate ring of a quantum group.
Gerstenhaber algebra
Constructions
Subalgebras A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, scalar multiplication, and it must contain the identity element of A.
Quotient algebras Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since . This gives the quotient ring the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra.
Direct products The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication.
Free products One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras.
Tensor products The tensor product of two R-algebras is also an R-algebra in a natural way (a small worked example appears after this list). See tensor product of algebras for more details. Given a commutative ring R and any ring A the tensor product R ⊗Z A can be given the structure of an R-algebra by defining r · (s ⊗ a) = rs ⊗ a. The functor which sends A to R ⊗Z A is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure). See also: Change of rings.
Free algebra A free algebra is an algebra generated by symbols. If one imposes commutativity; i.e., take the quotient by commutators, then one gets a polynomial algebra.
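As the small worked example of the tensor product construction referred to above (added here purely for illustration), the tensor square of the complex numbers as a real algebra splits into a product:

    \mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \;\cong\; \mathbb{C}[x]/(x^2 + 1) \;\cong\; \mathbb{C} \times \mathbb{C},

where the first isomorphism treats the left-hand factor as scalars and the second follows from factoring x² + 1 = (x − i)(x + i) over the complex numbers; in particular, a tensor product of fields need not be a field.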
Dual of an associative algebra
Let A be an associative algebra over a commutative ring R. Since A is in particular a module, we can take the dual module A* of A. A priori, the dual A* need not have a structure of an associative algebra. However, A may come with an extra structure (namely, that of a Hopf algebra) so that the dual is also an associative algebra.
For example, take A to be the ring of continuous functions on a compact group G. Then, not only is A an associative algebra, but it also comes with the co-multiplication Δ(f)(g, h) = f(gh) and co-unit ε(f) = f(e), where e is the identity element of G. The "co-" refers to the fact that they satisfy the dual of the usual multiplication and unit in the algebra axiom. Hence, the dual A* is an associative algebra. The co-multiplication and co-unit are also important in order to form a tensor product of representations of associative algebras (see below).
Enveloping algebra
Given an associative algebra A over a commutative ring R, the enveloping algebra Ae of A is the algebra A ⊗R Aop or Aop ⊗R A, depending on authors.
Note that a bimodule over A is exactly a left module over Ae.
Separable algebra
Let A be an algebra over a commutative ring R. Then the algebra A is a right module over with the action . Then, by definition, A is said to separable if the multiplication map splits as an Ae-linear map, where is an Ae-module by . Equivalently,
A is separable if it is a projective module over ; thus, the -projective dimension of A, sometimes called the bidimension of A, measures the failure of separability.
Finite-dimensional algebra
Let A be a finite-dimensional algebra over a field k. Then A is an Artinian ring.
Commutative case
As A is Artinian, if it is commutative, then it is a finite product of Artinian local rings whose residue fields are algebras over the base field k. Now, a reduced Artinian local ring is a field and thus the following are equivalent
A is separable.
A ⊗k K is reduced, where K is some algebraic closure of k.
A ⊗k K ≅ K^n for some n.
dimk A is the number of k-algebra homomorphisms A → K.
Let , the profinite group of finite Galois extensions of k. Then is an anti-equivalence of the category of finite-dimensional separable k-algebras to the category of finite sets with continuous -actions.
Noncommutative case
Since a simple Artinian ring is a (full) matrix ring over a division ring, if A is a simple algebra, then A is a (full) matrix algebra over a division algebra D over k; i.e., A = Mn(D) for some positive integer n. More generally, if A is a semisimple algebra, then it is a finite product of matrix algebras (over various division k-algebras), the fact known as the Artin–Wedderburn theorem.
The fact that A is Artinian simplifies the notion of a Jacobson radical; for an Artinian ring, the Jacobson radical of A is the intersection of all (two-sided) maximal ideals (in contrast, in general, a Jacobson radical is the intersection of all left maximal ideals or the intersection of all right maximal ideals.)
The Wedderburn principal theorem states: for a finite-dimensional algebra A with a nilpotent ideal I, if the projective dimension of A/I as a module over the enveloping algebra (A/I)e is at most one, then the natural surjection p : A → A/I splits; i.e., A contains a subalgebra B such that the restriction of p to B is an isomorphism B → A/I. Taking I to be the Jacobson radical, the theorem says in particular that the Jacobson radical is complemented by a semisimple algebra. The theorem is an analog of Levi's theorem for Lie algebras.
Lattices and orders
Let R be a Noetherian integral domain with field of fractions K (for example, they can be Z, Q). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, .
Let AK be a finite-dimensional K-algebra. An order in AK is an R-subalgebra that is a lattice. In general, there are a lot fewer orders than lattices; e.g., Z is a lattice in Q but not an order (since it is not an algebra).
A maximal order is an order that is maximal among all the orders.
Related concepts
Coalgebras
An associative algebra over K is given by a K-vector space A endowed with a bilinear map A × A → A having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism K → A identifying the scalar multiples of the multiplicative identity. If the bilinear map A × A → A is reinterpreted as a linear map A ⊗ A → A (i.e., a morphism in the category of K-vector spaces; this uses the universal property of the tensor product), then we can view an associative algebra over K as a K-vector space A endowed with two morphisms (one of the form A ⊗ A → A and one of the form K → A) satisfying certain conditions that boil down to the algebra axioms. These two morphisms can be dualized using categorial duality by reversing all arrows in the commutative diagrams that describe the algebra axioms; this defines the structure of a coalgebra.
There is also an abstract notion of F-coalgebra, where F is a functor. This is vaguely related to the notion of coalgebra discussed above.
Representations
A representation of an algebra A is an algebra homomorphism ρ : A → End(V) from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being an algebra homomorphism means that ρ preserves the multiplicative operation (that is, ρ(xy) = ρ(x)ρ(y) for all x and y in A), and that ρ sends the unit of A to the unit of End(V) (that is, to the identity endomorphism of V).
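A small illustrative example (added here; the notation T is ad hoc): a representation of the polynomial algebra K[x] on a vector space V is determined by a single linear operator, since an algebra homomorphism ρ : K[x] → End(V) is fixed by the choice of T = ρ(x),

    \rho\bigl(a_0 + a_1 x + \cdots + a_n x^n\bigr) \;=\; a_0\,\mathrm{id}_V + a_1 T + \cdots + a_n T^n .

Conversely, any choice of T in End(V) defines such a representation.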
If A and B are two algebras, and and are two representations, then there is a (canonical) representation of the tensor product algebra on the vector space . However, there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below.
Motivation for a Hopf algebra
Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : A → End(V ⊗ W) according to how it acts on the product vector space, so that
ρ(a)(v ⊗ w) = (σ(a)v) ⊗ (τ(a)w).
However, such a map would not be linear, since one would have
ρ(ka) = σ(ka) ⊗ τ(ka) = kσ(a) ⊗ kτ(a) = k²(σ(a) ⊗ τ(a)) = k²ρ(a)
for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ : A → A ⊗ A, and defining the tensor product representation as
ρ = (σ ⊗ τ) ∘ Δ.
Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups).
Motivation for a Lie algebra
One can try to be more clever in defining a tensor product. Consider, for example,
ρ(x) = σ(x) ⊗ idW + idV ⊗ τ(x),
so that the action on the tensor product space is given by
ρ(x)(v ⊗ w) = (σ(x)v) ⊗ w + v ⊗ (τ(x)w).
This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication:
ρ(xy) = σ(xy) ⊗ idW + idV ⊗ τ(xy) = σ(x)σ(y) ⊗ idW + idV ⊗ τ(x)τ(y).
But, in general, this does not equal
ρ(x)ρ(y) = σ(x)σ(y) ⊗ idW + σ(x) ⊗ τ(y) + σ(y) ⊗ τ(x) + idV ⊗ τ(x)τ(y).
This shows that this definition of a tensor product is too naive; the obvious fix is to define it such that it is antisymmetric, so that the middle two terms cancel. This leads to the concept of a Lie algebra.
Non-unital algebras
Some authors use the term "associative algebra" to refer to structures which do not necessarily have a multiplicative identity, and hence consider homomorphisms which are not necessarily unital.
One example of a non-unital associative algebra is given by the set of all functions f : R → R whose limit as x nears infinity is zero.
Another example is the vector space of continuous periodic functions, together with the convolution product.
See also
Abstract algebra
Algebraic structure
Algebra over a field
Sheaf of algebras, a sort of an algebra over a ringed space
Deligne's conjecture on Hochschild cohomology
Notes
Citations
References
James Byrnie Shaw (1907) A Synopsis of Linear Associative Algebra, link from Cornell University Historical Math Monographs.
Ross Street (1998) Quantum Groups: an entrée to modern algebra, an overview of index-free notation.
Algebras
Algebraic geometry | Associative algebra | [
"Mathematics"
] | 4,551 | [
"Mathematical structures",
"Algebras",
"Fields of abstract algebra",
"Algebraic structures",
"Algebraic geometry"
] |
2,113 | https://en.wikipedia.org/wiki/Axiom%20of%20regularity | In mathematics, the axiom of regularity (also known as the axiom of foundation) is an axiom of Zermelo–Fraenkel set theory that states that every non-empty set A contains an element that is disjoint from A. In first-order logic, the axiom reads:
The axiom of regularity together with the axiom of pairing implies that no set is an element of itself, and that there is no infinite sequence (aₙ) such that aᵢ₊₁ is an element of aᵢ for all i. With the axiom of dependent choice (which is a weakened form of the axiom of choice), this result can be reversed: if there are no such infinite sequences, then the axiom of regularity is true. Hence, in this context the axiom of regularity is equivalent to the sentence that there are no downward infinite membership chains.
The axiom was originally formulated by von Neumann; it was adopted in a formulation closer to the one found in contemporary textbooks by Zermelo. Virtually all results in the branches of mathematics based on set theory hold even in the absence of regularity. However, regularity makes some properties of ordinals easier to prove; and it not only allows induction to be done on well-ordered sets but also on proper classes that are well-founded relational structures such as the lexicographical ordering on
Given the other axioms of Zermelo–Fraenkel set theory, the axiom of regularity is equivalent to the axiom of induction. The axiom of induction tends to be used in place of the axiom of regularity in intuitionistic theories (ones that do not accept the law of the excluded middle), where the two axioms are not equivalent.
In addition to omitting the axiom of regularity, non-standard set theories have indeed postulated the existence of sets that are elements of themselves.
Elementary implications of regularity
No set is an element of itself
Let A be a set, and apply the axiom of regularity to {A}, which is a set by the axiom of pairing. We see that there must be an element of {A} which is disjoint from {A}. Since the only element of {A} is A, it must be that A is disjoint from {A}. So, since A ∈ {A}, we cannot have A ∈ A (by the definition of disjoint).
No infinite descending sequence of sets exists
Suppose, to the contrary, that there is a function, f, on the natural numbers with f(n+1) an element of f(n) for each n. Define S = {f(n): n a natural number}, the range of f, which can be seen to be a set from the axiom schema of replacement. Applying the axiom of regularity to S, let B be an element of S which is disjoint from S. By the definition of S, B must be f(k) for some natural number k. However, we are given that f(k) contains f(k+1) which is also an element of S. So f(k+1) is in the intersection of f(k) and S. This contradicts the fact that they are disjoint sets. Since our supposition led to a contradiction, there must not be any such function, f.
The nonexistence of a set containing itself can be seen as a special case where the sequence is infinite and constant.
Notice that this argument only applies to functions f that can be represented as sets as opposed to undefinable classes. The hereditarily finite sets, Vω, satisfy the axiom of regularity (and all other axioms of ZFC except the axiom of infinity). So if one forms a non-trivial ultrapower of Vω, then it will also satisfy the axiom of regularity. The resulting model will contain elements, called non-standard natural numbers, that satisfy the definition of natural numbers in that model but are not really natural numbers. They are "fake" natural numbers which are "larger" than any actual natural number. This model will contain infinite descending sequences of elements. For example, suppose n is a non-standard natural number, then (n − 1) ∈ n and (n − 2) ∈ (n − 1), and so on. For any actual natural number k, (n − k − 1) ∈ (n − k). This is an unending descending sequence of elements. But this sequence is not definable in the model and thus not a set. So no contradiction to regularity can be proved.
Simpler set-theoretic definition of the ordered pair
The axiom of regularity enables defining the ordered pair (a,b) as {a,{a,b}}; see ordered pair for specifics. This definition eliminates one pair of braces from the canonical Kuratowski definition (a,b) = {{a},{a,b}}.
Every set has an ordinal rank
This was actually the original form of the axiom in von Neumann's axiomatization.
Suppose x is any set. Let t be the transitive closure of {x}. Let u be the subset of t consisting of unranked sets. If u is empty, then x is ranked and we are done. Otherwise, apply the axiom of regularity to u to get an element w of u which is disjoint from u. Since w is in u, w is unranked. w is a subset of t by the definition of transitive closure. Since w is disjoint from u, every element of w is ranked. Applying the axioms of replacement and union to combine the ranks of the elements of w, we get an ordinal rank for w, to wit rank(w) = sup {rank(z) + 1 : z ∈ w}. This contradicts the conclusion that w is unranked. So the assumption that u was non-empty must be false and x must have rank.
For every two sets, only one can be an element of the other
Let X and Y be sets. Then apply the axiom of regularity to the set {X,Y} (which exists by the axiom of pairing). We see there must be an element of {X,Y} which is also disjoint from it. It must be either X or Y. By the definition of disjoint then, we must have either Y is not an element of X or vice versa.
The axiom of dependent choice and no infinite descending sequence of sets implies regularity
Let the non-empty set S be a counter-example to the axiom of regularity; that is, every element of S has a non-empty intersection with S. We define a binary relation R on S by aRb :⇔ b ∈ S ∩ a, which is entire by assumption. Thus, by the axiom of dependent choice, there is some sequence (aₙ) in S satisfying aₙRaₙ₊₁ for all n in N. As this is an infinite descending chain, we arrive at a contradiction and so, no such S exists.
Regularity and the rest of ZF(C) axioms
Regularity was shown to be relatively consistent with the rest of ZF by Skolem and von Neumann, meaning that if ZF without regularity is consistent, then ZF (with regularity) is also consistent.
The axiom of regularity was also shown to be independent from the other axioms of ZFC, assuming they are consistent. The result was announced by Paul Bernays in 1941, although he did not publish a proof until 1954. The proof involves (and led to the study of) Rieger-Bernays permutation models (or method), which were used for other proofs of independence for non-well-founded systems.
Regularity and Russell's paradox
Naive set theory (the axiom schema of unrestricted comprehension and the axiom of extensionality) is inconsistent due to Russell's paradox. In early formalizations of sets, mathematicians and logicians have avoided that contradiction by replacing the axiom schema of comprehension with the much weaker axiom schema of separation. However, this step alone takes one to theories of sets which are considered too weak. So some of the power of comprehension was added back via the other existence axioms of ZF set theory (pairing, union, powerset, replacement, and infinity) which may be regarded as special cases of comprehension. So far, these axioms do not seem to lead to any contradiction. Subsequently, the axiom of choice and the axiom of regularity were added to exclude models with some undesirable properties. These two axioms are known to be relatively consistent.
In the presence of the axiom schema of separation, Russell's paradox becomes a proof that there is no set of all sets. The axiom of regularity together with the axiom of pairing also prohibit such a universal set. However, Russell's paradox yields a proof that there is no "set of all sets" using the axiom schema of separation alone, without any additional axioms. In particular, ZF without the axiom of regularity already prohibits such a universal set.
If a theory is extended by adding an axiom or axioms, then any (possibly undesirable) consequences of the original theory remain consequences of the extended theory. In particular, if ZF without regularity is extended by adding regularity to get ZF, then any contradiction (such as Russell's paradox) which followed from the original theory would still follow in the extended theory.
The existence of Quine atoms (sets that satisfy the equation x = {x}, i.e. have themselves as their only elements) is consistent with the theory obtained by removing the axiom of regularity from ZFC. Various non-wellfounded set theories allow "safe" circular sets, such as Quine atoms, without becoming inconsistent by means of Russell's paradox.
Regularity, the cumulative hierarchy, and types
In ZF it can be proven that the class ⋃α Vα, where Vα ranges over the stages of the cumulative hierarchy indexed by the ordinals α, called the von Neumann universe, is equal to the class of all sets. This statement is even equivalent to the axiom of regularity (if we work in ZF with this axiom omitted). From any model which does not satisfy the axiom of regularity, a model which satisfies it can be constructed by taking only sets in ⋃α Vα.
Herbert Enderton wrote that "The idea of rank is a descendant of Russell's concept of type". Comparing ZF with type theory, Alasdair Urquhart wrote that "Zermelo's system has the notational advantage of not containing any explicitly typed variables, although in fact it can be seen as having an implicit type structure built into it, at least if the axiom of regularity is included."
Dana Scott went further and claimed that:
In the same paper, Scott shows that an axiomatic system based on the inherent properties of the cumulative hierarchy turns out to be equivalent to ZF, including regularity.
History
The concepts of well-foundedness and rank of a set were both introduced by Dmitry Mirimanoff. Mirimanoff called a set x "regular" if every descending chain x ∋ x1 ∋ x2 ∋ ... is finite. Mirimanoff however did not consider his notion of regularity (and well-foundedness) as an axiom to be observed by all sets; in later papers Mirimanoff also explored what are now called non-well-founded sets.
Skolem and von Neumann pointed out that non-well-founded sets are superfluous and in the same publication von Neumann gives an axiom which excludes some, but not all, non-well-founded sets. In a subsequent publication, von Neumann gave an equivalent but more complex version of the axiom of class foundation:
The contemporary and final form of the axiom is due to Zermelo.
Regularity in the presence of urelements
Urelements are objects that are not sets, but which can be elements of sets. In ZF set theory, there are no urelements, but in some other set theories such as ZFA, there are. In these theories, the axiom of regularity must be modified. The statement "x ≠ ∅" needs to be replaced with a statement that x is not empty and is not an urelement. One suitable replacement is ∃y (y ∈ x), which states that x is inhabited.
See also
Non-well-founded set theory
Scott's trick
Epsilon-induction
References
Sources
Reprinted in
Reprinted in From Frege to Gödel, van Heijenoort, 1967, in English translation by Stefan Bauer-Mengelberg, pp. 291–301.
Translation in
Translation in
External links
Inhabited set and the axiom of foundation on nLab
Axioms of set theory
Wellfoundedness | Axiom of regularity | [
"Mathematics"
] | 2,643 | [
"Wellfoundedness",
"Mathematical axioms",
"Axioms of set theory",
"Order theory",
"Mathematical induction"
] |
2,115 | https://en.wikipedia.org/wiki/AppleTalk | AppleTalk is a discontinued proprietary suite of networking protocols developed by Apple Computer for their Macintosh computers. AppleTalk includes a number of features that allow local area networks to be connected with no prior setup or the need for a centralized router or server of any sort. Connected AppleTalk-equipped systems automatically assign addresses, update the distributed namespace, and configure any required inter-networking routing.
AppleTalk was released in 1985 and was the primary protocol used by Apple devices through the 1980s and 1990s. Versions were also released for the IBM PC and compatibles and the Apple IIGS. AppleTalk support was also available in most networked printers (especially laser printers), some file servers, and a number of routers.
The rise of TCP/IP during the 1990s led to a reimplementation of most of these types of support on that protocol, and AppleTalk became unsupported as of the release of Mac OS X v10.6 in 2009. Many of AppleTalk's more advanced autoconfiguration features have since been introduced in Bonjour, while Universal Plug and Play serves similar needs.
History
AppleNet
After the release of the Apple Lisa computer in January 1983, Apple invested considerable effort in the development of a local area networking (LAN) system for the machines. Known as AppleNet, it was based on the seminal Xerox XNS protocol stack but running on a custom 1 Mbit/s coaxial cable system rather than Xerox's 2.94 Mbit/s Ethernet. AppleNet was announced early in 1983 with a full introduction at the target price of $500 for plug-in AppleNet cards for the Lisa and the Apple II.
At that time, early LAN systems were just coming to market, including Ethernet, Token Ring, Econet, and ARCNET. This was a topic of major commercial effort at the time, dominating shows like the National Computer Conference (NCC) in Anaheim in May 1983. All of the systems were jockeying for position in the market, but even at this time, Ethernet's widespread acceptance suggested it was to become a de facto standard. It was at this show that Steve Jobs asked Gursharan Sidhu a seemingly innocuous question: "Why has networking not caught on?"
Four months later, in October, AppleNet was cancelled. At the time, they announced that "Apple realized that it's not in the business to create a networking system. We built and used AppleNet in-house, but we realized that if we had shipped it, we would have seen new standards coming up." In January, Jobs announced that they would instead be supporting IBM's Token Ring, which he expected to come out in a "few months".
AppleBus
Through this period, Apple was deep in development of the Macintosh computer. During development, engineers had made the decision to use the Zilog 8530 serial controller chip (SCC) instead of the lower-cost and more common UART to provide serial port connections. The SCC cost about $5 more than a UART, but offered much higher speeds of up to 250 kilobits per second (or higher with additional hardware) and internally supported a number of basic networking-like protocols like IBM's Bisync.
The SCC was chosen because it would allow multiple devices to be attached to the port. Peripherals equipped with similar SCCs could communicate using the built-in protocols, interleaving their data with other peripherals on the same bus. This would eliminate the need for more ports on the back of the machine, and allowed for the elimination of expansion slots for supporting more complex devices. The initial concept was known as AppleBus, envisioning a system controlled by the host Macintosh polling "dumb" devices in a fashion similar to the modern Universal Serial Bus.
AppleBus networking
The Macintosh team had already begun work on what would become the LaserWriter and had considered a number of other options to answer the question of how to share these expensive machines and other resources. A series of memos from Bob Belleville clarified these concepts, outlining the Mac, LaserWriter, and a file server system which would become the Macintosh Office. By late 1983 it was clear that IBM's Token Ring would not be ready in time for the launch of the Mac, and might miss the launch of these other products as well. In the end, Token Ring would not ship until October 1985.
Jobs' earlier question to Sidhu had already sparked a number of ideas. When AppleNet was cancelled in October, Sidhu led an effort to develop a new networking system based on the AppleBus hardware. This new system would not have to conform to any existing preconceptions, and was designed to be worthy of the Mac – a system that was user-installable and required no configuration or fixed network addresses – in short, a true plug-and-play network. Considerable effort was needed, but by the time the Mac was released, the basic concepts had been outlined, and some of the low-level protocols were on their way to completion. Sidhu mentioned the work to Belleville only two hours after the Mac was announced.
The "new" AppleBus was announced in early 1984, allowing direct connection from the Mac or Lisa through a small box that is plugged into the serial port and connected via cables to the next computer upstream and downstream. Adaptors for Apple II and Apple III were also announced. Apple also announced that an AppleBus network could be attached to, and would appear to be a single node within, a Token Ring system. Details of how this would work were sketchy.
AppleTalk Personal Network
Just prior to its release in early 1985, AppleBus was renamed AppleTalk. Initially marketed as AppleTalk Personal Network, it comprised a family of network protocols and a physical layer.
The physical layer had a number of limitations, including a speed of only 230.4 kbit/s, a restricted maximum cable distance from end to end, and only 32 nodes per LAN. But as the basic hardware was built into the Mac, adding nodes only cost about $50 for the adaptor box. In comparison, Ethernet or Token Ring cards cost hundreds or thousands of dollars. Additionally, the entire networking stack required only about 6 kB of RAM, allowing it to run on any Mac.
The relatively slow speed of AppleTalk allowed further reductions in cost. Instead of using RS-422's balanced transmit and receive circuits, the AppleTalk cabling used a single common electrical ground, which limited speeds to about 500 kbit/s, but allowed one conductor to be removed. This meant that common three-conductor cables could be used for wiring. Additionally, the adaptors were designed to be "self-terminating", meaning that nodes at the end of the network could simply leave their last connector unconnected. There was no need for the wires to be connected back together into a loop, nor the need for hubs or other devices.
The system was designed for future expansion; the addressing system allowed for expansion to 255 nodes in a LAN (although only 32 could be used at that time), and by using "bridges" (which came to be known as "routers", although technically not the same) one could interconnect LANs into larger collections. "Zones" allowed devices to be addressed within a bridge-connected internet. Additionally, AppleTalk was designed from the start to allow use with any potential underlying physical link, and within a few years, the physical layer would be renamed LocalTalk, so as to differentiate it from the AppleTalk protocols.
The main advantage of AppleTalk was that it was completely maintenance-free. To join a device to a network, a user simply plugged the adaptor into the machine, then connected a cable from it to any free port on any other adaptor. The AppleTalk network stack negotiated a network address, assigned the computer a human-readable name, and compiled a list of the names and types of other machines on the network so the user could browse the devices through the Chooser. AppleTalk was so easy to use that ad hoc networks tended to appear whenever multiple Macs were in the same room. Apple would later use this in an advertisement showing a network being created between two seats in an airplane.
PhoneNet and other adaptors
A thriving third-party market for AppleTalk devices developed over the next few years. One particularly notable example was an alternate adaptor designed by BMUG and commercialised by Farallon as PhoneNET in 1987. This was essentially a replacement for Apple's connector that had conventional phone jacks instead of Apple's round connectors. PhoneNet allowed AppleTalk networks to be connected together using normal telephone wires, and with very little extra work, could run analog phones and AppleTalk on a single four-conductor phone cable.
Other companies took advantage of the SCC's ability to read external clocks in order to support higher transmission speeds, up to 1 Mbit/s. In these systems, the external adaptor also included its own clock, and used that to signal the SCC's clock input pins. The best-known such system was Centram's FlashTalk, which ran at 768 kbit/s, and was intended to be used with their TOPS networking system. A similar solution was the 850 kbit/s DaynaTalk, which used a separate box that plugged in between the computer and a normal LocalTalk/PhoneNet box. Dayna also offered a PC expansion card that ran up to 1.7 Mbit/s when talking to other Dayna PC cards. Several other systems also existed with even higher performance, but these often required special cabling that was incompatible with LocalTalk/PhoneNet, and also required patches to the networking stack that often caused problems.
AppleTalk over Ethernet
As Apple expanded into more commercial and education markets, they needed to integrate AppleTalk into existing network installations. Many of these organisations had already invested in a very expensive Ethernet infrastructure and there was no direct way to connect a Macintosh to Ethernet. AppleTalk included a protocol structure for interconnecting AppleTalk subnets and so as a solution, EtherTalk was initially created to use the Ethernet as a backbone between LocalTalk subnets. To accomplish this, organizations would need to purchase a LocalTalk-to-Ethernet bridge and Apple left it to third parties to produce these products. A number of companies responded, including Hayes and a few newly formed companies like Kinetics.
LocalTalk, EtherTalk, TokenTalk, and AppleShare
By 1987, Ethernet was clearly winning the standards battle over Token Ring, and in the middle of that year, Apple introduced EtherTalk 1.0, an implementation of the AppleTalk protocol over the Ethernet physical layer. Introduced for the newly released Macintosh II computer, one of Apple's first two Macintoshes with expansion slots (the Macintosh SE had one slot of a different type), the operating system included a new Network control panel that allowed the user to select which physical connection to use for networking (from "Built-in" or "EtherTalk"). At introduction, Ethernet interface cards were available from 3Com and Kinetics that plugged into a Nubus slot in the machine. The new networking stack also expanded the system to allow a full 255 nodes per LAN. With EtherTalk's release, AppleTalk Personal Network was renamed LocalTalk, the name it would be known under for the bulk of its life. Token Ring would later be supported with a similar TokenTalk product, which used the same Network control panel and underlying software. Over time, many third-party companies would introduce compatible Ethernet and Token Ring cards that used these same drivers.
The appearance of a Macintosh with a direct Ethernet connection also magnified the Ethernet and LocalTalk compatibility problem: Networks with new and old Macs needed some way to communicate with each other. This could be as simple as a network of Ethernet Mac II's trying to talk to a LaserWriter that only connected to LocalTalk. Apple initially relied on the aforementioned LocalTalk-to-Ethernet bridge products, but contrary to Apple's belief that these would be low-volume products, by the end of 1987, 130,000 such networks were in use. AppleTalk was at that time the most used networking system in the world, with over three times the installations of any other vendor.
1987 also marked the introduction of the AppleShare product, a dedicated file server that ran on any Mac with 512 kB of RAM or more. A common AppleShare machine was the Mac Plus with an external SCSI hard drive. AppleShare was the #3 network operating system in the late 1980s, behind Novell NetWare and Microsoft's MS-Net. AppleShare was effectively the replacement for the failed Macintosh Office efforts, which had been based on a dedicated file server device.
AppleTalk Phase II and other developments
A significant re-design was released in 1989 as AppleTalk Phase II. In many ways, Phase II can be considered an effort to make the earlier version (never called Phase I) more generic. LANs could now support more than 255 nodes, and zones were no longer associated with physical networks but were entirely virtual constructs used simply to organize nodes. For instance, one could now make a "Printers" zone that would list all the printers in an organization, or one might want to place that same device in the "2nd Floor" zone to indicate its physical location. Phase II also included changes to the underlying inter-networking protocols to make them less "chatty", which had previously been a serious problem on networks that bridged over wide-area networks.
By this point, Apple had a wide variety of communications products under development, and many of these were announced along with AppleTalk Phase II. These included updates to EtherTalk and TokenTalk, AppleTalk software and LocalTalk hardware for the IBM PC, EtherTalk for Apple's A/UX operating system allowing it to use LaserWriters and other network resources, and the Mac X.25 and MacX products.
Ethernet had become almost universal by 1990, and it was time to build Ethernet into Macs direct from the factory. However, the physical wiring used by these networks was not yet completely standardized. Apple solved this problem using a single port on the back of the computer into which the user could plug an adaptor for any given cabling system. This FriendlyNet system was based on the industry-standard Attachment Unit Interface or AUI, but deliberately chose a non-standard connector that was smaller and easier to use, which they called "Apple AUI", or AAUI. FriendlyNet was first introduced on the Quadra 700 and Quadra 900 computers, and used across much of the Mac line for some time. As with LocalTalk, a number of third-party FriendlyNet adaptors quickly appeared.
As 10BASE-T became the de facto cabling system for Ethernet, second-generation Power Macintosh machines added a 10BASE-T port in addition to AAUI. The PowerBook 3400c and lower-end Power Macs also added 10BASE-T. The Power Macintosh 7300/8600/9600 were the final Macs to include AAUI, and 10BASE-T became universal starting with the Power Macintosh G3 and PowerBook G3.
The capital-I Internet
From the beginning of AppleTalk, users wanted to connect the Macintosh to TCP/IP network environments. In 1984, Bill Croft at Stanford University pioneered the development of IP packets encapsulated in DDP as part of the SEAGATE (Stanford Ethernet–AppleTalk Gateway) project. SEAGATE was commercialized by Kinetics in their LocalTalk-to-Ethernet bridge as an additional routing option. A few years later, MacIP was separated from the SEAGATE code and became the de facto method for IP packets to be routed over LocalTalk networks. By 1986, Columbia University released the first version of the Columbia AppleTalk Package (CAP) that allowed higher integration of Unix, TCP/IP, and AppleTalk environments. In 1988, Apple released MacTCP, a system that allowed the Mac to support TCP/IP on machines with suitable Ethernet hardware. However, this left many universities with the problem of supporting IP on their many LocalTalk-equipped Macs. It was soon common to include MacIP support in LocalTalk-to-Ethernet bridges. MacTCP would not become a standard part of the Classic Mac OS until 1994, by which time it also supported SNMP and PPP.
For some time in the early 1990s, the Mac was a primary client on the rapidly expanding Internet. Among the better-known programs in wide use were Fetch, Eudora, eXodus, NewsWatcher, and the NCSA packages, especially NCSA Mosaic and its offspring, Netscape Navigator. Additionally, a number of server products appeared that allowed the Mac to host Internet content. Through this period, Macs had about 2 to 3 times as many clients connected to the Internet as any other platform, despite the relatively small overall microcomputer market share.
As the world quickly moved to IP for both LAN and WAN uses, Apple was faced with maintaining two increasingly outdated code bases on an ever-wider group of machines as well as the introduction of the PowerPC-based machines. This led to the Open Transport efforts, which re-implemented both MacTCP and AppleTalk on an entirely new code base adapted from the Unix standard STREAMS. Early versions had problems and did not become stable for some time. By that point, Apple was deep in their ultimately doomed Copland efforts.
Legacy and abandonment
With the purchase of NeXT and subsequent development of Mac OS X, AppleTalk was strictly a legacy system. Support was added to Mac OS X in order to provide support for a large number of existing AppleTalk devices, notably laser printers and file shares, but alternate connection solutions common in this era, notably USB for printers, limited their demand. As Apple abandoned many of these product categories, and all new systems were based on IP, AppleTalk became less and less common. AppleTalk support was finally removed from the macOS line in Mac OS X v10.6 in 2009.
However, the loss of AppleTalk did not reduce the desire for networking solutions that combined its ease of use with IP routing. Apple has led the development of many such efforts, from the introduction of the AirPort router to the development of the zero-configuration networking system and their implementation of it, Rendezvous, later renamed Bonjour.
As of 2020, AppleTalk support has been completely removed from legacy support with macOS 11 Big Sur.
Design
The AppleTalk design rigorously followed the OSI model of protocol layering. Unlike most of the early LAN systems, AppleTalk was not built using the archetypal Xerox XNS system. The intended target was not Ethernet, and it did not have 48-bit addresses to route. Nevertheless, many portions of the AppleTalk system have direct analogs in XNS.
One key differentiation for AppleTalk was that it contained two protocols aimed at making the system completely self-configuring. The AppleTalk Address Resolution Protocol (AARP) allowed AppleTalk hosts to automatically generate their own network addresses, and the Name Binding Protocol (NBP) was a dynamic system for mapping network addresses to user-readable names. Systems similar to AARP existed elsewhere, in Banyan VINES for instance. Beginning about 2002, Rendezvous (the combination of DNS-based service discovery, Multicast DNS, and link-local addressing) provided capabilities and usability using IP that were similar to those of AppleTalk.
Both AARP and NBP had defined ways to allow "controller" devices to override the default mechanisms. The concept was to allow routers to provide the information or "hardwire" the system to known addresses and names. On larger networks where AARP could cause problems as new nodes searched for free addresses, the addition of a router could reduce "chattiness." Together AARP and NBP made AppleTalk an easy-to-use networking system. New machines were added to the network by plugging them in and optionally giving them a name. The NBP lists were examined and displayed by a program known as the Chooser which would display a list of machines on the local network, divided into classes such as file-servers and printers.
Addressing
An AppleTalk address was a four-byte quantity. This consisted of a two-byte network number, a one-byte node number, and a one-byte socket number. Of these, only the network number required any configuration, being obtained from a router. Each node dynamically chose its own node number, according to a protocol (originally the LocalTalk Link Access Protocol LLAP and later, for Ethernet/EtherTalk, the AppleTalk Address Resolution Protocol, AARP) which handled contention between different nodes accidentally choosing the same number. For socket numbers, a few well-known numbers were reserved for special purposes specific to the AppleTalk protocol itself. Apart from these, all application-level protocols were expected to use dynamically assigned socket numbers at both the client and server end.
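That layout can be illustrated with a short sketch (illustrative only; the big-endian byte order and the helper names here are assumptions for the example, not taken from an AppleTalk specification):

import struct

def pack_appletalk_address(network, node, socket_num):
    # Two-byte network number, one-byte node number, one-byte socket number.
    # Big-endian ordering is assumed purely for illustration.
    return struct.pack(">HBB", network, node, socket_num)

def unpack_appletalk_address(data):
    network, node, socket_num = struct.unpack(">HBB", data)
    return network, node, socket_num

# Example: network 5, node 27, socket 253 round-trips through four bytes.
packed = pack_appletalk_address(5, 27, 253)
assert len(packed) == 4
assert unpack_appletalk_address(packed) == (5, 27, 253)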
Because of this dynamism, users could not be expected to access services by specifying their address. Instead, all services had names which, being chosen by humans, could be expected to be meaningful to users, and also could be sufficiently long to minimize the chance of conflicts.
As NBP names translated to an address, which included a socket number as well as a node number, a name in AppleTalk mapped directly to a service being provided by a machine, which was entirely separate from the name of the machine itself. Thus, services could be moved to a different machine and, so long as they kept the same service name, there was no need for users to do anything different in order to continue accessing the service. And the same machine could host any number of instances of services of the same type, without any network connection conflicts.
Contrast this with A records in the DNS, in which a name translates to a machine's address, not including the port number that might be providing a service. Thus, if people are accustomed to using a particular machine name to access a particular service, their access will break when the service is moved to a different machine. This can be mitigated somewhat by insistence on using CNAME records indicating service rather than actual machine names to refer to the service, but there is no way of guaranteeing that users will follow such a convention. Some newer protocols, such as Kerberos and Active Directory use DNS SRV records to identify services by name, which is much closer to the AppleTalk model.
Protocols
AppleTalk Address Resolution Protocol
The AppleTalk Address Resolution Protocol (AARP) resolves AppleTalk addresses to link layer addresses. It is functionally equivalent to ARP and obtains address resolution by a method very similar to ARP.
AARP is a fairly simple system. When powered on, an AppleTalk machine broadcasts an AARP probe packet asking for a network address, intending to hear back from controllers such as routers. If no address is provided, one is picked at random from the "base subnet", 0. It then broadcasts another packet saying "I am selecting this address", and then waits to see if anyone else on the network complains. If another machine has that address, the newly connecting machine will pick another address, and keep trying until it finds a free one. On a network with many machines it may take several tries before a free address is found, so for performance purposes the successful address is recorded in NVRAM and used as the default address in the future. This means that in most real-world setups where machines are added a few at a time, only one or two tries are needed before the address effectively becomes constant.
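The selection loop can be sketched roughly as follows (illustrative Python; probe_is_contested, load_saved_address, and save_address are hypothetical helpers standing in for the broadcast probe and the NVRAM read/write described above, and the range of candidate numbers is assumed):

import random

def choose_address(probe_is_contested, load_saved_address, save_address):
    # Start from the address remembered in NVRAM, if any; otherwise pick
    # random candidates until no other node objects to the tentative claim.
    candidate = load_saved_address()
    while candidate is None or probe_is_contested(candidate):
        candidate = random.randint(1, 254)
    save_address(candidate)  # becomes the default at the next power-up
    return candidate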
AppleTalk Data Stream Protocol
The AppleTalk Data Stream Protocol (ADSP) was a comparatively late addition to the AppleTalk protocol suite, done when it became clear that a TCP-style reliable connection-oriented transport was needed. Significant differences from TCP were that:
A connection attempt could be rejected.
There were no "half-open" connections; once one end initiated a tear-down of the connection, the whole connection would be closed (i.e., ADSP is full-duplex, not dual simplex).
AppleTalk had an included attention message system which allowed short messages to be sent which would bypass the normal stream data flow. These were delivered reliably but out of order with respect to the stream. Any attention message would be delivered as soon as possible instead of waiting for the current stream byte sequence point to become current.
Apple Filing Protocol
The Apple Filing Protocol (AFP), formerly AppleTalk Filing Protocol, is the protocol for communicating with AppleShare file servers. Built on top of AppleTalk Session Protocol (for legacy AFP over DDP) or the Data Stream Interface (for AFP over TCP), it provides services for authenticating users (extensible to different authentication methods including two-way random-number exchange) and for performing operations specific to the Macintosh HFS filesystem. AFP is still in use in macOS, even though most other AppleTalk protocols have been deprecated.
AppleTalk Session Protocol
The AppleTalk Session Protocol (ASP) was an intermediate protocol, built on top of AppleTalk Transaction Protocol (ATP), which in turn was the foundation of AFP. It provided basic services for requesting responses to arbitrary commands and performing out-of-band status queries. It also allowed the server to send asynchronous attention messages to the client.
AppleTalk Transaction Protocol
The AppleTalk Transaction Protocol (ATP) was the original reliable transport-level protocol for AppleTalk, built on top of DDP. At the time it was being developed, a full, reliable connection-oriented protocol like TCP was considered to be too expensive to implement for most of the intended uses of AppleTalk. Thus, ATP was a simple request/response exchange, with no need to set up or tear down connections.
An ATP request packet could be answered by up to eight response packets. The requestor then sent an acknowledgement packet containing a bit mask indicating which of the response packets it received, so the responder could retransmit the remainder.
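That bookkeeping amounts to a small bitmap calculation, sketched here under the assumption (made only for illustration) that bit n of the mask stands for the n-th of the up-to-eight response packets:

def ack_bitmask(received_sequence_numbers):
    # One bit per possible response packet (sequence numbers 0 to 7).
    mask = 0
    for seq in received_sequence_numbers:
        mask |= 1 << seq
    return mask

def packets_to_retransmit(mask, expected_count):
    # Sequence numbers the requestor did not acknowledge.
    return [seq for seq in range(expected_count) if not (mask >> seq) & 1]

# Example: responses 0, 1 and 3 arrived out of an expected 4, so only 2 is resent.
assert packets_to_retransmit(ack_bitmask([0, 1, 3]), 4) == [2]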
ATP could operate in either "at-least-once" mode or "exactly-once" mode. Exactly-once mode was essential for operations which were not idempotent; in this mode, the responder kept a copy of the response buffers in memory until successful receipt of a release packet from the requestor, or until a timeout elapsed. This way, it could respond to duplicate requests with the same transaction ID by resending the same response data, without performing the actual operation again.
Datagram Delivery Protocol
The Datagram Delivery Protocol (DDP) was the lowest-level data-link-independent transport protocol. It provided a datagram service with no guarantees of delivery. All application-level protocols, including the infrastructure protocols NBP, RTMP and ZIP, were built on top of DDP. AppleTalk's DDP corresponds closely to the Network layer of the Open Systems Interconnection (OSI) communication model.
Name Binding Protocol
The Name Binding Protocol (NBP) was a dynamic, distributed system for managing AppleTalk names. When a service started up on a machine, it registered a name for itself as chosen by a human administrator. At this point, NBP provided a system for checking that no other machine had already registered the same name. Later, when a client wanted to access that service, it used NBP to query machines to find that service. NBP provided browsability ("what are the names of all the services available?") as well as the ability to find a service with a particular name. Names were human-readable, containing spaces and upper- and lower-case letters, and including support for searching.
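A toy, purely local stand-in for such a registry is sketched below (the class, the example names and addresses are invented for illustration; real NBP is a distributed protocol and is not modelled here):

class NameRegistrySketch:
    def __init__(self):
        self._names = {}  # service name -> (network, node, socket)

    def register(self, name, address):
        # Refuse a name that some other machine has already registered.
        if name in self._names:
            raise ValueError("name already in use: " + name)
        self._names[name] = address

    def lookup(self, name):
        return self._names.get(name)

    def browse(self, pattern=""):
        # "What are the names of all the services available?"
        return sorted(n for n in self._names if pattern.lower() in n.lower())

# Example: two printer queues registered on different nodes, then browsed.
registry = NameRegistrySketch()
registry.register("2nd Floor LaserWriter", (1, 12, 128))
registry.register("Accounting LaserWriter", (1, 57, 129))
assert registry.browse("laserwriter") == ["2nd Floor LaserWriter", "Accounting LaserWriter"]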
AppleTalk Echo Protocol
The AppleTalk Echo Protocol (AEP) was a transport layer protocol designed to test the reachability of network nodes. AEP generates packets to be sent to a target node; each is identified as an AEP packet in the Type field. The packet is first passed to the source DDP, which forwards it to the destination node. There the destination DDP examines the packet, identifies it as an AEP packet, copies it, and alters a field to create an AEP reply packet, which is then returned to the source node.
Printer Access Protocol
The Printer Access Protocol (PAP) was the standard way of communicating with PostScript printers. It was built on top of ATP. When a PAP connection was opened, each end sent the other an ATP request which basically meant "send me more data". The client's response to the server was to send a block of PostScript code, while the server could respond with any diagnostic messages that might be generated as a result, after which another "send-more-data" request was sent. This use of ATP provided automatic flow control; each end could only send data to the other end if there was an outstanding ATP request to respond to.
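The flow-control idea can be modelled very loosely as follows (a toy sketch; the class and method names are invented, and no real PAP or ATP framing is represented):

from collections import deque

class PapSenderSketch:
    # Data may only go out while the peer has an unanswered
    # "send-more-data" request outstanding.
    def __init__(self, transmit):
        self.transmit = transmit          # callable that actually sends one block
        self.outstanding_requests = 0     # peer requests not yet answered
        self.pending_blocks = deque()     # data waiting for permission to go out

    def on_send_more_data_request(self):
        self.outstanding_requests += 1
        self._flush()

    def queue_block(self, block):
        self.pending_blocks.append(block)
        self._flush()

    def _flush(self):
        while self.outstanding_requests and self.pending_blocks:
            self.outstanding_requests -= 1
            self.transmit(self.pending_blocks.popleft())

# Example: nothing is sent until the peer asks for more data.
sent = []
sender = PapSenderSketch(sent.append)
sender.queue_block(b"%!PS-Adobe-3.0 ...")
assert sent == []
sender.on_send_more_data_request()
assert len(sent) == 1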
PAP also provided for out-of-band status queries, handled by separate ATP transactions. Even while it was busy servicing a print job from one client, a PAP server could continue to respond to status requests from any number of other clients. This allowed other Macintoshes on the LAN that were waiting to print to display status messages indicating that the printer was busy, and what the job was that it was busy with.
Routing Table Maintenance Protocol
The Routing Table Maintenance Protocol (RTMP) was the protocol by which routers kept each other informed about the topology of the network. This was the only part of AppleTalk that required periodic unsolicited broadcasts: every 10 seconds, each router had to send out a list of all the network numbers it knew about and how far away it thought they were.
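The periodic advertisement can be caricatured in a few lines (a sketch only; the packet layout and the broadcast callable are invented for illustration):

import time

def advertise_routes_forever(routing_table, broadcast, interval=10.0):
    # routing_table maps network number -> hop count ("how far away").
    # broadcast is a hypothetical callable that sends one advertisement
    # to all neighbouring routers.
    while True:
        broadcast(sorted(routing_table.items()))
        time.sleep(interval)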
Zone Information Protocol
The Zone Information Protocol (ZIP) was the protocol by which AppleTalk network numbers were associated with zone names. A zone was a subdivision of the network that made sense to humans (for example, "Accounting Department"); but while a network number had to be assigned to a topologically contiguous section of the network, a zone could include several different discontiguous portions of the network.
Physical implementation
The initial default hardware implementation for AppleTalk was a high-speed serial protocol known as LocalTalk that used the Macintosh's built-in RS-422 ports at 230.4 kbit/s. LocalTalk used a splitter box in the RS-422 port to provide an upstream and downstream cable from a single port. The topology was a bus: cables were daisy-chained from each connected machine to the next, up to the maximum of 32 permitted on any LocalTalk segment. The system was slow by today's standards, but at the time the additional cost and complexity of networking on PC machines was such that it was common that Macs were the only networked personal computers in an office. Other larger computers, such as UNIX or VAX workstations, would commonly be networked via Ethernet.
Other physical implementations were also available. A very popular replacement for LocalTalk was PhoneNET, a third-party solution from Farallon Computing, Inc. (renamed Netopia, acquired by Motorola in 2007) that also used the RS-422 port and was indistinguishable from LocalTalk as far as Apple's LocalTalk port drivers were concerned, but ran over very inexpensive standard phone cabling with four-wire, six-position modular connectors, the same cables used to connect landline telephones. Since it used the second pair of wires, network devices could even be connected through existing telephone jacks if a second line was not present. Foreshadowing today's network hubs and switches, Farallon provided solutions for PhoneNet to be used in star as well as bus configurations, with both passive star connections (with the phone wires simply bridged to each other at a central point), and active star with "PhoneNet Star Controller" hub hardware. In a star configuration, any wiring issue only affected one device, and problems were easy to pinpoint. PhoneNet's low cost, flexibility, and easy troubleshooting resulted in it being the dominant choice for Mac networks into the early 1990s.
AppleTalk protocols also came to run over Ethernet (first coaxial and then twisted pair) and Token Ring physical layers, labeled by Apple as EtherTalk and TokenTalk, respectively. EtherTalk gradually became the dominant implementation method for AppleTalk as Ethernet became generally popular in the PC industry throughout the 1990s. Besides AppleTalk and TCP/IP, any Ethernet network could also simultaneously carry other protocols such as DECnet and IPX.
Networking model
Versions
Cross-platform solutions
When AppleTalk was first introduced, the dominant office computing platform was the PC compatible running MS-DOS. Apple introduced the AppleTalk PC Card in early 1987, allowing PCs to join AppleTalk networks and print to LaserWriter printers. A year later AppleShare PC was released, allowing PCs to access AppleShare file servers.
The "TOPS Teleconnector" MS-DOS networking system over AppleTalk system enabled MS-DOS PCs to communicate over AppleTalk network hardware; it comprised an AppleTalk interface card for the PC and a suite of networking software allowing such functions as file, drive and printer sharing. As well as allowing the construction of a PC-only AppleTalk network, it allowed communication between PCs and Macs with TOPS software installed. (Macs without TOPS installed could use the same network but only to communicate with other Apple machines.) The Mac TOPS software did not match the quality of Apple's own either in ease of use or in robustness and freedom from crashes, but the DOS software was relatively simple to use in DOS terms, and was robust.
The BSD and Linux operating systems support AppleTalk through an open source project called Netatalk, which implements the complete protocol suite and allows them to both act as native file or print servers for Macintosh computers, and print to LocalTalk printers over the network.
The Windows Server operating systems supported AppleTalk starting with Windows NT and ending after Windows Server 2003. Miramar included AppleTalk in its PC MacLAN product, which was discontinued by CA in 2007. GroupLogic continues to bundle its AppleTalk protocol with its ExtremeZ-IP server software for Macintosh-Windows integration, which supports Windows Server 2008 and Windows Vista as well as prior versions. HELIOS Software GmbH offers a proprietary implementation of the AppleTalk protocol stack, as part of their HELIOS UB2 server. This is essentially a File and Print Server suite that runs on a whole range of different platforms.
In addition, Columbia University released the Columbia AppleTalk Package (CAP) which implemented the protocol suite for various Unix flavours including Ultrix, SunOS, BSD and IRIX. This package is no longer actively maintained.
See also
Netatalk is a free, open-source implementation of the AppleTalk suite of protocols.
Network File System
Remote File Sharing
Samba
Server Message Block
Notes
References
Citations
Bibliography
External links
Pushing AppleTalk Across the Internet
Apple Inc. software
Network operating systems
Network protocols | AppleTalk | [
"Engineering"
] | 7,250 | [
"Computer networks engineering",
"Network operating systems"
] |
2,120 | https://en.wikipedia.org/wiki/Aliphatic%20compound | In organic chemistry, hydrocarbons (compounds composed solely of carbon and hydrogen) are divided into two classes: aromatic compounds and aliphatic compounds (G. aleiphar, fat, oil). Aliphatic compounds can be saturated (in which all the C-C bonds are single, requiring the structure to be completed, or 'saturated', by hydrogen) like hexane, or unsaturated, like hexene and hexyne. Open-chain compounds, whether straight or branched, and which contain no rings of any type, are always aliphatic. Cyclic compounds can be aliphatic if they are not aromatic.
Structure
Aliphatic compounds can be saturated, joined by single bonds (alkanes), or unsaturated, with double bonds (alkenes) or triple bonds (alkynes). If other elements (heteroatoms) are bound to the carbon chain, the most common being oxygen, nitrogen, sulfur, and chlorine, it is no longer a hydrocarbon, and therefore no longer an aliphatic compound. However, such compounds may still be referred to as aliphatic if the hydrocarbon portion of the molecule is aliphatic, e.g. aliphatic amines, to differentiate them from aromatic amines.
The least complex aliphatic compound is methane (CH4).
Properties
Most aliphatic compounds are flammable, allowing the use of hydrocarbons as fuel, such as methane in natural gas for stoves or heating; butane in torches and lighters; various aliphatic (as well as aromatic) hydrocarbons in liquid transportation fuels like petrol/gasoline, diesel, and jet fuel; and other uses such as ethyne (acetylene) in welding.
Examples of aliphatic compounds
The most important aliphatic compounds are:
n-, iso- and cyclo-alkanes (saturated hydrocarbons)
n-, iso- and cyclo-alkenes and -alkynes (unsaturated hydrocarbons).
Important examples of low-molecular aliphatic compounds can be found in the list below (sorted by the number of carbon-atoms):
References
Organic compounds | Aliphatic compound | [
"Chemistry"
] | 462 | [
"Organic compounds",
"Organic chemistry stubs"
] |
2,122 | https://en.wikipedia.org/wiki/Astrology | Astrology is a range of divinatory practices, recognized as pseudoscientific since the 18th century, that propose that information about human affairs and terrestrial events may be discerned by studying the apparent positions of celestial objects. Different cultures have employed forms of astrology since at least the 2nd millennium BCE, these practices having originated in calendrical systems used to predict seasonal shifts and to interpret celestial cycles as signs of divine communications. Most, if not all, cultures have attached importance to what they observed in the sky, and some—such as the Hindus, Chinese, and the Maya—developed elaborate systems for predicting terrestrial events from celestial observations. Western astrology, one of the oldest astrological systems still in use, can trace its roots to 19th–17th century BCE Mesopotamia, from where it spread to Ancient Greece, Rome, the Islamic world, and eventually Central and Western Europe. Contemporary Western astrology is often associated with systems of horoscopes that purport to explain aspects of a person's personality and predict significant events in their lives based on the positions of celestial objects; the majority of professional astrologers rely on such systems.
Throughout its history, astrology has had its detractors, competitors and skeptics who opposed it for moral, religious, political, and empirical reasons. Nonetheless, prior to the Enlightenment, astrology was generally considered a scholarly tradition and was common in learned circles, often in close relation with astronomy, meteorology, medicine, and alchemy. It was present in political circles and is mentioned in various works of literature, from Dante Alighieri and Geoffrey Chaucer to William Shakespeare, Lope de Vega, and Pedro Calderón de la Barca. During the Enlightenment, however, astrology lost its status as an area of legitimate scholarly pursuit. Following the end of the 19th century and the wide-scale adoption of the scientific method, researchers have successfully challenged astrology on both theoretical and experimental grounds, and have shown it to have no scientific validity or explanatory power. Astrology thus lost its academic and theoretical standing in the western world, and common belief in it largely declined, until a continuing resurgence starting in the 1960s.
Etymology
The word astrology comes from the early Latin word astrologia, which derives from the Greek ἀστρολογία—from ἄστρον astron ("star") and -λογία -logia ("study of"—"account of the stars"). The word entered the English language via Latin and medieval French, and its use overlapped considerably with that of astronomy (derived from the Latin astronomia). By the 17th century, astronomy became established as the scientific term, with astrology referring to divinations and schemes for predicting human affairs.
History
Many cultures have attached importance to astronomical events, and the Indians, Chinese, and Maya developed elaborate systems for predicting terrestrial events from celestial observations. A form of astrology was practised in the Old Babylonian period of Mesopotamia. Vedāṅga Jyotiṣa is one of the earliest known Hindu texts on astronomy and astrology (Jyotisha). The text is dated between 1400 BCE and the final centuries BCE by various scholars according to astronomical and linguistic evidence. Chinese astrology was elaborated in the Zhou dynasty (1046–256 BCE). Hellenistic astrology after 332 BCE mixed Babylonian astrology with Egyptian Decanic astrology in Alexandria, creating horoscopic astrology. Alexander the Great's conquest of Asia allowed astrology to spread to Ancient Greece and Rome. In Rome, astrology was associated with "Chaldean wisdom". After the conquest of Alexandria in the 7th century, astrology was taken up by Islamic scholars, and Hellenistic texts were translated into Arabic and Persian. In the 12th century, Arabic texts were imported to Europe and translated into Latin. Major astronomers including Tycho Brahe, Johannes Kepler and Galileo practised as court astrologers. Astrological references appear in literature in the works of poets such as Dante Alighieri and Geoffrey Chaucer, and of playwrights such as Christopher Marlowe and William Shakespeare.
Throughout most of its history, astrology was considered a scholarly tradition. It was accepted in political and academic contexts, and was connected with other studies, such as astronomy, alchemy, meteorology, and medicine. At the end of the 17th century, new scientific concepts in astronomy and physics (such as heliocentrism and Newtonian mechanics) called astrology into question. Astrology thus lost its academic and theoretical standing, and common belief in astrology has largely declined.
Ancient world
Astrology, in its broadest sense, is the search for meaning in the sky. Early evidence for humans making conscious attempts to measure, record, and predict seasonal changes by reference to astronomical cycles, appears as markings on bones and cave walls, which show that lunar cycles were being noted as early as 25,000 years ago. This was a first step towards recording the Moon's influence upon tides and rivers, and towards organising a communal calendar. Farmers addressed agricultural needs with increasing knowledge of the constellations that appear in the different seasons—and used the rising of particular star-groups to herald annual floods or seasonal activities. By the 3rd millennium BCE, civilisations had sophisticated awareness of celestial cycles, and may have oriented temples in alignment with heliacal risings of the stars.
Scattered evidence suggests that the oldest known astrological references are copies of texts made in the ancient world. The Venus tablet of Ammisaduqa is thought to have been compiled in Babylon around 1700 BCE. A scroll documenting an early use of electional astrology is doubtfully ascribed to the reign of the Sumerian ruler Gudea of Lagash ( – 2124 BCE). This describes how the gods revealed to him in a dream the constellations that would be most favourable for the planned construction of a temple. However, there is controversy about whether these were genuinely recorded at the time or merely ascribed to ancient rulers by posterity. The oldest undisputed evidence of the use of astrology as an integrated system of knowledge is therefore attributed to the records of the first dynasty of Babylon (1950–1651 BCE). This astrology had some parallels with Hellenistic Greek (western) astrology, including the zodiac, a norming point near 9 degrees in Aries, the trine aspect, planetary exaltations, and the dodekatemoria (the twelve divisions of 30 degrees each). The Babylonians viewed celestial events as possible signs rather than as causes of physical events.
The system of Chinese astrology was elaborated during the Zhou dynasty (1046–256 BCE) and flourished during the Han dynasty (2nd century BCE to 2nd century CE), during which all the familiar elements of traditional Chinese culture – the Yin-Yang philosophy, theory of the five elements, Heaven and Earth, Confucian morality – were brought together to formalise the philosophical principles of Chinese medicine and divination, astrology, and alchemy.
The ancient Arabs who inhabited the Arabian Peninsula before the advent of Islam professed a widespread belief in fatalism (ḳadar) alongside a fearful consideration for the sky and the stars, which they held to be ultimately responsible for every phenomenon that occurs on Earth and for the destiny of humankind. Accordingly, they shaped their entire lives around their interpretations of astral configurations and phenomena.
Ancient objections
The Hellenistic schools of philosophical skepticism criticized the rationality of astrology. Criticism of astrology by academic skeptics such as Cicero, Carneades, and Favorinus; and Pyrrhonists such as Sextus Empiricus has been preserved.
Carneades argued that belief in fate denies free will and morality; that people born at different times can all die in the same accident or battle; and that contrary to uniform influences from the stars, tribes and cultures are all different.
Cicero, in De Divinatione, leveled a critique of astrology that some modern philosophers consider to be the first working definition of pseudoscience and the answer to the demarcation problem. The philosopher of science Massimo Pigliucci, building on the work of the historian of science Damien Fernandez-Beanato, argues that Cicero outlined a "convincing distinction between astrology and astronomy that remains valid in the twenty-first century." Cicero stated the twins objection (that with close birth times, personal outcomes can be very different), later developed by Augustine. He argued that since the other planets are much more distant from the Earth than the Moon, they could have only very tiny influence compared to the Moon's. He also argued that if astrology explains everything about a person's fate, then it wrongly ignores the visible effect of inherited ability and parenting, changes in health worked by medicine, or the effects of the weather on people.
Favorinus argued that it was absurd to imagine that stars and planets would affect human bodies in the same way as they affect the tides, and equally absurd that small motions in the heavens cause large changes in people's fates.
Sextus Empiricus argued that it was absurd to link human attributes with myths about the signs of the zodiac, and wrote an entire book, Against the Astrologers (Πρὸς ἀστρολόγους, Pros astrologous), compiling arguments against astrology. Against the Astrologers was the fifth section of a larger work arguing against philosophical and scientific inquiry in general, Against the Professors (Πρὸς μαθηματικούς, Pros mathematikous).
Plotinus, a neoplatonist, argued that since the fixed stars are much more distant than the planets, it is laughable to imagine the planets' effect on human affairs should depend on their position with respect to the zodiac. He also argues that the interpretation of the Moon's conjunction with a planet as good when the moon is full, but bad when the moon is waning, is clearly wrong, as from the Moon's point of view, half of its surface is always in sunlight; and from the planet's point of view, waning should be better, as then the planet sees some light from the Moon, but when the Moon is full to us, it is dark, and therefore bad, on the side facing the planet in question.
Hellenistic Egypt
In 525 BCE, Egypt was conquered by the Persians. The 1st century BCE Egyptian Dendera Zodiac shares two signs – the Balance and the Scorpion – with Mesopotamian astrology.
With the occupation by Alexander the Great in 332 BCE, Egypt became Hellenistic. The city of Alexandria was founded by Alexander after the conquest, becoming the place where Babylonian astrology was mixed with Egyptian Decanic astrology to create Horoscopic astrology. This contained the Babylonian zodiac with its system of planetary exaltations, the triplicities of the signs and the importance of eclipses. It used the Egyptian concept of dividing the zodiac into thirty-six decans of ten degrees each, with an emphasis on the rising decan, and the Greek system of planetary Gods, sign rulership and four elements. 2nd century BCE texts predict positions of planets in zodiac signs at the time of the rising of certain decans, particularly Sothis. The astrologer and astronomer Ptolemy lived in Alexandria. Ptolemy's work the Tetrabiblos formed the basis of Western astrology, and, "...enjoyed almost the authority of a Bible among the astrological writers of a thousand years or more."
Greece and Rome
The conquest of Asia by Alexander the Great exposed the Greeks to ideas from Syria, Babylon, Persia and central Asia. Around 280 BCE, Berossus, a priest of Bel from Babylon, moved to the Greek island of Kos, teaching astrology and Babylonian culture. By the 1st century BCE, there were two varieties of astrology, one using horoscopes to describe the past, present and future; the other, theurgic, emphasising the soul's ascent to the stars. Greek influence played a crucial role in the transmission of astrological theory to Rome.
The first definite reference to astrology in Rome comes from the orator Cato, who in 160 BCE warned farm overseers against consulting with Chaldeans, who were described as Babylonian 'star-gazers'. Among both Greeks and Romans, Babylonia (also known as Chaldea) became so identified with astrology that 'Chaldean wisdom' became synonymous with divination using planets and stars. The 2nd-century Roman poet and satirist Juvenal complains about the pervasive influence of Chaldeans, saying, "Still more trusted are the Chaldaeans; every word uttered by the astrologer they will believe has come from Hammon's fountain."
One of the first astrologers to bring Hermetic astrology to Rome was Thrasyllus, astrologer to the emperor Tiberius, the first emperor to have had a court astrologer, though his predecessor Augustus had used astrology to help legitimise his Imperial rights.
Medieval world
Hindu
The main texts upon which classical Indian astrology is based are early medieval compilations, notably the Bṛhat Parāśara Horāśāstra and the Sārāvalī.
The Horāshastra is a composite work of 71 chapters, of which the first part (chapters 1–51) dates to the 7th to early 8th centuries and the second part (chapters 52–71) to the later 8th century. The Sārāvalī likewise dates to around 800 CE. English translations of these texts were published by N.N. Krishna Rau and V.B. Choudhari in 1963 and 1961, respectively.
Islamic
Astrology was taken up by Islamic scholars following the collapse of Alexandria to the Arabs in the 7th century, and the founding of the Abbasid empire in the 8th. The second Abbasid caliph, Al Mansur (754–775) founded the city of Baghdad to act as a centre of learning, and included in its design a library-translation centre known as Bayt al-Hikma 'House of Wisdom', which continued to receive development from his heirs and was to provide a major impetus for Arabic-Persian translations of Hellenistic astrological texts. The early translators included Mashallah, who helped to elect the time for the foundation of Baghdad, and Sahl ibn Bishr, (a.k.a. Zael), whose texts were directly influential upon later European astrologers such as Guido Bonatti in the 13th century, and William Lilly in the 17th century. Knowledge of Arabic texts started to become imported into Europe during the Latin translations of the 12th century.
Europe
In the seventh century, Isidore of Seville argued in his Etymologiae that astronomy described the movements of the heavens, while astrology had two parts: one was scientific, describing the movements of the Sun, the Moon and the stars, while the other, making predictions, was theologically erroneous.
The first astrological book published in Europe was the Liber Planetis et Mundi Climatibus ("Book of the Planets and Regions of the World"), which appeared between 1010 and 1027 AD, and may have been authored by Gerbert of Aurillac. Ptolemy's second century AD Tetrabiblos was translated into Latin by Plato of Tivoli in 1138. The Dominican theologian Thomas Aquinas followed Aristotle in proposing that the stars ruled the imperfect 'sublunary' body, while attempting to reconcile astrology with Christianity by stating that God ruled the soul. The thirteenth century mathematician Campanus of Novara is said to have devised a system of astrological houses that divides the prime vertical into 'houses' of equal 30° arcs, though the system was used earlier in the East. The thirteenth century astronomer Guido Bonatti wrote a textbook, the Liber Astronomicus, a copy of which King Henry VII of England owned at the end of the fifteenth century.
In Paradiso, the final part of the Divine Comedy, the Italian poet Dante Alighieri referred "in countless details" to the astrological planets, though he adapted traditional astrology to suit his Christian viewpoint, for example using astrological thinking in his prophecies of the reform of Christendom.
John Gower in the fourteenth century defined astrology as essentially limited to the making of predictions. The influence of the stars was in turn divided into natural astrology, with for example effects on tides and the growth of plants, and judicial astrology, with supposedly predictable effects on people. The fourteenth-century sceptic Nicole Oresme however included astronomy as a part of astrology in his Livre de divinacions. Oresme argued that current approaches to prediction of events such as plagues, wars, and weather were inappropriate, but that such prediction was a valid field of inquiry. However, he attacked the use of astrology to choose the timing of actions (so-called interrogation and election) as wholly false, and rejected the determination of human action by the stars on grounds of free will. The friar Laurens Pignon (c. 1368–1449) similarly rejected all forms of divination and determinism, including by the stars, in his 1411 Contre les Devineurs. This was in opposition to the tradition carried by the Arab astronomer Albumasar (787–886) whose Introductorium in Astronomiam and De Magnis Coniunctionibus argued the view that both individual actions and larger scale history are determined by the stars.
In the late 15th century, Giovanni Pico della Mirandola forcefully attacked astrology in Disputationes contra Astrologos, arguing that the heavens neither caused, nor heralded earthly events. His contemporary, Pietro Pomponazzi, a "rationalistic and critical thinker", was much more sanguine about astrology and critical of Pico's attack.
Renaissance and Early Modern
Renaissance scholars commonly practised astrology. Gerolamo Cardano cast the horoscope of king Edward VI of England, while John Dee was the personal astrologer to queen Elizabeth I of England. Catherine de Medici paid Michael Nostradamus in 1566 to verify the prediction of the death of her husband, king Henry II of France made by her astrologer Lucus Gauricus. Major astronomers who practised as court astrologers included Tycho Brahe in the royal court of Denmark, Johannes Kepler to the Habsburgs, Galileo Galilei to the Medici, and Giordano Bruno who was burnt at the stake for heresy in Rome in 1600. The distinction between astrology and astronomy was not entirely clear. Advances in astronomy were often motivated by the desire to improve the accuracy of astrology. Kepler, for example, was driven by a belief in harmonies between Earthly and celestial affairs, yet he disparaged the activities of most astrologers as "evil-smelling dung".
Ephemerides with complex astrological calculations, and almanacs interpreting celestial events for use in medicine and for choosing times to plant crops, were popular in Elizabethan England. In 1597, the English mathematician and physician Thomas Hood made a set of paper instruments that used revolving overlays to help students work out relationships between fixed stars or constellations, the midheaven, and the twelve astrological houses. Hood's instruments also illustrated, for pedagogical purposes, the supposed relationships between the signs of the zodiac, the planets, and the parts of the human body adherents believed were governed by the planets and signs. While Hood's presentation was innovative, his astrological information was largely standard and was taken from Gerard Mercator's astrological disc made in 1551, or a source used by Mercator. Despite its popularity, Renaissance astrology had what historian Gabor Almasi calls "elite debate", exemplified by the polemical letters of Swiss physician Thomas Erastus who fought against astrology, calling it "vanity" and "superstition." Then around the time of the new star of 1572 and the comet of 1577 there began what Almasi calls an "extended epistemological reform" which began the process of excluding religion, astrology and anthropocentrism from scientific debate. By 1679, the yearly publication La Connoissance des temps eschewed astrology as a legitimate topic.
Enlightenment period and onwards
During the Enlightenment, intellectual sympathy for astrology fell away, leaving only a popular following supported by cheap almanacs. One English almanac compiler, Richard Saunders, followed the spirit of the age by printing a derisive Discourse on the Invalidity of Astrology, while in France Pierre Bayle's Dictionnaire of 1697 stated that the subject was puerile. The Anglo-Irish satirist Jonathan Swift ridiculed the Whig political astrologer John Partridge.
In the second half of the 17th century, the Society of Astrologers (1647–1684), a trade, educational, and social organization, sought to unite London's often fractious astrologers in the task of revitalizing astrology. Following the template of the popular "Feasts of Mathematicians" they endeavored to defend their art in the face of growing religious criticism. The Society hosted banquets, exchanged "instruments and manuscripts", proposed research projects, and funded the publication of sermons that depicted astrology as a legitimate biblical pursuit for Christians. They commissioned sermons that argued Astrology was divine, Hebraic, and scripturally supported by Bible passages about the Magi and the sons of Seth. According to historian Michelle Pfeffer, "The society's public relations campaign ultimately failed." Modern historians have mostly neglected the Society of Astrologers in favor of the still extant Royal Society (1660), even though both organizations initially had some of the same members.
Astrology saw a popular revival starting in the 19th century, as part of a general revival of spiritualism and—later, New Age philosophy, and through the influence of mass media such as newspaper horoscopes. Early in the 20th century the psychiatrist Carl Jung developed some concepts concerning astrology, which led to the development of psychological astrology.
Principles and practice
Advocates have defined astrology as a symbolic language, an art form, a science, and a method of divination. Though most cultural astrology systems share common roots in ancient philosophies that influenced each other, many use methods that differ from those in the West. These include Hindu astrology (also known as "Indian astrology" and in modern times referred to as "Vedic astrology") and Chinese astrology, both of which have influenced the world's cultural history.
Western
Western astrology is a form of divination based on the construction of a horoscope for an exact moment, such as a person's birth. It uses the tropical zodiac, which is aligned to the equinoctial points.
Western astrology is founded on the movements and relative positions of celestial bodies such as the Sun, Moon and planets, which are analysed by their movement through signs of the zodiac (twelve spatial divisions of the ecliptic) and by their aspects (based on geometric angles) relative to one another. They are also considered by their placement in houses (twelve spatial divisions of the sky). Astrology's modern representation in western popular media is usually reduced to sun sign astrology, which considers only the zodiac sign of the Sun at an individual's date of birth, and represents only 1/12 of the total chart.
The horoscope visually expresses the set of relationships for the time and place of the chosen event. These relationships are between the seven 'planets', signifying tendencies such as war and love; the twelve signs of the zodiac; and the twelve houses. Each planet is in a particular sign and a particular house at the chosen time, when observed from the chosen place, creating two kinds of relationship. A third kind is the aspect of each planet to every other planet, where for example two planets 120° apart (in 'trine') are in a harmonious relationship, but two planets 90° apart ('square') are in a conflicted relationship. Together these relationships and their interpretations are said to form "...the language of the heavens speaking to learned men."
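The geometry described above can be made concrete with a small sketch. The following code is illustrative only: it maps an ecliptic longitude to one of the twelve 30° signs and classifies the separation between two bodies into the major aspects named above; the 8° orb (tolerance) is an assumed value chosen for the example, not a rule taken from any particular astrological tradition.

```python
# Illustrative sketch only: sign boundaries follow the tropical zodiac's twelve
# 30-degree divisions of the ecliptic; the 8-degree orb (tolerance) is an
# assumption chosen for illustration, not a standard taken from any source.
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

ASPECTS = {0: "conjunction", 60: "sextile", 90: "square",
           120: "trine", 180: "opposition"}

def sign_of(longitude_deg: float) -> str:
    """Map an ecliptic longitude (degrees from 0 Aries) to a zodiac sign."""
    return SIGNS[int(longitude_deg % 360) // 30]

def aspect_between(lon1: float, lon2: float, orb: float = 8.0):
    """Classify the angular separation of two bodies as a major aspect, if any."""
    separation = abs(lon1 - lon2) % 360
    if separation > 180:            # always measure the shorter arc
        separation = 360 - separation
    for angle, name in ASPECTS.items():
        if abs(separation - angle) <= orb:
            return name
    return None

# Example: two bodies roughly 120 degrees apart are "in trine".
print(sign_of(95.0))                # Cancer (95 deg falls in the 90-120 band)
print(aspect_between(10.0, 128.0))  # trine (118 deg separation, within the 8 deg orb)
```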
Along with tarot divination, astrology is one of the core studies of Western esotericism, and as such has influenced systems of magical belief not only among Western esotericists and Hermeticists, but also belief systems such as Wicca, which have borrowed from or been influenced by the Western esoteric tradition. Tanya Luhrmann has said that "all magicians know something about astrology," and refers to a table of correspondences in Starhawk's The Spiral Dance, organised by planet, as an example of the astrological lore studied by magicians.
Hindu
The earliest Vedic text on astronomy is the Vedanga Jyotisha; Vedic thought later came to include astrology as well.
Hindu natal astrology originated with Hellenistic astrology by the 3rd century BCE, though incorporating the Hindu lunar mansions. The names of the signs (e.g. Greek 'Krios' for Aries, Hindi 'Kriya'), the planets (e.g. Greek 'Helios' for Sun, astrological Hindi 'Heli'), and astrological terms (e.g. Greek 'apoklima' and 'sunaphe' for declination and planetary conjunction, Hindi 'apoklima' and 'sunapha' respectively) in Varaha Mihira's texts are considered conclusive evidence of a Greek origin for Hindu astrology. The Indian techniques may also have been augmented with some of the Babylonian techniques.
Chinese and East Asian
Chinese astrology has a close relation with Chinese philosophy (theory of the three harmonies: heaven, earth and man) and uses concepts such as yin and yang, the Five phases, the 10 Celestial stems, the 12 Earthly Branches, and shichen (時辰 a form of timekeeping used for religious purposes). The early use of Chinese astrology was mainly confined to political astrology, the observation of unusual phenomena, identification of portents and the selection of auspicious days for events and decisions.
The constellations of the Zodiac of western Asia and Europe were not used; instead the sky is divided into Three Enclosures (三垣 sān yuán), and Twenty-Eight Mansions (二十八宿 èrshíbā xiù) in twelve Ci (十二次). The Chinese zodiac of twelve animal signs is said to represent twelve different types of personality. It is based on cycles of years, lunar months, and two-hour periods of the day (the shichen). The zodiac traditionally begins with the sign of the Rat, and the cycle proceeds through 11 other animal signs: the Ox, Tiger, Rabbit, Dragon, Snake, Horse, Goat, Monkey, Rooster, Dog, and Pig. Complex systems of predicting fate and destiny based on one's birthday, birth season, and birth hours, such as ziping and Zi Wei Dou Shu () are still used regularly in modern-day Chinese astrology. They do not rely on direct observations of the stars.
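Because the animal signs repeat in a fixed twelve-year cycle beginning with the Rat, the sign for a given year reduces to modular arithmetic. The sketch below is a simplification anchored on 2020 (a Rat year); it deliberately ignores the fact that the traditional year begins at the lunar new year rather than on 1 January, so dates early in a Gregorian year may fall under the previous animal.

```python
# Minimal sketch: the zodiac cycle begins with the Rat and repeats every 12 years.
# Anchored on 2020, a Rat year. This ignores the lunar new year boundary, so a
# birthday in January or early February may really belong to the previous animal.
ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def chinese_zodiac(year: int) -> str:
    return ANIMALS[(year - 2020) % 12]

print(chinese_zodiac(2020))  # Rat
print(chinese_zodiac(1997))  # Ox
```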
The Korean zodiac is identical to the Chinese one. The Vietnamese zodiac is almost identical to the Chinese, except for second animal being the Water Buffalo instead of the Ox, and the fourth animal the Cat instead of the Rabbit. The Japanese have since 1873 celebrated the beginning of the new year on 1 January as per the Gregorian calendar. The Thai zodiac begins, not at Chinese New Year, but either on the first day of the fifth month in the Thai lunar calendar, or during the Songkran festival (now celebrated every 13–15 April), depending on the purpose of the use.
Theological viewpoints
Ancient
Augustine (354–430) believed that the determinism of astrology conflicted with the Christian doctrines of man's free will and responsibility, and God not being the cause of evil, but he also grounded his opposition philosophically, citing the failure of astrology to explain twins who behave differently although conceived at the same moment and born at approximately the same time.
Medieval
Some of the practices of astrology were contested on theological grounds by medieval Muslim astronomers such as Al-Farabi (Alpharabius), Ibn al-Haytham (Alhazen) and Avicenna. They said that the methods of astrologers conflicted with orthodox religious views of Islamic scholars, by suggesting that the Will of God can be known and predicted. For example, Avicenna's 'Refutation against astrology', Risāla fī ibṭāl aḥkām al-nojūm, argues against the practice of astrology while supporting the principle that planets may act as agents of divine causation. Avicenna considered that the movement of the planets influenced life on earth in a deterministic way, but argued against the possibility of determining the exact influence of the stars. Essentially, Avicenna did not deny the core dogma of astrology, but denied our ability to understand it to the extent that precise and fatalistic predictions could be made from it. Ibn Qayyim al-Jawziyya (1292–1350), in his Miftah Dar al-Sa'adah, also used physical arguments in astronomy to question the practice of judicial astrology. He recognised that the stars are much larger than the planets, and argued: "And if you astrologers answer that it is precisely because of this distance and smallness that their influences are negligible, then why is it that you claim a great influence for the smallest heavenly body, Mercury? Why is it that you have given an influence to [the head] and [the tail], which are two imaginary points [ascending and descending nodes]?"
Modern
Martin Luther denounced astrology in his Table Talk. He asked why twins like Esau and Jacob had two different natures yet were born at the same time. Luther also compared astrologers to those who say their dice will always land on a certain number. Although the dice may roll on the number a couple of times, the predictor is silent for all the times the dice fails to land on that number.
The Catechism of the Catholic Church maintains that divination, including predictive astrology, is incompatible with modern Catholic beliefs such as free will.
Scientific analysis and criticism
The scientific community rejects astrology as having no explanatory power for describing the universe, and considers it a pseudoscience. Scientific testing of astrology has been conducted, and no evidence has been found to support any of the premises or purported effects outlined in astrological traditions. There is no proposed mechanism of action by which the positions and motions of stars and planets could affect people and events on Earth that does not contradict basic and well understood aspects of biology and physics. Those who have faith in astrology have been characterised by scientists including Bart J. Bok as doing so "...in spite of the fact that there is no verified scientific basis for their beliefs, and indeed that there is strong evidence to the contrary".
Confirmation bias is a form of cognitive bias, a psychological factor that contributes to belief in astrology. Astrology believers tend to selectively remember predictions that turn out to be true, and do not remember those that turn out false. Another, separate, form of confirmation bias also plays a role, where believers often fail to distinguish between messages that demonstrate special ability and those that do not. Thus there are two distinct forms of confirmation bias that are under study with respect to astrological belief.
Demarcation
Under the criterion of falsifiability, first proposed by the philosopher of science Karl Popper, astrology is a pseudoscience. Popper regarded astrology as "pseudo-empirical" in that "it appeals to observation and experiment," but "nevertheless does not come up to scientific standards." In contrast to scientific disciplines, astrology has not responded to falsification through experiment.
In contrast to Popper, the philosopher Thomas Kuhn argued that it was not lack of falsifiability that makes astrology unscientific, but rather that the process and concepts of astrology are non-empirical. Kuhn thought that, though astrologers had, historically, made predictions that categorically failed, this in itself does not make astrology unscientific, nor do attempts by astrologers to explain away failures by saying that creating a horoscope is very difficult. Rather, in Kuhn's eyes, astrology is not science because it was always more akin to medieval medicine; astrologers followed a sequence of rules and guidelines for a seemingly necessary field with known shortcomings, but they did no research because the fields are not amenable to research, and so "they had no puzzles to solve and therefore no science to practise." While an astronomer could correct for failure, an astrologer could not. An astrologer could only explain away failure but could not revise the astrological hypothesis in a meaningful way. As such, to Kuhn, even if the stars could influence the path of humans through life, astrology is not scientific.
The philosopher Paul Thagard asserts that astrology cannot be regarded as falsified in this sense until it has been replaced with a successor. In the case of predicting behaviour, psychology is the alternative. To Thagard a further criterion of demarcation of science from pseudoscience is that the state-of-the-art must progress and that the community of researchers should be attempting to compare the current theory to alternatives, and not be "selective in considering confirmations and disconfirmations." Progress is defined here as explaining new phenomena and solving existing problems, yet astrology has failed to progress having only changed little in nearly 2000 years. To Thagard, astrologers are acting as though engaged in normal science believing that the foundations of astrology were well established despite the "many unsolved problems", and in the face of better alternative theories (psychology). For these reasons Thagard views astrology as pseudoscience.
For the philosopher Edward W. James, astrology is irrational not because of the numerous problems with mechanisms and falsification due to experiments, but because an analysis of the astrological literature shows that it is infused with fallacious logic and poor reasoning.
Effectiveness
Astrology has not demonstrated its effectiveness in controlled studies and has no scientific validity. Where it has made falsifiable predictions under controlled conditions, they have been falsified. One famous experiment included 28 astrologers who were asked to match over a hundred natal charts to psychological profiles generated by the California Psychological Inventory (CPI) questionnaire. The double-blind experimental protocol used in this study was agreed upon by a group of physicists and a group of astrologers nominated by the National Council for Geocosmic Research, who advised the experimenters, helped ensure that the test was fair and helped draw the central proposition of natal astrology to be tested. They also chose 26 out of the 28 astrologers for the tests (two more volunteered afterwards). The study, published in Nature in 1985, found that predictions based on natal astrology were no better than chance, and that the testing "...clearly refutes the astrological hypothesis."
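To see what "no better than chance" means quantitatively in a matching test of this kind, one can ask how often a pure guesser would reach a given number of correct matches. The sketch below computes that probability from the binomial distribution; the numbers fed into it (40 charts, three candidate profiles per chart, 15 correct) are hypothetical illustrations, not the figures from the Nature study.

```python
from math import comb

def p_at_least(successes: int, trials: int, p_chance: float) -> float:
    """Probability of at least `successes` correct matches out of `trials`
    if every choice were a pure guess with success probability `p_chance`."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical numbers for illustration only: 40 charts, each matched to one of
# three candidate profiles (chance = 1/3), with 15 matched correctly.
print(round(p_at_least(15, 40, 1/3), 3))
```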
In 1955, the astrologer and psychologist Michel Gauquelin stated that though he had failed to find evidence that supported indicators like zodiacal signs and planetary aspects in astrology, he did find positive correlations between the diurnal positions of some planets and success in professions that astrology traditionally associates with those planets. The best-known of Gauquelin's findings is based on the positions of Mars in the natal charts of successful athletes and became known as the Mars effect. A study conducted by seven French scientists attempted to replicate the claim, but found no statistical evidence. They attributed the effect to selective bias on Gauquelin's part, accusing him of attempting to persuade them to add or delete names from their study.
Geoffrey Dean has suggested that the effect may be caused by self-reporting of birth dates by parents rather than any issue with the study by Gauquelin. The suggestion is that a small subset of the parents may have reported altered birth times to be consistent with better astrological charts for a related profession. The number of births under astrologically undesirable conditions was also lower, indicating that parents choose dates and times to suit their beliefs. The sample group was taken from a time when belief in astrology was more common. Gauquelin had failed to find the Mars effect in more recent populations, where a nurse or doctor recorded the birth information.
Dean, a scientist and former astrologer, and psychologist Ivan Kelly conducted a large scale scientific test that involved more than one hundred cognitive, behavioural, physical, and other variables—but found no support for astrology. Furthermore, a meta-analysis pooled 40 studies that involved 700 astrologers and over 1,000 birth charts. Ten of the tests—which involved 300 participants—had the astrologers pick the correct chart interpretation out of a number of others that were not the astrologically correct chart interpretation (usually three to five others). When date and other obvious clues were removed, no significant results suggested there was any preferred chart.
Lack of mechanisms and consistency
Testing the validity of astrology can be difficult, because there is no consensus amongst astrologers as to what astrology is or what it can predict. Most professional astrologers are paid to predict the future or describe a person's personality and life, but most horoscopes only make vague untestable statements that can apply to almost anyone.
Many astrologers believe that astrology is scientific, while some have proposed conventional causal agents such as electromagnetism and gravity. Scientists reject these mechanisms as implausible since, for example, the magnetic field, when measured from Earth, of a large but distant planet such as Jupiter is far smaller than that produced by ordinary household appliances.
Western astrology has taken the earth's axial precession (also called precession of the equinoxes) into account since Ptolemy's Almagest, so the "first point of Aries", the start of the astrological year, continually moves against the background of the stars. The tropical zodiac has no connection to the stars; tropical astrologers distinguish the constellations from their historically associated sign, thereby avoiding complications involving precession. Charpak and Broch, noting this, referred to astrology based on the tropical zodiac as being "...empty boxes that have nothing to do with anything and are devoid of any consistency or correspondence with the stars." Sole use of the tropical zodiac is inconsistent with references made, by the same astrologers, to the Age of Aquarius, which depends on when the vernal point enters the constellation of Aquarius.
They also argued that astrologers usually have only a small knowledge of astronomy, and often do not take into account basic principles—such as the precession of the equinoxes, which changes the position of the sun with time. They commented on the example of Élizabeth Teissier, who wrote that, "The sun ends up in the same place in the sky on the same date each year", as the basis for the idea that two people with the same birthday, but a number of years apart, should be under the same planetary influence. Charpak and Broch noted that, "There is a difference of about twenty-two thousand miles between Earth's location on any specific date in two successive years", and that thus they should not be under the same influence according to astrology. Over a 40-year period there would be a difference greater than 780,000 miles.
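Charpak and Broch's figure can be roughly reconstructed from two standard approximations: the sidereal year (measured against the fixed stars) exceeds the tropical year by about 20 minutes because of precession, and the Earth moves along its orbit at roughly 18.5 miles per second. The calculation below is a plausible back-of-the-envelope reconstruction, not the authors' own working.

```python
# Rough reconstruction of the "about twenty-two thousand miles" figure.
# Approximate constants: the sidereal year exceeds the tropical year by ~20.4
# minutes, and Earth's mean orbital speed is ~29.8 km/s (~18.5 miles/s).
year_difference_s = 20.4 * 60        # ~20.4 minutes, in seconds
orbital_speed_mps = 18.5             # miles per second

shift_per_year = year_difference_s * orbital_speed_mps
print(round(shift_per_year))         # roughly 22,600 miles per year
print(round(shift_per_year * 40))    # over 900,000 miles across 40 years,
                                     # consistent with "greater than 780,000"
```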
Reception in the social sciences
The general consensus of astronomers and other natural scientists is that astrology is a pseudoscience which carries no predictive capability, with many philosophers of science considering it a "paradigm or prime example of pseudoscience." Some scholars in the social sciences have cautioned against categorizing astrology, especially ancient astrology, as "just" a pseudoscience or projecting the distinction backwards into the past. Thagard, while demarcating it as a pseudoscience, notes that astrology "should be judged as not pseudoscientific in classical or Renaissance times...Only when the historical and social aspects of science are neglected does it become plausible that pseudoscience is an unchanging category." Historians of science such as Tamsyn Barton, Roger Beck, Francesca Rochberg, and Wouter J. Hanegraaff argue that such a wholesale description is anachronistic when applied to historical contexts, stressing that astrology was not pseudoscience before the 18th century and that the discipline was important to the development of medieval science. R. J. Hankinson writes in the context of Hellenistic astrology that "the belief in the possibility of [astrology] was, at least some of the time, the result of careful reflection on the nature and structure of the universe."
Nicholas Campion, both an astrologer and academic historian of astrology, argues that Indigenous astronomy is largely used as a synonym for astrology in academia, and that modern Indian and Western astrology are better understood as modes of cultural astronomy or ethnoastronomy. Roy Willis and Patrick Curry draw a distinction between propositional episteme and metaphoric metis in the ancient world, identifying astrology with the latter and noting that the central concern of astrology "is not knowledge (factual, let alone scientific) but (ethical, spiritual and pragmatic)". Similarly, historian of science Justin Niermeier-Dohoney writes that astrology was "more than simply a science of prediction using the stars and comprised a vast body of beliefs, knowledge, and practices with the overarching theme of understanding the relationship between humanity and the rest of the cosmos through an interpretation of stellar, solar, lunar, and planetary movement." Scholars such as Assyriologist Matthew Rutz have begun using the term "astral knowledge" rather than astrology "to better describe a category of beliefs and practices much broader than the term 'astrology' can capture."
Cultural impact
Western politics and society
In the West, political leaders have sometimes consulted astrologers. For example, the British intelligence agency MI5 employed Louis de Wohl as an astrologer after it was reported that Adolf Hitler used astrology to time his actions. The War Office was "...interested to know what Hitler's own astrologers would be telling him from week to week." In fact, de Wohl's predictions were so inaccurate that he was soon labelled a "complete charlatan", and later evidence showed that Hitler considered astrology "complete nonsense". After John Hinckley's attempted assassination of US President Ronald Reagan, first lady Nancy Reagan commissioned astrologer Joan Quigley to act as the secret White House astrologer. However, Quigley's role ended in 1988 when it became public through the memoirs of former chief of staff, Donald Regan.
There was a boom in interest in astrology in the late 1960s. The sociologist Marcello Truzzi described three levels of involvement of "Astrology-believers" to account for its revived popularity in the face of scientific discrediting. He found that most astrology-believers did not think that it was a scientific explanation with predictive power. Instead, those superficially involved, knowing "next to nothing" about astrology's 'mechanics', read newspaper astrology columns, and could benefit from "tension-management of anxieties" and "a cognitive belief-system that transcends science." Those at the second level usually had their horoscopes cast and sought advice and predictions. They were much younger than those at the first level, and could benefit from knowledge of the language of astrology and the resulting ability to belong to a coherent and exclusive group. Those at the third level were highly involved and usually cast horoscopes for themselves. Astrology provided this small minority of astrology-believers with a "meaningful view of their universe and [gave] them an understanding of their place in it." This third group took astrology seriously, possibly as an overarching religious worldview (a sacred canopy, in Peter L. Berger's phrase), whereas the other two groups took it playfully and irreverently.
In 1953, the sociologist Theodor W. Adorno conducted a study of the astrology column of a Los Angeles newspaper as part of a project examining mass culture in capitalist society. Adorno believed that popular astrology, as a device, invariably leads to statements that encouraged conformity—and that astrologers who go against conformity, by discouraging performance at work etc., risk losing their jobs. Adorno concluded that astrology is a large-scale manifestation of systematic irrationalism, where individuals are subtly led—through flattery and vague generalisations—to believe that the author of the column is addressing them directly. Adorno drew a parallel with the phrase opium of the people, by Karl Marx, by commenting, "occultism is the metaphysic of the dopes."
A 2005 Gallup poll and a 2009 survey by the Pew Research Center reported that 25% of US adults believe in astrology, while a 2018 Pew survey found a figure of 29%. According to data released in the National Science Foundation's 2014 Science and Engineering Indicators study, "Fewer Americans rejected astrology in 2012 than in recent years." The NSF study noted that in 2012, "slightly more than half of Americans said that astrology was 'not at all scientific,' whereas nearly two-thirds gave this response in 2010. The comparable percentage has not been this low since 1983." Astrology apps became popular in the late 2010s, some receiving millions of dollars in Silicon Valley venture capital.
India and Japan
In India, there is a long-established and widespread belief in astrology. It is commonly used for daily life, particularly in matters concerning marriage and career, and makes extensive use of electional, horary and karmic astrology. Indian politics have also been influenced by astrology. It is still considered a branch of the Vedanga. In 2001, Indian scientists and politicians debated and critiqued a proposal to use state money to fund research into astrology, resulting in permission for Indian universities to offer courses in Vedic astrology.
In February 2011, the Bombay High Court reaffirmed astrology's standing in India when it dismissed a case that challenged its status as a science.
In Japan, strong belief in astrology has led to dramatic changes in the fertility rate and the number of abortions in the years of Fire Horse. Adherents believe that women born in hinoeuma years are unmarriageable and bring bad luck to their father or husband. In 1966, the number of babies born in Japan dropped by over 25% as parents tried to avoid the stigma of having a daughter born in the hinoeuma year.
Literature and music
The fourteenth-century English poets John Gower and Geoffrey Chaucer both referred to astrology in their works, including Gower's Confessio Amantis and Chaucer's The Canterbury Tales. Chaucer commented explicitly on astrology in his Treatise on the Astrolabe, demonstrating personal knowledge of one area, judicial astrology, with an account of how to find the ascendant or rising sign.
In the fifteenth century, references to astrology, such as with similes, became "a matter of course" in English literature.
In the sixteenth century, John Lyly's 1597 play, The Woman in the Moon, is wholly motivated by astrology, while Christopher Marlowe makes astrological references in his plays Doctor Faustus and Tamburlaine (both c. 1590), and Sir Philip Sidney refers to astrology at least four times in his romance The Countess of Pembroke's Arcadia (c. 1580). Edmund Spenser uses astrology both decoratively and causally in his poetry, revealing "...unmistakably an abiding interest in the art, an interest shared by a large number of his contemporaries." George Chapman's play, Byron's Conspiracy (1608), similarly uses astrology as a causal mechanism in the drama. William Shakespeare's attitude towards astrology is unclear, with contradictory references in plays including King Lear, Antony and Cleopatra, and Richard II. Shakespeare was familiar with astrology and made use of his knowledge of astrology in nearly every play he wrote, assuming a basic familiarity with the subject in his commercial audience. Outside theatre, the physician and mystic Robert Fludd practised astrology, as did the quack doctor Simon Forman. In Elizabethan England, "The usual feeling about astrology ... [was] that it is the most useful of the sciences."
In seventeenth century Spain, Lope de Vega, with a detailed knowledge of astronomy, wrote plays that ridicule astrology. In his pastoral romance La Arcadia (1598), it leads to absurdity; in his novela Guzman el Bravo (1624), he concludes that the stars were made for man, not man for the stars. Calderón de la Barca wrote the 1641 comedy Astrologo Fingido (The Pretended Astrologer); the plot was borrowed by the French playwright Thomas Corneille for his 1651 comedy Feint Astrologue.
The most famous piece of music influenced by astrology is the orchestral suite The Planets. Written by the British composer Gustav Holst (1874–1934), and first performed in 1918, the framework of The Planets is based upon the astrological symbolism of the planets. Each of the seven movements of the suite is based upon a different planet, though the movements are not in the order of the planets from the Sun. The composer Colin Matthews wrote an eighth movement entitled Pluto, the Renewer, first performed in 2000, as the suite was written prior to Pluto's discovery. In 1937, another British composer, Constant Lambert, wrote a ballet on astrological themes, called Horoscope. In 1974, the New Zealand composer Edwin Carr wrote The Twelve Signs: An Astrological Entertainment for orchestra without strings. Camille Paglia acknowledges astrology as an influence on her work of literary criticism Sexual Personae (1990). The American comedian Harvey Sid Fisher is best known for his comedic songs about astrology.
Astrology features strongly in Eleanor Catton's The Luminaries, recipient of the 2013 Man Booker Prize.
See also
Astrology and science
Astrology software
Barnum effect
Glossary of astrology
List of astrological traditions, types, and systems
List of topics characterised as pseudoscience
Jewish astrology
Scientific skepticism
Worship of heavenly bodies
Notes
References
Works cited
Further reading
External links
Digital International Astrology Library (ancient astrological works)
Biblioastrology (www.biblioastrology.com) (specialised bibliography)
Paris Observatory
Pseudoscience | Astrology | [
"Astronomy"
] | 10,515 | [
"Astrology",
"History of astronomy"
] |
2,142 | https://en.wikipedia.org/wiki/List%20of%20artificial%20intelligence%20projects | The following is a list of current and past, non-classified notable artificial intelligence projects.
Specialized projects
Brain-inspired
Blue Brain Project, an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level.
Google Brain, a deep learning project part of Google X attempting to have intelligence similar or equal to human-level.
Human Brain Project, ten-year scientific research project, based on exascale supercomputers.
Cognitive architectures
4CAPS, developed at Carnegie Mellon University under Marcel A. Just
ACT-R, developed at Carnegie Mellon University under John R. Anderson.
AIXI, Universal Artificial Intelligence developed by Marcus Hutter at IDSIA and ANU.
CALO, a DARPA-funded, 25-institution effort to integrate many artificial intelligence approaches (natural language processing, speech recognition, machine vision, probabilistic logic, planning, reasoning, many forms of machine learning) into an AI assistant that learns to help manage your office environment.
CHREST, developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire.
CLARION, developed under Ron Sun at Rensselaer Polytechnic Institute and University of Missouri.
CoJACK, an ACT-R inspired extension to the JACK multi-agent system that adds a cognitive architecture to the agents for eliciting more realistic (human-like) behaviors in virtual environments.
Copycat, by Douglas Hofstadter and Melanie Mitchell at Indiana University.
DUAL, developed at the New Bulgarian University under Boicho Kokinov.
FORR developed by Susan L. Epstein at The City University of New York.
IDA and LIDA, implementing Global Workspace Theory, developed under Stan Franklin at the University of Memphis.
OpenCog Prime, developed using the OpenCog Framework.
Procedural Reasoning System (PRS), developed by Michael Georgeff and Amy L. Lansky at SRI International.
Psi-Theory developed under Dietrich Dörner at the Otto-Friedrich University in Bamberg, Germany.
Soar, developed under Allen Newell and John Laird at Carnegie Mellon University and the University of Michigan.
Society of Mind and its successor The Emotion Machine proposed by Marvin Minsky.
Subsumption architectures, developed e.g. by Rodney Brooks (though it could be argued whether they are cognitive).
Games
AlphaGo, software developed by Google that plays the Chinese board game Go.
Chinook, a computer program that plays English draughts; the first to win the world champion title in the competition against humans.
Deep Blue, a chess-playing computer developed by IBM which beat Garry Kasparov in 1997.
Halite, an artificial intelligence programming competition created by Two Sigma in 2016.
Libratus, a poker AI that beat world-class poker players in 2017, intended to be generalisable to other applications.
The Matchbox Educable Noughts and Crosses Engine (sometimes called the Machine Educable Noughts and Crosses Engine or MENACE) was a mechanical computer made from 304 matchboxes designed and built by artificial intelligence researcher Donald Michie in 1961.
Quick, Draw!, an online game developed by Google that challenges players to draw a picture of an object or idea and then uses a neural network to guess what the drawing is.
The Samuel Checkers-playing Program (1959) was among the world's first successful self-learning programs, and as such a very early demonstration of the fundamental concept of artificial intelligence (AI).
Stockfish AI, an open source chess engine currently ranked the highest in many computer chess rankings.
TD-Gammon, a program that learned to play world-class backgammon partly by playing against itself (temporal difference learning with neural networks).
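TD-Gammon's self-play learning rests on temporal-difference updates. As a rough, generic illustration (not TD-Gammon's actual implementation, which combined TD(λ) with a neural network rather than a lookup table), the sketch below shows the tabular TD(0) update rule; the state names and parameter values are placeholders.

```python
# Minimal sketch of the TD(0) value update at the heart of temporal-difference
# learning (illustrative only; TD-Gammon itself used TD(lambda) with a neural net).
def td0_update(value, state, next_state, reward, alpha=0.1, gamma=1.0):
    """Move V(state) toward the bootstrapped target reward + gamma * V(next_state)."""
    target = reward + gamma * value.get(next_state, 0.0)
    value[state] = value.get(state, 0.0) + alpha * (target - value.get(state, 0.0))
    return value

V = {}
V = td0_update(V, "s0", "s1", reward=0.0)
V = td0_update(V, "s1", "terminal", reward=1.0)
print(V)   # V(s1) moves toward the final reward; V(s0) follows on later visits
```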
Internet activism
Serenata de Amor, project for the analysis of public expenditures and detect discrepancies.
Knowledge and reasoning
Alice (Microsoft), a project from Microsoft Research Lab aimed at improving decision-making in Economics
Braina, an intelligent personal assistant application with a voice interface for Windows OS.
Cyc, an attempt to assemble an ontology and database of everyday knowledge, enabling human-like reasoning.
Eurisko, a language by Douglas Lenat for solving problems which consists of heuristics, including some for how to use and change its heuristics.
Google Now, an intelligent personal assistant with a voice interface in Google's Android and Apple Inc.'s iOS, as well as Google Chrome web browser on personal computers.
Holmes, a new AI created by Wipro.
Microsoft Cortana, an intelligent personal assistant with a voice interface in Microsoft's various Windows 10 editions.
Mycin, an early medical expert system.
Open Mind Common Sense, a project based at the MIT Media Lab to build a large common sense knowledge base from online contributions.
Siri, an intelligent personal assistant and knowledge navigator with a voice-interface in Apple Inc.'s iOS and macOS.
SNePS, simultaneously a logic-based, frame-based, and network-based knowledge representation, reasoning, and acting system.
Viv (software), a new AI by the creators of Siri.
Wolfram Alpha, an online service that answers queries by computing the answer from structured data.
MindsDB, an AI automation platform for building AI/ML-powered features and applications.
Motion and manipulation
AIBO, the robot pet for the home, grew out of Sony's Computer Science Laboratory (CSL).
Cog, a robot developed by MIT to study theories of cognitive science and artificial intelligence, now discontinued.
Music
Melomics, a bioinspired technology for the composition and synthesis of music, in which computers develop their own style rather than mimicking musicians.
Natural language processing
AIML, an XML dialect for creating natural language software agents.
Apache Lucene, a high-performance, full-featured text search engine library written entirely in Java.
Apache OpenNLP, a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking and parsing.
Artificial Linguistic Internet Computer Entity (A.L.I.C.E.), a natural language processing chatterbot.
ChatGPT, a chatbot built on top of OpenAI's GPT-3.5 and GPT-4 family of large language models.
Claude, a family of large language models developed by Anthropic and launched in 2023. Claude LLMs achieved high coding scores in several recognized LLM benchmarks.
Cleverbot, successor to Jabberwacky, now with 170m lines of conversation, Deep Context, fuzziness and parallel processing. Cleverbot learns from around 2 million user interactions per month.
ELIZA, a famous 1966 computer program by Joseph Weizenbaum, which parodied person-centered therapy.
FreeHAL, a self-learning conversation simulator (chatterbot) which uses semantic nets to organize its knowledge and closely imitate human behavior in conversations.
Gemini, a family of multimodal large language models developed by Google DeepMind. It drives the Gemini chatbot, formerly known as Bard.
GigaChat, a chatbot by Russian Sberbank.
GPT-3, a 2020 language model developed by OpenAI that can produce text difficult to distinguish from that written by a human.
Jabberwacky, a chatbot by Rollo Carpenter, aiming to simulate natural human chat.
LaMDA, a family of conversational neural language models developed by Google.
LLaMA, a 2023 language model family developed by Meta that includes 7, 13, 33 and 65 billion parameter models.
Mycroft, a free and open-source intelligent personal assistant that uses a natural language user interface.
PARRY, another early chatterbot, written in 1972 by Kenneth Colby, attempting to simulate a paranoid schizophrenic.
SHRDLU, an early natural language processing computer program developed by Terry Winograd at MIT from 1968 to 1970.
SYSTRAN, a machine translation technology by the company of the same name, used by Yahoo!, AltaVista and Google, among others.
DBRX, a 136-billion-parameter open-source large language model developed by MosaicML and Databricks.
Speech recognition
CMU Sphinx, a group of speech recognition systems developed at Carnegie Mellon University.
DeepSpeech, an open-source Speech-To-Text engine based on Baidu's deep speech research paper.
Whisper, an open-source speech recognition system developed at OpenAI.
Speech synthesis
15.ai, a real-time artificial intelligence text-to-speech tool developed by an anonymous researcher from MIT.
Amazon Polly, a speech synthesis software by Amazon.
Festival Speech Synthesis System, a general multi-lingual speech synthesis system developed at the Centre for Speech Technology Research (CSTR) at the University of Edinburgh.
WaveNet, a deep neural network for generating raw audio.
Video
Synthesia, a video creation and editing platform with AI-generated avatars that resemble real human beings.
Other
1 the Road, the first novel marketed by an AI.
AlphaFold is a deep learning based system developed by DeepMind for prediction of protein structure.
Otter.ai is a speech-to-text synthesis and summary platform, which allows users to record online meetings as text. It additionally creates live captions during meetings.
Synthetic Environment for Analysis and Simulations (SEAS), a model of the real world used by Homeland Security and the United States Department of Defense that uses simulation and AI to predict and evaluate future events and courses of action.
Multipurpose projects
Software libraries
Apache Mahout, a library of scalable machine learning algorithms.
Deeplearning4j, an open-source, distributed deep learning framework written for the JVM.
Keras, a high level open-source software library for machine learning (works on top of other libraries).
Microsoft Cognitive Toolkit (previously known as CNTK), an open source toolkit for building artificial neural networks.
OpenNN, a comprehensive C++ library implementing neural networks.
PyTorch, an open-source Tensor and Dynamic neural network in Python.
TensorFlow, an open-source software library for machine learning.
Theano, a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones.
GUI frameworks
Neural Designer, a commercial deep learning tool for predictive analytics.
Neuroph, a Java neural network framework.
OpenCog, a GPL-licensed framework for artificial intelligence written in C++, Python and Scheme.
PolyAnalyst, a commercial tool for data mining, text mining, and knowledge management.
RapidMiner, an environment for machine learning and data mining, now developed commercially.
Weka, a free implementation of many machine learning algorithms in Java.
Cloud services
Data Applied, a web based data mining environment.
Watson, a pilot service by IBM to uncover and share data-driven insights, and to spur cognitive applications.
See also
Comparison of cognitive architectures
Comparison of deep-learning software
References
External links
AI projects on GitHub
AI projects on SourceForge
Artificial intelligence projects | List of artificial intelligence projects | [
"Technology"
] | 2,276 | [
"Computing-related lists"
] |
2,193 | https://en.wikipedia.org/wiki/Arcology | Arcology, a portmanteau of "architecture" and "ecology", is a field of creating architectural design principles for very densely populated and ecologically low-impact human habitats.
The term was coined in 1969 by architect Paolo Soleri, who believed that a completed arcology would provide space for a variety of residential, commercial, and agricultural facilities while minimizing individual human environmental impact. These structures have been largely hypothetical, as no large-scale arcology has yet been built.
The concept has been popularized by various science fiction writers. Larry Niven and Jerry Pournelle provided a detailed description of an arcology in their 1981 novel Oath of Fealty. William Gibson mainstreamed the term in his seminal 1984 cyberpunk novel Neuromancer, where each corporation has its own self-contained city known as an arcology. More recently, authors such as Peter Hamilton in Neutronium Alchemist and Paolo Bacigalupi in The Water Knife explicitly used arcologies as part of their scenarios. They are often portrayed as self-contained or economically self-sufficient.
Development
An arcology is distinguished from a merely large building in that it is designed to lessen the impact of human habitation on any given ecosystem. It could be self-sustainable, employing all or most of its own available resources for a comfortable life: power, climate control, food production, air and water conservation and purification, sewage treatment, etc. An arcology is designed to make it possible to supply those items for a large population. An arcology would supply and maintain its own municipal or urban infrastructures in order to operate and connect with other urban environments apart from its own.
Arcologies were proposed in order to reduce human impact on natural resources. Arcology designs might apply conventional building and civil engineering techniques in very large, but practical projects in order to achieve pedestrian economies of scale that have proven, post-automobile, to be difficult to achieve in other ways.
Frank Lloyd Wright proposed an early version called Broadacre City although, in contrast to an arcology, his idea is comparatively two-dimensional and depends on a road network. Wright's plan described transportation, agriculture, and commerce systems that would support an economy. Critics said that Wright's solution failed to account for population growth, and assumed a more rigid democracy than the US actually has.
Buckminster Fuller proposed the Old Man River's City project, a domed city with a capacity of 125,000, as a solution to the housing problems in East St. Louis, Illinois.
Paolo Soleri proposed later solutions, and coined the term "arcology". Soleri describes ways of compacting city structures in three dimensions to combat two-dimensional urban sprawl, to economize on transportation and other energy uses. Like Wright, Soleri proposed changes in transportation, agriculture, and commerce. Soleri explored reductions in resource consumption and duplication, land reclamation; he also proposed to eliminate most private transportation. He advocated for greater "frugality" and favored greater use of shared social resources, including public transit (and public libraries).
Similar real-world projects
Arcosanti is an experimental "arcology prototype", a demonstration project under construction in central Arizona since 1970. Designed by Paolo Soleri, its primary purpose is to demonstrate Soleri's personal designs, his application of principles of arcology to create a pedestrian-friendly urban form.
Many cities in the world have proposed projects adhering to the design principles of the arcology concept, like Tokyo, and Dongtan near Shanghai. The Dongtan project may have collapsed, and it failed to open for the Shanghai World Expo in 2010. The Ihme-Zentrum in Hanover was an attempt to build a "city within a city".
McMurdo Station of the United States Antarctic Program and other scientific research stations on Antarctica resemble the popular conception of an arcology as a technologically advanced, relatively self-sufficient human community. The Antarctic research base provides living and entertainment amenities for roughly 3,000 staff who visit each year. Its remoteness and the measures needed to protect its population from the harsh environment give it an insular character. The station is not self-sufficient: the U.S. military delivers 30,000,000 liters (8,000,000 US gal) of fuel, along with supplies and equipment, each year through its Operation Deep Freeze resupply effort. It is, however, isolated from conventional support networks. Under international treaty, it must avoid damage to the surrounding ecosystem.
Begich Towers operates like a small-scale arcology encompassing nearly all of the population of Whittier, Alaska. The building contains residential housing as well as a police station, grocery, and municipal offices.
Whittier once boasted a second structure known as the Buckner Building. The Buckner Building still stands but was deemed unfit for habitation after the 1969 earthquake.
The Line was planned as a linear smart city in Neom, Tabuk Province, Saudi Arabia, designed to have no cars, streets or carbon emissions. The Line is planned to be the first development in Neom, a $500 billion project. The city's plans anticipated a population of 9 million. Excavation work had started along the entire length of the project by October 2022. However, the project was scaled down in 2024 to a much shorter initial phase intended to house 300,000 people.
In popular culture
Most proposals to build real arcologies have failed due to financial, structural or conceptual shortcomings. Arcologies are therefore found primarily in fictional works.
In Robert Silverberg's The World Inside, most of the global population of 75 billion live inside giant skyscrapers, called "urbmons", each of which contains hundreds of thousands of people. The urbmons are arranged in "constellations". Each urbmon is divided into "neighborhoods" of 40 or so floors. All the needs of the inhabitants are provided inside the building – food is grown outside and brought into the building – so the idea of going outside is heretical and can be a sign of madness. The book examines human life when the population density is extremely high.
Another significant example is the 1981 novel Oath of Fealty by Larry Niven and Jerry Pournelle, in which a segment of the population of Los Angeles has moved into an arcology. The plot examines the social changes that result, both inside and outside the arcology. Thus the arcology is not just a plot device but a subject of critique.
In the city-building video game SimCity 2000, self-contained arcologies can be built, reducing the infrastructure needs of the city.
The isometric, cyberpunk-themed action role-playing game The Ascent takes place in a futuristic dystopian arcology on the alien world Veles, and prominently uses the structure and its levels to flesh out progression in the game: the player starts in the bottom levels of the sewers, with the ultimate goal of reaching the top of the structure to leave the city.
See also
References
Notes
Further reading
Soleri, Paolo. Arcology: The City in the Image of Man. 1969: Cambridge, Massachusetts, MIT Press.
External links
Arcology: The City in the Image of Man by Paolo Soleri (full text online)
Arcology.com – useful links
The Night Land by William Hope Hodgson (full text online)
Victory City
A discussion of arcology concepts
What is an Arcology?
Usage of "arcology" vs. "hyperstructure"
Arcology.com ("An arcology in southern China" on front page)
Arcology ("An arcology is a self-contained environment...")
SculptorsWiki: Arcology ("The only arcology yet on Earth...")
Review of Shadowrun: Renraku Arcology ("What's an arcology? A self-contained, largely self-sufficient living, working, recreational structure...")
Megastructures
Exploratory engineering
Environmental design
Human habitats
Planned communities
Urban studies and planning terminology
Cyberpunk themes
Architecture related to utopias | Arcology | [
"Technology",
"Engineering"
] | 1,642 | [
"Environmental design",
"Exploratory engineering",
"Architecture related to utopias",
"Megastructures",
"Design",
"Proposed arcologies",
"Architecture"
] |
2,250 | https://en.wikipedia.org/wiki/Abiotic%20stress | Abiotic stress is the negative impact of non-living factors on the living organisms in a specific environment. The non-living variable must influence the environment beyond its normal range of variation to adversely affect the population performance or individual physiology of the organism in a significant way.
Whereas a biotic stress would include living disturbances such as fungi or harmful insects, abiotic stress factors, or stressors, are naturally occurring, often intangible and inanimate factors such as intense sunlight, temperature or wind that may cause harm to the plants and animals in the area affected. Abiotic stress is essentially unavoidable. Abiotic stress affects animals, but plants are especially dependent, if not solely dependent, on environmental factors, so it is particularly constraining. Abiotic stress is the most harmful factor concerning the growth and productivity of crops worldwide. Research has also shown that abiotic stressors are at their most harmful when they occur in combination.
Examples
Abiotic stress comes in many forms. The most common of the stressors are the easiest for people to identify, but there are many other, less recognizable abiotic stress factors which affect environments constantly.
The most basic stressors include:
High winds
Extreme temperatures
Drought
Flood
Other natural disasters, such as tornadoes and wildfires.
Cold
Heat
Nutrient deficiency
Lesser-known stressors generally occur on a smaller scale. They include: poor edaphic conditions like rock content and pH levels, high radiation, compaction, contamination, and other, highly specific conditions like rapid rehydration during seed germination.
Effects
Abiotic stress, as a natural part of every ecosystem, will affect organisms in a variety of ways. Although these effects may be either beneficial or detrimental, the location of the area is crucial in determining the extent of the impact that abiotic stress will have. The higher the latitude of the area affected, the greater the impact of abiotic stress will be on that area. A taiga or boreal forest is therefore exposed to the full force of whatever abiotic stress factors arise, while tropical zones are much less susceptible to such stressors.
Benefits
One example of a situation where abiotic stress plays a constructive role in an ecosystem is in natural wildfires. While they can be a human safety hazard, it is productive for these ecosystems to burn out every once in a while so that new organisms can begin to grow and thrive. Even though it is healthy for an ecosystem, a wildfire can still be considered an abiotic stressor, because it puts an obvious stress on individual organisms within the area. Every tree that is scorched and each bird nest that is devoured is a sign of the abiotic stress. On the larger scale, though, natural wildfires are positive manifestations of abiotic stress.
It also needs to be taken into account, when looking for benefits of abiotic stress, that one phenomenon may not affect an entire ecosystem in the same way. While a flood will kill most plants living low on the ground in a certain area, if there is rice there, it will thrive in the wet conditions. Another example of this is in phytoplankton and zooplankton. The same types of conditions are usually considered stressful for these two types of organisms. They act very similarly when exposed to ultraviolet light and most toxins, but at elevated temperatures the phytoplankton reacts negatively, while the thermophilic zooplankton reacts positively to the increase in temperature. The two may be living in the same environment, but an increase in temperature of the area would prove stressful only for one of the organisms.
Lastly, abiotic stress has enabled species to grow, develop, and evolve, furthering natural selection as it picks out the weakest of a group of organisms. Both plants and animals have evolved mechanisms allowing them to survive extremes.
Detriments
The most obvious detriment concerning abiotic stress involves farming. One study has claimed that abiotic stress causes more crop loss than any other factor, and that it reduces the yields of most major crops by more than 50% relative to their potential.
Because abiotic stress is widely considered a detrimental effect, the research on this branch of the issue is extensive. For more information on the harmful effects of abiotic stress, see the sections below on plants and animals.
In plants
A plant's first line of defense against abiotic stress is in its roots. If the soil holding the plant is healthy and biologically diverse, the plant will have a higher chance of surviving stressful conditions.
The plant responses to stress are dependent on the tissue or organ affected by the stress. For example, transcriptional responses to stress are tissue or cell specific in roots and are quite different depending on the stress involved.
One of the primary responses to abiotic stress such as high salinity is the disruption of the Na+/K+ ratio in the cytoplasm of the plant cell. High concentrations of Na+, for example, can decrease the capacity for the plant to take up water and also alter enzyme and transporter functions. Evolved adaptations to efficiently restore cellular ion homeostasis have led to a wide variety of stress tolerant plants.
Facilitation, or the positive interactions between different species of plants, is an intricate web of association in a natural environment; it describes how plants work together. In areas of high stress, the level of facilitation is especially high as well. This may be because plants need a stronger network to survive in a harsher environment, so interactions between species, such as cross-pollination or mutualism, become more common as plants cope with the severity of their habitat.
Plants also adapt very differently from one another, even from a plant living in the same area. When a group of different plant species was prompted by a variety of different stress signals, such as drought or cold, each plant responded uniquely. Hardly any of the responses were similar, even though the plants had become accustomed to exactly the same home environment.
Serpentine soils (media with low concentrations of nutrients and high concentrations of heavy metals) can be a source of abiotic stress. Initially, the absorption of toxic metal ions is limited by cell membrane exclusion. Ions that are absorbed into tissues are sequestered in cell vacuoles. This sequestration mechanism is facilitated by proteins on the vacuole membrane. Examples of plants that adapt to serpentine soils are metallophytes, or hyperaccumulators, which are known for their ability to absorb heavy metals using root-to-shoot translocation (metals are moved into the shoots rather than retained in the roots). They are also distinguished by their ability to take up and tolerate toxic heavy metals.
Chemical priming has been proposed to increase tolerance to abiotic stresses in crop plants. In this method, which is analogous to vaccination, stress-inducing chemical agents are introduced to the plant in brief doses so that the plant begins preparing defense mechanisms. Thus, when the abiotic stress occurs, the plant has already prepared defense mechanisms that can be activated faster and increase tolerance. Prior exposure to tolerable doses of biotic stresses, such as phloem-feeding insect infestation, has also been shown to increase tolerance to abiotic stresses in plants.
Impact on food production
Abiotic stress mostly affects plants used in agriculture. Some examples of adverse conditions (which may be caused by climate change) are high or low temperatures, drought, salinity, and toxins.
Rice (Oryza sativa) is a classic example. Rice is a staple food throughout the world, especially in China and India. Rice plants can undergo different types of abiotic stresses, like drought and high salinity. These stress conditions adversely affect rice production. Genetic diversity has been studied among several rice varieties with different genotypes, using molecular markers.
Chickpea production is affected by drought. Chickpeas are one of the most important foods in the world.
Wheat is another major crop that is affected by drought: lack of water affects the plant development, and can wither the leaves.
Maize crops can be affected by high temperature and drought, leading to the loss of maize crops due to poor plant development.
Soybean is a major source of protein, and its production is also affected by drought.
Salt stress in plants
Soil salinization, the accumulation of water-soluble salts to levels that negatively impact plant production, is a global phenomenon affecting approximately 831 million hectares of land. More specifically, the phenomenon threatens 19.5% of the world's irrigated agricultural land and 2.1% of the world's non-irrigated (dry-land) agricultural lands. High soil salinity content can be harmful to plants because water-soluble salts can alter osmotic potential gradients and consequently inhibit many cellular functions. For example, high soil salinity content can inhibit the process of photosynthesis by limiting a plant's water uptake; high levels of water-soluble salts in the soil can decrease the osmotic potential of the soil and consequently decrease the difference in water potential between the soil and the plant's roots, thereby limiting electron flow from H2O to P680 in Photosystem II's reaction center.
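The osmotic argument above can be expressed with the standard water-potential relations from plant physiology. The sketch below uses the conventional textbook symbols (Ψ, Ψs, Ψp) and the van 't Hoff approximation; these are general relations, not values or notation taken from this article.

```latex
% Total water potential as the sum of solute (osmotic) and pressure components:
\Psi = \Psi_{s} + \Psi_{p}
% Van 't Hoff approximation for the solute potential of a dilute solution
% (i = van 't Hoff factor, C = molar solute concentration, R = gas constant, T = absolute temperature):
\Psi_{s} \approx -\,i\,C\,R\,T
% Water flows from higher to lower \Psi, so uptake requires \Psi_{\text{root}} < \Psi_{\text{soil}};
% dissolved salts make \Psi_{s,\text{soil}} more negative and shrink that difference.
```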
Over generations, many plants have evolved different mechanisms to counter the effects of salinity. One effective counter to salinity in plants is the hormone ethylene. Ethylene is known for regulating plant growth and development and for dealing with stress conditions. Many central membrane proteins in plants, such as ETO2, ERS1 and EIN2, are used for ethylene signaling in many plant growth processes. Mutations in these proteins can lead to heightened salt sensitivity and can limit plant growth. The effects of salinity have been studied on Arabidopsis plants with mutated ERS1, ERS2, ETR1, ETR2 and EIN4 proteins. These proteins are used for ethylene signaling under certain stress conditions, such as salt, and the ethylene precursor ACC is used to suppress any sensitivity to the salt stress.
Phosphate starvation in plants
Phosphorus (P) is an essential macronutrient required for plant growth and development, but it is present only in limited quantities in most of the world's soil. Plants use P mainly in the form of soluble inorganic phosphate (PO₄³⁻) but are subject to abiotic stress when there is not enough soluble PO₄³⁻ in the soil. Phosphorus forms insoluble complexes with Ca and Mg in alkaline soils and with Al and Fe in acidic soils, making it unavailable to plant roots. When bioavailable P in the soil is limited, plants show extensive symptoms of abiotic stress: short primary roots, more lateral roots and root hairs to increase the surface available for phosphate absorption, and exudation of organic acids and phosphatases that release phosphate from complex P-containing molecules and make it available to growing organs. It has been shown that PHR1, a MYB-related transcription factor, is a master regulator of the P-starvation response in plants. PHR1 has also been shown to regulate extensive remodeling of lipids and metabolites during phosphorus limitation stress.
Drought stress
Drought stress, defined as naturally occurring water deficit, is a main cause of crop losses in agriculture. This is because water is essential for many fundamental processes in plant growth. It has become especially important in recent years to find a way to combat drought stress. A decrease in precipitation and a consequent increase in drought are extremely likely in the future due to global warming. Plants have developed many mechanisms and adaptations to deal with drought stress. One of the leading ways that plants combat drought stress is by closing their stomata. A key hormone regulating stomatal opening and closing is abscisic acid (ABA). Synthesis of ABA causes the ABA to bind to receptors. This binding then affects the opening of ion channels, thereby decreasing turgor pressure in the stomata and causing them to close. Recent studies by Gonzalez-Villagra et al. (2018) have shown that ABA levels increase in drought-stressed plants. They showed that when plants were placed in a stressful situation they produced more ABA to try to conserve any water they had in their leaves. Another extremely important factor in dealing with drought stress and regulating the uptake and export of water is aquaporins (AQPs). AQPs are integral membrane proteins that make up channels. These channels' main job is the transport of water and other essential solutes. AQPs are both transcriptionally and post-transcriptionally regulated by many different factors such as ABA, GA3, pH and Ca2+; and the specific levels of AQPs in certain parts of the plant, such as roots or leaves, help to draw as much water into the plant as possible. By understanding the mechanisms of both AQPs and the hormone ABA, scientists will be better able to produce drought-resistant plants in the future.
Notably, plants that are consistently exposed to drought have been found to form a sort of "memory". A study by Tombesi et al. found that plants which had previously been exposed to drought were able to develop a strategy to minimize water loss and decrease water use. They found that plants exposed to drought conditions actually changed the way they regulated their stomata and what the authors called the "hydraulic safety margin", so as to decrease the vulnerability of the plant. By changing the regulation of stomata and consequently transpiration, plants were able to function better when less water was available.
In animals
For animals, the most stressful of all the abiotic stressors is heat. This is because many species are unable to regulate their internal body temperature. Even in the species that are able to regulate their own temperature, it is not always a completely accurate system. Temperature determines metabolic rates, heart rates, and other very important factors within the bodies of animals, so an extreme temperature change can easily distress the animal's body. Animals can respond to extreme heat, for example, through natural heat acclimation or by burrowing into the ground to find a cooler space.
In animals, too, high genetic diversity is beneficial in providing resilience against harsh abiotic stressors, acting as a kind of reserve when a species is subjected to the pressures of natural selection. Galling insects, for example, are among the most specialized and diverse herbivores on the planet, and their extensive protections against abiotic stress factors have helped the group attain that position.
In endangered species
Biodiversity is determined by many things, and one of them is abiotic stress. If an environment is highly stressful, biodiversity tends to be low. If abiotic stress does not have a strong presence in an area, the biodiversity will be much higher.
This idea leads into an understanding of how abiotic stress and endangered species are related. It has been observed across a variety of environments that as the level of abiotic stress increases, the number of species decreases. This means that species are more likely to become threatened, endangered, and even extinct when and where abiotic stress is especially harsh.
See also
Ecophysiology
References
Stress (biological and psychological)
Biodiversity
Habitat
Agriculture
Botany | Abiotic stress | [
"Biology"
] | 3,203 | [
"Plants",
"Biodiversity",
"Botany"
] |
2,268 | https://en.wikipedia.org/wiki/Chemistry%20of%20ascorbic%20acid | Ascorbic acid is an organic compound with formula C6H8O6, originally called hexuronic acid. It is a white solid, but impure samples can appear yellowish. It dissolves freely in water to give mildly acidic solutions. It is a mild reducing agent.
Ascorbic acid exists as two enantiomers (mirror-image isomers), commonly denoted "l" (for "levo") and "d" (for "dextro"). The l isomer is the one most often encountered: it occurs naturally in many foods, and is one form ("vitamer") of vitamin C, an essential nutrient for humans and many animals. Deficiency of vitamin C causes scurvy, formerly a major disease of sailors in long sea voyages. It is used as a food additive and a dietary supplement for its antioxidant properties. The "d" form (erythorbic acid) can be made by chemical synthesis, but has no significant biological role.
History
The antiscorbutic properties of certain foods were demonstrated in the 18th century by James Lind. In 1907, Axel Holst and Theodor Frølich discovered that the antiscorbutic factor was a water-soluble chemical substance, distinct from the one that prevented beriberi. Between 1928 and 1932, Albert Szent-Györgyi isolated a candidate for this substance, which he called "hexuronic acid", first from plants and later from animal adrenal glands. In 1932 Charles Glen King confirmed that it was indeed the antiscorbutic factor.
In 1933, sugar chemist Walter Norman Haworth, working with samples of "hexuronic acid" that Szent-Györgyi had isolated from paprika and sent him the previous year, deduced the correct structure and optical-isomeric nature of the compound, and in 1934 reported its first synthesis. In reference to the compound's antiscorbutic properties, Haworth and Szent-Györgyi proposed the name "a-scorbic acid", and later specifically l-ascorbic acid, for the compound. Because of their work, in 1937 two Nobel Prizes, in Chemistry and in Physiology or Medicine, were awarded to Haworth and Szent-Györgyi, respectively.
Chemical properties
Acidity
Ascorbic acid is a furan-based lactone of 2-ketogluconic acid. It contains an enediol group adjacent to the carbonyl. This −C(OH)=C(OH)−C(=O)− structural pattern is characteristic of reductones, and it increases the acidity of one of the enol hydroxyl groups. The deprotonated conjugate base is the ascorbate anion, which is stabilized by electron delocalization that results from resonance between two forms:
For this reason, ascorbic acid is much more acidic than would be expected if the compound contained only isolated hydroxyl groups.
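To put a rough number on this enhanced acidity: using a commonly cited first dissociation constant for ascorbic acid of about pKa1 ≈ 4.1 (a literature value, not one stated in this article), a Henderson–Hasselbalch estimate shows how strongly the ascorbate anion dominates at physiological pH.

```latex
\frac{[\mathrm{ascorbate}^{-}]}{[\mathrm{ascorbic\ acid}]}
  = 10^{\,\mathrm{pH}-\mathrm{p}K_{a1}}
  \approx 10^{\,7.4-4.1}
  \approx 2\times 10^{3}
```

That is, at pH 7.4 well over 99.9% of the compound is present as the ascorbate anion.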
Salts
The ascorbate anion forms salts, such as sodium ascorbate, calcium ascorbate, and potassium ascorbate.
Esters
Ascorbic acid can also react with organic acids as an alcohol forming esters such as ascorbyl palmitate and ascorbyl stearate.
Nucleophilic attack
Nucleophilic attack of ascorbic acid on a proton results in a 1,3-diketone:
Oxidation
The ascorbate ion is the predominant species at typical biological pH values. It is a mild reducing agent and antioxidant, typically reacting with oxidants of the reactive oxygen species, such as the hydroxyl radical.
Reactive oxygen species are damaging to animals and plants at the molecular level due to their possible interaction with nucleic acids, proteins, and lipids. Sometimes these radicals initiate chain reactions. Ascorbate can terminate these chain radical reactions by electron transfer. The oxidized forms of ascorbate are relatively unreactive and do not cause cellular damage.
Ascorbic acid and its sodium, potassium, and calcium salts are commonly used as antioxidant food additives. These compounds are water-soluble and thus cannot protect fats from oxidation; for this purpose, the fat-soluble esters of ascorbic acid with long-chain fatty acids (ascorbyl palmitate or ascorbyl stearate) can be used as antioxidant food additives. A sodium-dependent active transport process enables absorption of ascorbic acid from the intestine.
Ascorbate readily donates a hydrogen atom to free radicals, forming the radical anion semidehydroascorbate (also known as monodehydroascorbate), a resonance-stabilized semitrione:
Loss of an electron from semidehydroascorbate to produce the 1,2,3-tricarbonyl pseudodehydroascorbate is thermodynamically disfavored, which helps prevent propagation of free radical chain reactions such as autoxidation:
However, being a good electron donor, excess ascorbate in the presence of free metal ions can not only promote but also initiate free radical reactions, thus making it a potentially dangerous pro-oxidative compound in certain metabolic contexts.
Semidehydroascorbate oxidation instead occurs in conjunction with hydration, yielding the bicyclic hemiketal dehydroascorbate. In particular, semidehydroascorbate undergoes disproportionation to ascorbate and dehydroascorbate:
Aqueous solutions of dehydroascorbate are unstable, undergoing hydrolysis with a half-life of 5–15 minutes at . Decomposition products include diketogulonic acid, xylonic acid, threonic acid and oxalic acid.
Other reactions
It creates volatile compounds when mixed with glucose and amino acids at 90 °C.
It is a cofactor in tyrosine oxidation, though because a crude extract of animal liver is used, it is unclear which reaction catalyzed by which enzyme is being helped here. For known roles in enzymatic reactions, see .
Because it reduces iron(III) and chelates iron ions, it enhances the oral absorption of non-heme iron. This property also applies to its enantiomer.
Conversion to oxalate
In 1958, it was discovered that ascorbic acid can be converted to oxalate, a key component of calcium oxalate kidney stones. The process begins with the formation of dehydroascorbic acid (DHA) from the ascorbyl radical. While DHA can be recycled back to ascorbic acid, a portion irreversibly degrades to 2,3-diketogulonic acid (DKG), which then breaks down to both oxalate and the sugars L-erythrulose and threosone. Research conducted in the 1960s suggested ascorbic acid could substantially contribute to urinary oxalate content (possibly over 40%), but these estimates have been questioned due to methodological limitations. Subsequent large cohort studies have yielded conflicting results regarding the link between vitamin C intake and kidney stone formation. The overall clinical significance of ascorbic acid consumption to kidney stone risk, however, remains inconclusive, although several studies have suggested a potential association, especially with high-dose supplementation in men.
Uses
Food additive
The main use of l-ascorbic acid and its salts is as food additives, mostly to combat oxidation and prevent discoloration of the product during storage. It is approved for this purpose in the EU (with E number E300), the US, Australia, and New Zealand.
The "d" enantiomer (erythorbic acid) shares all of the non-biological chemical properties with the more common enantiomer. As a result, it is an equally effective food antioxidant, and is also approved for use in processed foods.
Dietary supplement
Another major use of l-ascorbic acid is as a dietary supplement. It is on the World Health Organization's List of Essential Medicines. Its deficiency over a prolonged period causes scurvy, which is characterized by fatigue, widespread weakness in connective tissues, and capillary fragility. The deficiency affects multiple organ systems because of ascorbic acid's role in the biochemical reactions of connective tissue synthesis.
Niche, non-food uses
Ascorbic acid is easily oxidized and so is used as a reductant in photographic developer solutions (among others) and as a preservative.
In fluorescence microscopy and related fluorescence-based techniques, ascorbic acid can be used as an antioxidant to increase fluorescent signal and chemically retard dye photobleaching.
It is also commonly used to remove dissolved metal stains, such as iron, from fiberglass swimming pool surfaces.
In plastic manufacturing, ascorbic acid can be used to assemble molecular chains more quickly and with less waste than traditional synthesis methods.
Heroin users are known to use ascorbic acid as a means to convert heroin base to a water-soluble salt so that it can be injected.
Owing to its reaction with iodine, it is used to negate the effects of iodine tablets in water purification: added to the sterilized water, it reacts with the residual iodine, removing its taste, color, and smell. This is why it is often sold in most sporting goods stores as a second set of tablets, Potable Aqua-Neutralizing Tablets, along with the potassium iodide tablets.
Intravenous high-dose ascorbate is being used as a chemotherapeutic and biological response modifying agent. It is undergoing clinical trials.
It is sometimes used as a urinary acidifier to enhance the antiseptic effect of methenamine.
Synthesis
Natural biosynthesis of vitamin C occurs through various processes in many plants and animals.
Industrial preparation
Seventy percent of the world's supply of ascorbic acid is produced in China. Ascorbic acid is prepared in industry from glucose in a method based on the historical Reichstein process. In the first of a five-step process, glucose is catalytically hydrogenated to sorbitol, which is then oxidized by the microorganism Acetobacter suboxydans to sorbose. Only one of the six hydroxy groups is oxidized by this enzymatic reaction. From this point, two routes are available. Treatment of the product with acetone in the presence of an acid catalyst converts four of the remaining hydroxyl groups to acetals. The unprotected hydroxyl group is oxidized to the carboxylic acid by reaction with the catalytic oxidant TEMPO (regenerated by sodium hypochlorite bleaching solution). Historically, industrial preparation via the Reichstein process used potassium permanganate as the bleaching solution. Acid-catalyzed hydrolysis of this product performs the dual function of removing the two acetal groups and ring-closing lactonization. This step yields ascorbic acid. Each of the five steps has a yield larger than 90%.
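As a rough illustrative check, not a figure from the article, five sequential steps each exceeding 90% yield bound the overall yield of the route from below:

```latex
\text{overall yield} \;>\; (0.90)^{5} \;\approx\; 0.59 \quad (\text{about }59\%)
```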
A biotechnological process, first developed in China in the 1960s and further refined in the 1990s, bypasses the acetone-protecting groups. In it, a second genetically modified microbe species, such as mutant Erwinia, among others, oxidises sorbose into 2-ketogluconic acid (2-KGA), which can then undergo ring-closing lactonization via dehydration. This method underlies the predominant process used by the ascorbic acid industry in China, which supplies 70% of the world's ascorbic acid. Researchers are exploring means for one-step fermentation.
Determination
The traditional way to analyze the ascorbic acid content is by titration with an oxidizing agent, and several procedures have been developed.
The popular iodometry approach uses iodine in the presence of a starch indicator. Iodine is reduced by ascorbic acid, and when all the ascorbic acid has reacted, the iodine is in excess, forming a blue-black complex with the starch indicator. This indicates the end-point of the titration.
As an alternative, ascorbic acid can be treated with iodine in excess, followed by back titration with sodium thiosulfate using starch as an indicator.
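For either titration, the 1:1 stoichiometry between iodine and ascorbic acid makes the arithmetic simple. The sketch below is illustrative only: the function name and the example numbers are invented, and for the back-titration variant the iodine consumed must first be corrected for the thiosulfate used, as noted in the docstring.

```python
# Direct iodometric titration: C6H8O6 + I2 -> C6H6O6 + 2 HI  (1:1 mole ratio)
M_ASCORBIC_ACID = 176.12  # g/mol, molar mass of ascorbic acid (C6H8O6)

def ascorbic_acid_mass(c_iodine_mol_per_l: float, v_iodine_l: float) -> float:
    """Mass of ascorbic acid (g) equivalent to the iodine consumed at the end-point.

    For the back-titration variant, pass the iodine actually consumed, i.e.
    total iodine added minus half the moles of thiosulfate used
    (I2 + 2 S2O3^2- -> 2 I^- + S4O6^2-).
    """
    n_iodine = c_iodine_mol_per_l * v_iodine_l  # mol I2 that reacted
    n_ascorbic = n_iodine                        # 1:1 stoichiometry
    return n_ascorbic * M_ASCORBIC_ACID

# Example: end-point reached after 21.5 mL of 0.0100 mol/L iodine solution
print(f"{ascorbic_acid_mass(0.0100, 0.0215) * 1000:.1f} mg")  # ~37.9 mg
```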
This iodometric method has been revised to exploit the reaction of ascorbic acid with iodate and iodide in acid solution. Electrolyzing the potassium iodide solution produces iodine, which reacts with ascorbic acid. The end of the process is determined by potentiometric titration like Karl Fischer titration. The amount of ascorbic acid can be calculated by Faraday's law.
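Because the iodine is generated electrolytically at constant current, the ascorbic acid found follows directly from Faraday's law. A minimal sketch, assuming the usual two-electron oxidation of ascorbic acid; the current and time are illustrative values, not taken from the article.

```python
# Coulometric determination via Faraday's law: n = Q / (z F), with z = 2 electrons
# transferred per molecule of ascorbic acid oxidized by the electrogenerated iodine.
F = 96485.0               # C/mol, Faraday constant
M_ASCORBIC_ACID = 176.12  # g/mol, molar mass of ascorbic acid

def ascorbic_acid_from_charge(current_a: float, time_s: float) -> float:
    """Mass of ascorbic acid (g) corresponding to the charge passed up to the end-point."""
    charge = current_a * time_s   # total charge Q in coulombs
    n = charge / (2 * F)          # moles of ascorbic acid (2 e- per molecule)
    return n * M_ASCORBIC_ACID

# Example: a constant 10.0 mA applied for 400 s before the potentiometric end-point
print(f"{ascorbic_acid_from_charge(0.0100, 400.0) * 1000:.2f} mg")  # ~3.65 mg
```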
Another alternative uses N-bromosuccinimide (NBS) as the oxidizing agent in the presence of potassium iodide and starch. The NBS first oxidizes the ascorbic acid; when the latter is exhausted, the NBS liberates the iodine from the potassium iodide, which then forms the blue-black complex with starch.
See also
Colour retention agent
Erythorbic acid: a diastereomer of ascorbic acid.
Mineral ascorbates: salts of ascorbic acid
Acids in wine
References
Further reading
External links
IPCS Poisons Information Monograph (PIM) 046
Interactive 3D-structure of vitamin C with details on the x-ray structure
Organic acids
Antioxidants
Dietary antioxidants
Coenzymes
Corrosion inhibitors
Furanones
Vitamers
Vitamin C
Biomolecules
3-Hydroxypropenals | Chemistry of ascorbic acid | [
"Chemistry",
"Biology"
] | 2,857 | [
"Organic acids",
"Natural products",
"Acids",
"Biochemistry",
"Coenzymes",
"Organic compounds",
"Structural biology",
"Biomolecules",
"Corrosion inhibitors",
"Process chemicals",
"Molecular biology"
] |
2,274 | https://en.wikipedia.org/wiki/Arthur%20Eddington | Sir Arthur Stanley Eddington (28 December 1882 – 22 November 1944) was an English astronomer, physicist, and mathematician. He was also a philosopher of science and a populariser of science. The Eddington limit, the natural limit to the luminosity of stars, or the radiation generated by accretion onto a compact object, is named in his honour.
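For context, the Eddington limit mentioned here has a standard form for a fully ionized hydrogen plasma; the formula and numerical scaling below are the textbook ones rather than values quoted in this article.

```latex
% Radiation pressure on free electrons (Thomson cross-section \sigma_T) balances
% gravity acting on the associated protons (mass m_p):
L_{\mathrm{Edd}} = \frac{4\pi G M m_{p} c}{\sigma_{T}}
  \approx 1.26\times 10^{31}\,\mathrm{W}\,\frac{M}{M_{\odot}}
  \approx 3.3\times 10^{4}\,L_{\odot}\,\frac{M}{M_{\odot}}
```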
Around 1920, he foreshadowed the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington was the first to correctly speculate that the source was fusion of hydrogen into helium.
Eddington wrote a number of articles that announced and explained Einstein's theory of general relativity to the English-speaking world. World War I had severed many lines of scientific communication, and new developments in German science were not well known in England. He also conducted an expedition to observe the solar eclipse of 29 May 1919 on the Island of Príncipe that provided one of the earliest confirmations of general relativity, and he became known for his popular expositions and interpretations of the theory.
Early years
Eddington was born 28 December 1882 in Kendal, Westmorland (now Cumbria), England, the son of Quaker parents, Arthur Henry Eddington, headmaster of the Quaker School, and Sarah Ann Shout.
His father taught at a Quaker training college in Lancashire before moving to Kendal to become headmaster of Stramongate School. He died in the typhoid epidemic which swept England in 1884. His mother was left to bring up her two children with relatively little income. The family moved to Weston-super-Mare where at first Stanley (as his mother and sister always called Eddington) was educated at home before spending three years at a preparatory school. The family lived in a house called Varzin, at 42 Walliscote Road, Weston-super-Mare. A commemorative plaque on the building explains Eddington's contributions to science.
In 1893 Eddington entered Brynmelyn School. He proved to be a most capable scholar, particularly in mathematics and English literature. His performance earned him a scholarship to Owens College, Manchester (what was later to become the University of Manchester), in 1898, which he was able to attend, having turned 16 that year. He spent the first year in a general course, but he turned to physics for the next three years. Eddington was greatly influenced by his physics and mathematics teachers, Arthur Schuster and Horace Lamb. At Manchester, Eddington lived at Dalton Hall, where he came under the lasting influence of the Quaker mathematician J. W. Graham. His progress was rapid, winning him several scholarships, and he graduated with a BSc in physics with First Class Honours in 1902.
Based on his performance at Owens College, he was awarded a scholarship to Trinity College, Cambridge, in 1902. His tutor at Cambridge was Robert Alfred Herman and in 1904 Eddington became the first ever second-year student to be placed as Senior Wrangler. After receiving his M.A. in 1905, he began research on thermionic emission in the Cavendish Laboratory. This did not go well, and meanwhile he spent time teaching mathematics to first year engineering students. This hiatus was brief. Through a recommendation by E. T. Whittaker, his senior colleague at Trinity College, he secured a position at the Royal Observatory, Greenwich, where he was to embark on his career in astronomy, a career whose seeds had been sown even as a young child when he would often "try to count the stars".
Astronomy
In January 1906, Eddington was nominated to the post of chief assistant to the Astronomer Royal at the Royal Greenwich Observatory. He left Cambridge for Greenwich the following month. He was put to work on a detailed analysis of the parallax of 433 Eros on photographic plates that had started in 1900. He developed a new statistical method based on the apparent drift of two background stars, winning him the Smith's Prize in 1907. The prize won him a fellowship of Trinity College, Cambridge. In December 1912, George Darwin, son of Charles Darwin, died suddenly, and Eddington was promoted to his chair as the Plumian Professor of Astronomy and Experimental Philosophy in early 1913. Later that year, Robert Ball, holder of the theoretical Lowndean chair, also died, and Eddington was named the director of the entire Cambridge Observatory the next year. In May 1914, he was elected a fellow of the Royal Society: he was awarded the Royal Medal in 1928 and delivered the Bakerian Lecture in 1926.
Eddington also investigated the interior of stars through theory, and developed the first true understanding of stellar processes. He began this in 1916 with investigations of possible physical explanations for Cepheid variable stars. He began by extending Karl Schwarzschild's earlier work on radiation pressure in Emden polytropic models. These models treated a star as a sphere of gas held up against gravity by internal thermal pressure, and one of Eddington's chief additions was to show that radiation pressure was necessary to prevent collapse of the sphere. He developed his model despite knowingly lacking firm foundations for understanding opacity and energy generation in the stellar interior. However, his results allowed for calculation of temperature, density and pressure at all points inside a star (thermodynamic anisotropy), and Eddington argued that his theory was so useful for further astrophysical investigation that it should be retained despite not being based on completely accepted physics. James Jeans contributed the important suggestion that stellar matter would certainly be ionized, but that was the end of any collaboration between the pair, who became famous for their lively debates.
Eddington defended his method by pointing to the utility of his results, particularly his important mass–luminosity relation. This had the unexpected result of showing that virtually all stars, including giants and dwarfs, behaved as ideal gases. In the process of developing his stellar models, he sought to overturn current thinking about the sources of stellar energy. Jeans and others defended the Kelvin–Helmholtz mechanism, which was based on classical mechanics, while Eddington speculated broadly about the qualitative and quantitative consequences of possible proton–electron annihilation and nuclear fusion processes.
Around 1920, he anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even the fact that stars are largely composed of hydrogen (see metallicity), had not yet been discovered. Eddington's paper, based on knowledge at the time, reasoned that:
The leading theory of stellar energy, the contraction hypothesis (cf. the Kelvin–Helmholtz mechanism), should cause stars' rotation to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening.
The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy.
Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom, suggesting that if such a combination could happen, it would release considerable energy as a byproduct.
If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy. (We now know that most "ordinary" stars contain far more than 5% hydrogen.)
Further elements might also be fused, and other scientists had speculated that stars were the "crucible" in which light elements combined to create heavy elements, but without more-accurate measurements of their atomic masses nothing more could be said at the time.
All of these speculations were proven correct in the following decades.
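Eddington's energy argument, combining Aston's mass defect with Einstein's relation, can be made concrete with a back-of-the-envelope estimate. The numbers below use modern atomic masses (so the defect comes out near 0.7%, close to the 0.8% Aston reported) and are illustrative rather than the figures Eddington had available.

```latex
% Mass lost when four hydrogen atoms combine into one helium atom:
\Delta m = 4\,m(^{1}\mathrm{H}) - m(^{4}\mathrm{He})
         \approx 4(1.0078\,\mathrm{u}) - 4.0026\,\mathrm{u}
         \approx 0.0287\,\mathrm{u}
% Released as energy via E = mc^2 (1 u \approx 931.5 MeV/c^2):
E = \Delta m\,c^{2} \approx 0.0287 \times 931.5\ \mathrm{MeV}
  \approx 26.7\ \mathrm{MeV}\ \text{per helium nucleus formed}
```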
With these assumptions, he demonstrated that the interior temperature of stars must be millions of degrees. In 1924, he discovered the mass–luminosity relation for stars (see Lecchini in ). Despite some disagreement, Eddington's models were eventually accepted as a powerful tool for further investigation, particularly in issues of stellar evolution. The confirmation of his estimated stellar diameters by Michelson in 1920 proved crucial in convincing astronomers unused to Eddington's intuitive, exploratory style. Eddington's theory appeared in mature form in 1926 as The Internal Constitution of the Stars, which became an important text for training an entire generation of astrophysicists.
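The mass–luminosity relation mentioned above is often summarized today by a rough power law for main-sequence stars of intermediate mass; the form below is a modern approximation, not Eddington's original formulation.

```latex
\frac{L}{L_{\odot}} \approx \left(\frac{M}{M_{\odot}}\right)^{3.5}
```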
Eddington's work in astrophysics in the late 1920s and the 1930s continued his work in stellar structure, and precipitated further clashes with Jeans and Edward Arthur Milne. An important topic was the extension of his models to take advantage of developments in quantum physics, including the use of degeneracy physics in describing dwarf stars.
Dispute with Chandrasekhar on the mass limit of stars
The topic of extension of his models precipitated his dispute with Subrahmanyan Chandrasekhar, who was then a student at Cambridge. Chandrasekhar's work presaged the discovery of black holes, which at the time seemed so absurdly non-physical that Eddington refused to believe that Chandrasekhar's purely mathematical derivation had consequences for the real world. Eddington was wrong and his motivation is controversial. Chandrasekhar's narrative of this incident, in which his work is harshly rejected, portrays Eddington as rather cruel and dogmatic. Chandra benefited from his friendship with Eddington. It was Eddington and Milne who put up Chandra's name for the fellowship for the Royal Society which Chandra obtained. An FRS meant he was at the Cambridge high-table with all the luminaries and a very comfortable endowment for research. Eddington's criticism seems to have been based partly on a suspicion that a purely mathematical derivation from relativity theory was not enough to explain the seemingly daunting physical paradoxes that were inherent to degenerate stars, but to have "raised irrelevant objections" in addition, as Thanu Padmanabhan puts it.
Relativity
During World War I, Eddington was secretary of the Royal Astronomical Society, which meant he was the first to receive a series of letters and papers from Willem de Sitter regarding Einstein's theory of general relativity. Eddington was fortunate in being not only one of the few astronomers with the mathematical skills to understand general relativity, but owing to his internationalist and pacifist views inspired by his Quaker religious beliefs, one of the few at the time who was still interested in pursuing a theory developed by a German physicist. He quickly became the chief supporter and expositor of relativity in Britain. He and Astronomer Royal Frank Watson Dyson organized two expeditions to observe a solar eclipse in 1919 to make the first empirical test of Einstein's theory: the measurement of the deflection of light by the Sun's gravitational field. In fact, Dyson's argument for the indispensability of Eddington's expertise in this test was what prevented Eddington from eventually having to enter military service.
When conscription was introduced in Britain on 2 March 1916, Eddington intended to apply for an exemption as a conscientious objector. Cambridge University authorities instead requested and were granted an exemption on the ground of Eddington's work being of national interest. In 1918, this was appealed against by the Ministry of National Service. Before the appeal tribunal in June, Eddington claimed conscientious objector status, which was not recognized and would have ended his exemption in August 1918. A further two hearings took place in June and July, respectively. Eddington's personal statement at the June hearing about his objection to war based on religious grounds is on record. The Astronomer Royal, Sir Frank Dyson, supported Eddington at the July hearing with a written statement, emphasising Eddington's essential role in the solar eclipse expedition to Príncipe in May 1919. Eddington made clear his willingness to serve in the Friends' Ambulance Unit, under the jurisdiction of the British Red Cross, or as a harvest labourer. However, the tribunal's decision to grant a further twelve months' exemption from military service was on condition of Eddington continuing his astronomy work, in particular in preparation for the Príncipe expedition. The war ended before the end of his exemption.
After the war, Eddington travelled to the island of Príncipe off the west coast of Africa to watch the solar eclipse of 29 May 1919. During the eclipse, he took pictures of the stars (several stars in the Hyades cluster, including Kappa Tauri of the constellation Taurus) whose line of sight from the Earth happened to be near the Sun's location in the sky at that time of year. This effect is noticeable only during a total solar eclipse when the sky is dark enough to see stars which are normally obscured by the Sun's brightness. According to the theory of general relativity, stars with light rays that passed near the Sun would appear to have been slightly shifted because their light had been curved by its gravitational field. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein.
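The quantity under test is the angular deflection of starlight grazing the solar limb. The standard values, which the passage above alludes to but does not state, are:

```latex
% Deflection of a light ray passing mass M at impact parameter b:
\theta_{\mathrm{GR}} = \frac{4GM}{c^{2}b} \approx 1.75''\ \text{at the solar limb},
\qquad
\theta_{\mathrm{Newtonian}} = \frac{2GM}{c^{2}b} \approx 0.87''
```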
Eddington's observations published the next year allegedly confirmed Einstein's theory, and were hailed at the time as evidence of general relativity over the Newtonian model. The news was reported in newspapers all over the world as a major story. Afterward, Eddington embarked on a campaign to popularize relativity and the expedition as landmarks both in scientific development and international scientific relations.
It has been claimed that Eddington's observations were of poor quality, and he had unjustly discounted simultaneous observations at Sobral, Brazil, which appeared closer to the Newtonian model, but a 1979 re-analysis with modern measuring equipment and contemporary software validated Eddington's results and conclusions. The quality of the 1919 results was indeed poor compared to later observations, but was sufficient to persuade contemporary astronomers. The rejection of the results from the expedition to Brazil was due to a defect in the telescopes used which, again, was completely accepted and well understood by contemporary astronomers.
Throughout this period, Eddington lectured on relativity, and was particularly well known for his ability to explain the concepts in lay terms as well as scientific ones. He collected many of these into the Mathematical Theory of Relativity in 1923, which Albert Einstein suggested was "the finest presentation of the subject in any language." He was an early advocate of Einstein's general relativity, and an interesting anecdote well illustrates his humour and personal intellectual investment: Ludwik Silberstein, a physicist who thought of himself as an expert on relativity, approached Eddington at the Royal Society's meeting of 6 November 1919, where Eddington had defended Einstein's relativity with his Brazil-Príncipe solar eclipse calculations, and, with some degree of scepticism, ruefully charged Arthur with claiming to be one of only three men who actually understood the theory (Silberstein, of course, was including himself and Einstein as the other two). When Eddington refrained from replying, he insisted Arthur not be "so shy", whereupon Eddington replied, "Oh, no! I was wondering who the third one might be!"
Cosmology
Eddington was also heavily involved with the development of the first generation of general relativistic cosmological models. He had been investigating the instability of the Einstein universe when he learned of both Lemaître's 1927 paper postulating an expanding or contracting universe and Hubble's work on the recession of the spiral nebulae. He felt the cosmological constant must have played the crucial role in the universe's evolution from an Einsteinian steady state to its current expanding state, and most of his cosmological investigations focused on the constant's significance and characteristics. In The Mathematical Theory of Relativity, Eddington interpreted the cosmological constant to mean that the universe is "self-gauging".
Fundamental theory and the Eddington number
During the 1920s until his death, Eddington increasingly concentrated on what he called "fundamental theory" which was intended to be a unification of quantum theory, relativity, cosmology, and gravitation. At first he progressed along "traditional" lines, but turned increasingly to an almost numerological analysis of the dimensionless ratios of fundamental constants.
His basic approach was to combine several fundamental constants in order to produce a dimensionless number. In many cases these would result in numbers close to 10⁴⁰, its square, or its square root. He was convinced that the mass of the proton and the charge of the electron were a "natural and complete specification for constructing a Universe" and that their values were not accidental. One of the discoverers of quantum mechanics, Paul Dirac, also pursued this line of investigation, which has become known as the Dirac large numbers hypothesis.
A somewhat damaging statement in his defence of these concepts involved the fine-structure constant, α. At the time it was measured to be very close to 1/136, and he argued that the value should in fact be exactly 1/136 for epistemological reasons. Later measurements placed the value much closer to 1/137, at which point he switched his line of reasoning to argue that one more should be added to the degrees of freedom, so that the value should in fact be exactly 1/137, the Eddington number. Wags at the time started calling him "Arthur Adding-one". This change of stance detracted from Eddington's credibility in the physics community. The current CODATA value is approximately 1/137.036.
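For reference, the dimensionless constant at issue is defined as follows; the numerical value is the modern one, not a claim about the measurements available to Eddington.

```latex
\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.036}
```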
Eddington believed he had identified an algebraic basis for fundamental physics, which he termed "E-numbers" (representing a certain group – a Clifford algebra). These in effect incorporated spacetime into a higher-dimensional structure. While his theory has long been neglected by the general physics community, similar algebraic notions underlie many modern attempts at a grand unified theory. Moreover, Eddington's emphasis on the values of the fundamental constants, and specifically upon dimensionless numbers derived from them, is nowadays a central concern of physics. In particular, he predicted the number of hydrogen atoms in the Universe to be 136 × 2²⁵⁶ ≈ 1.57 × 10⁷⁹, or equivalently half of the total number of particles (protons plus electrons). He did not complete this line of research before his death in 1944; his book Fundamental Theory was published posthumously in 1948.
Eddington number for cycling
Eddington is credited with devising a measure of a cyclist's long-distance riding achievements. The Eddington number in the context of cycling is defined as the maximum number E such that the cyclist has cycled at least E miles on at least E days.
For example, an Eddington number of 70 would imply that the cyclist has cycled at least 70 miles in a day on at least 70 occasions. Achieving a high Eddington number is difficult, since moving from, say, 70 to 75 will (probably) require more than five new long-distance rides, since any rides shorter than 75 miles will no longer be included in the reckoning. Eddington's own life-time E-number was 84.
The Eddington number for cycling is analogous to the h-index that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist.
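The cycling E-number defined above is straightforward to compute from a list of daily ride distances. The sketch below is a minimal illustration; the function name and the sample data are invented for the example, not taken from the article.

```python
def eddington_number(daily_miles):
    """Largest E such that at least E rides covered at least E miles each."""
    rides = sorted(daily_miles, reverse=True)    # longest rides first
    e = 0
    for i, miles in enumerate(rides, start=1):   # i = number of rides considered so far
        if miles >= i:
            e = i                                # the i longest rides are all >= i miles
        else:
            break
    return e

# Example: five rides of 3 miles or more, but only three rides of 4 miles or more
print(eddington_number([5, 3, 9, 1, 4, 3]))  # -> 3
```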
Philosophy
Idealism
Eddington wrote in his book The Nature of the Physical World that "The stuff of the world is mind-stuff."
The idealist conclusion was not integral to his epistemology but was based on two main arguments.
The first derives directly from current physical theory. Briefly, mechanical theories of the ether and of the behaviour of fundamental particles have been discarded in both relativity and quantum physics. From this, Eddington inferred that a materialistic metaphysics was outmoded and that, in consequence, since the disjunction of materialism or idealism is assumed to be exhaustive, an idealistic metaphysics is required. The second, and more interesting, argument was based on Eddington's epistemology, and may be regarded as consisting of two parts. First, all we know of the objective world is its structure, and the structure of the objective world is precisely mirrored in our own consciousness. We therefore have no reason to doubt that the objective world too is "mind-stuff". Dualistic metaphysics, then, cannot be evidentially supported.
But, second, not only can we not know that the objective world is nonmentalistic, we also cannot intelligibly suppose that it could be material. To conceive of a dualism entails attributing material properties to the objective world. However, this presupposes that we could observe that the objective world has material properties. But this is absurd, for whatever is observed must ultimately be the content of our own consciousness, and consequently, nonmaterial.
Eddington believed that physics cannot explain consciousness - "light waves are propagated from the table to the eye; chemical changes occur in the retina; propagation of some kind occurs in the optic nerves; atomic changes follow in the brain. Just where the final leap into consciousness occurs is not clear. We do not know the last stage of the message in the physical world before it became a sensation in consciousness".
Ian Barbour, in his book Issues in Science and Religion (1966), p. 133, cites Eddington's The Nature of the Physical World (1928) for a text that argues the Heisenberg uncertainty principle provides a scientific basis for "the defense of the idea of human freedom" and his Science and the Unseen World (1929) for support of philosophical idealism, "the thesis that reality is basically mental".
Charles De Koninck points out that Eddington believed in objective reality existing apart from our minds, but was using the phrase "mind-stuff" to highlight the inherent intelligibility of the world: that our minds and the physical world are made of the same "stuff" and that our minds are the inescapable connection to the world. As De Koninck quotes Eddington,
Science
Against Albert Einstein and others who advocated determinism, indeterminism—championed by Eddington—says that a physical object has an ontologically undetermined component that is not due to the epistemological limitations of physicists' understanding. The uncertainty principle in quantum mechanics, then, would not necessarily be due to hidden variables but to an indeterminism in nature itself. Eddington proclaimed "It is a consequence of the advent of the quantum theory that physics is no longer pledged to a scheme of deterministic law".
Eddington agreed with the tenet of logical positivism that "the meaning of a scientific statement is to be ascertained by reference to the steps which would be taken to verify it".
Popular and philosophical writings
Eddington wrote a parody of The Rubaiyat of Omar Khayyam, recounting his 1919 solar eclipse experiment. It contained the following quatrain:
In addition to his textbook The Mathematical Theory of Relativity, during the 1920s and 30s, Eddington gave numerous lectures, interviews, and radio broadcasts on relativity, and later, quantum mechanics. Many of these were gathered into books, including The Nature of the Physical World and New Pathways in Science. His use of literary allusions and humour helped make these difficult subjects more accessible.
Eddington's books and lectures were immensely popular with the public, not only because of his clear exposition, but also for his willingness to discuss the philosophical and religious implications of the new physics. He argued for a deeply rooted philosophical harmony between scientific investigation and religious mysticism, and also that the positivist nature of relativity and quantum physics provided new room for personal religious experience and free will. Unlike many other spiritual scientists, he rejected the idea that science could provide proof of religious propositions.
His popular writings made him a household name in Great Britain between the world wars.
Death
Eddington died of cancer in the Evelyn Nursing Home, Cambridge, on 22 November 1944. He was unmarried. His body was cremated at Cambridge Crematorium (Cambridgeshire) on 27 November 1944; the cremated remains were buried in the grave of his mother in the Ascension Parish Burial Ground in Cambridge.
Cambridge University's North West Cambridge development has been named Eddington in his honour.
Eddington was played by David Tennant in the television film Einstein and Eddington, with Einstein played by Andy Serkis. The film was notable for its groundbreaking portrayal of Eddington as a somewhat repressed gay man. It was first broadcast in 2008.
The actor Paul Eddington was a relative, mentioning in his autobiography (in light of his own weakness in mathematics) "what I then felt to be the misfortune" of being related to "one of the foremost physicists in the world". Paul's father Albert and Sir Arthur were second cousins, both great-grandsons of William Eddington (1755–1806).
Honours
Awards and honors
Smith's Prize (1907)
International Honorary Member of the American Academy of Arts and Sciences (1922)
Bruce Medal of Astronomical Society of the Pacific (1924)
Henry Draper Medal of the National Academy of Sciences (1924)
Gold Medal of the Royal Astronomical Society (1924)
International Member of the United States National Academy of Sciences (1925)
Foreign membership of the Royal Netherlands Academy of Arts and Sciences (1926)
Prix Jules Janssen of the Société astronomique de France (French Astronomical Society) (1928)
Royal Medal of the Royal Society (1928)
Knighthood (1930)
International Member of the American Philosophical Society (1931)
Order of Merit (1938)
Honorary member of the Norwegian Astronomical Society (1939)
Hon. Freeman of Kendal, 1930
Named after him
Lunar crater Eddington
asteroid 2761 Eddington
Royal Astronomical Society's Eddington Medal
Eddington mission, now cancelled
Eddington Tower, halls of residence at the University of Essex
Eddington Astronomical Society, an amateur society based in his hometown of Kendal
Eddington, a house (group of students, used for in-school sports matches) of Kirkbie Kendal School
Eddington, new suburb of North West Cambridge, opened in 2017
Eddington Community Interest Company (CIC), 2003. A Community Centre focusing on Climate Information and projects, including a Waste Food Community Café and Larder, in partnership with SLACC (South Lakes Action on Climate Change), converting the former United Reform Church in Kendal
Service
Gave the Swarthmore Lecture in 1929
Chairman of the National Peace Council 1941–1943
President of the International Astronomical Union; of the Physical Society, 1930–32; of the Royal Astronomical Society, 1921–23
Romanes Lecturer, 1922
Gifford Lecturer, 1927
In popular culture
Eddington is a central figure in the short story "The Mathematician's Nightmare: The Vision of Professor Squarepunt" by Bertrand Russell, a work featured in The Mathematical Magpie by Clifton Fadiman.
He was portrayed by David Tennant in the television film Einstein and Eddington, a co-production of the BBC and HBO, broadcast in the United Kingdom on Saturday, 22 November 2008, on BBC2.
His thoughts on humour and religious experience were quoted in the adventure game The Witness, a production of the Thelka, Inc., released on 26 January 2016.
Time placed him on the cover on 16 April 1934.
The song “In Transit”, from the 2023 album Signs Of Life by Neil Gaiman and Fourplay String Quartet was written in memory of him.
Publications
1914. Stellar Movements and the Structure of the Universe. London: Macmillan.
1918. Report on the relativity theory of gravitation. London, Fleetway Press, Ltd.
1920. Space, Time and Gravitation: An Outline of the General Relativity Theory. Cambridge University Press.
1922. The theory of relativity and its influence on scientific thought
1923. 1952. The Mathematical Theory of Relativity. Cambridge University Press.
1925. The Domain of Physical Science. 2005 reprint:
1926. Stars and Atoms. Oxford: British Association.
1926. The Internal Constitution of Stars. Cambridge University Press.
1928. The Nature of the Physical World. MacMillan. 1935 replica edition: , University of Michigan 1981 edition: (1926–27 Gifford lectures)
1929. Science and the Unseen World. US Macmillan, UK Allen & Unwin. 1980 Reprint Arden Library . 2004 US reprint – Whitefish, Montana : Kessinger Publications: . 2007 UK reprint London, Allen & Unwin (Swarthmore Lecture), with a new foreword by George Ellis.
1930. Why I Believe in God: Science and Religion, as a Scientist Sees It. Arrow/scrollable preview.
1933. The Expanding Universe: Astronomy's 'Great Debate', 1900–1931. Cambridge University Press.
1935. New Pathways in Science. Cambridge University Press.
1936. Relativity Theory of Protons and Electrons. Cambridge Univ. Press.
1939. Philosophy of Physical Science. Cambridge University Press. (1938 Tarner lectures at Cambridge)
1946. Fundamental Theory. Cambridge University Press.
See also
Astronomy
Chandrasekhar limit
Eddington luminosity (also called the Eddington limit)
Gravitational lens
Outline of astronomy
Stellar nucleosynthesis
Timeline of stellar astronomy
List of astronomers
Science
Arrow of time
Classical unified field theories
Degenerate matter
Dimensionless physical constant
Dirac large numbers hypothesis (also called the Eddington–Dirac number)
Eddington number
Introduction to quantum mechanics
Luminiferous aether
Parameterized post-Newtonian formalism
Special relativity
Theory of everything (also called "final theory" or "ultimate theory")
Timeline of gravitational physics and relativity
List of experiments
People
List of science and religion scholars
Other
Infinite monkey theorem
Numerology
Ontic structural realism
References
Further reading
Durham, Ian T., "Eddington & Uncertainty". Physics in Perspective (September – December). Arxiv, History of Physics
Lecchini, Stefano, "How Dwarfs Became Giants. The Discovery of the Mass–Luminosity Relation" Bern Studies in the History and Philosophy of Science, pp. 224. (2007)
Stanley, Matthew. "An Expedition to Heal the Wounds of War: The 1919 Eclipse Expedition and Eddington as Quaker Adventurer." Isis 94 (2003): 57–89.
Stanley, Matthew. "So Simple a Thing as a Star: Jeans, Eddington, and the Growth of Astrophysical Phenomenology" in British Journal for the History of Science, 2007, 40: 53–82.
External links
Trinity College Chapel
Arthur Stanley Eddington (1882–1944) . University of St Andrews, Scotland.
Quotations by Arthur Eddington
Arthur Stanley Eddington The Bruce Medalists.
Russell, Henry Norris, "Review of The Internal Constitution of the Stars by A.S. Eddington". Ap.J. 67, 83 (1928).
Experiments of Sobral and Príncipe repeated in the space project in proceeding in fórum astronomical.
Biography and bibliography of Bruce medalists: Arthur Stanley Eddington
Eddington books: The Nature of the Physical World, The Philosophy of Physical Science, Relativity Theory of Protons and Electrons, and Fundamental Theory
Obituaries
Obituary 1 by Henry Norris Russell, Astrophysical Journal 101 (1943–46) 133
Obituary 2 by A. Vibert Douglas, Journal of the Royal Astronomical Society of Canada, 39 (1943–46) 1
Obituary 3 by Harold Spencer Jones and E. T. Whittaker, Monthly Notices of the Royal Astronomical Society 105 (1943–46) 68
Obituary 4 by Herbert Dingle, The Observatory 66 (1943–46) 1
The Times, Thursday, 23 November 1944; pg. 7; Issue 49998; col D: Obituary (unsigned) – Image of cutting available at
1882 births
1944 deaths
Alumni of Trinity College, Cambridge
Alumni of the Victoria University of Manchester
British anti–World War I activists
British astrophysicists
British conscientious objectors
British Christian pacifists
Corresponding Members of the Russian Academy of Sciences (1917–1925)
Corresponding Members of the USSR Academy of Sciences
British cosmologists
British Quakers
20th-century British astronomers
Fellows of Trinity College, Cambridge
Fellows of the Royal Astronomical Society
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Knights Bachelor
Members of the Order of Merit
Members of the Royal Netherlands Academy of Arts and Sciences
People from Kendal
Presidents of the Physical Society
Presidents of the Royal Astronomical Society
Recipients of the Bruce Medal
Recipients of the Gold Medal of the Royal Astronomical Society
British relativity theorists
Royal Medal winners
Senior Wranglers
20th-century British physicists
Plumian Professors of Astronomy and Experimental Philosophy
Presidents of the International Astronomical Union
Members of the American Philosophical Society | Arthur Eddington | [
"Astronomy"
] | 6,692 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
2,308 | https://en.wikipedia.org/wiki/Actinide | The actinide () or actinoid () series encompasses at least the 14 metallic chemical elements in the 5f series, with atomic numbers from 89 to 102, actinium through nobelium. Number 103, lawrencium, is also generally included despite being part of the 6d transition series. The actinide series derives its name from the first element in the series, actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide.
The 1985 IUPAC Red Book recommends that actinoid be used rather than actinide, since the suffix -ide normally indicates a negative ion. However, owing to widespread current use, actinide is still allowed. Since actinoid literally means actinium-like (cf. humanoid or android), it has been argued for semantic reasons that actinium cannot logically be an actinoid, but IUPAC acknowledges its inclusion based on common usage.
Actinium through nobelium are f-block elements, while lawrencium is a d-block element and a transition metal. The series mostly corresponds to the filling of the 5f electron shell, although as isolated atoms in the ground state many have anomalous configurations involving the filling of the 6d shell due to interelectronic repulsion. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. They all have very large atomic and ionic radii and exhibit an unusually large range of physical properties. While actinium and the late actinides (from curium onwards) behave similarly to the lanthanides, the elements thorium, protactinium, and uranium are much more similar to transition metals in their chemistry, with neptunium, plutonium, and americium occupying an intermediate position.
All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These have been used in nuclear reactors, and uranium and plutonium are critical elements of nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors.
Of the actinides, primordial thorium and uranium occur naturally in substantial quantities. The radioactive decay of uranium produces transient amounts of actinium and protactinium, and atoms of neptunium and plutonium are occasionally produced from transmutation reactions in uranium ores. The other actinides are purely synthetic elements. Nuclear weapons tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium.
In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods).
Actinides
Discovery, isolation and synthesis
Like the lanthanides, the actinides form a family of elements with similar properties. Within the actinides, there are two overlapping groups: transuranium elements, which follow uranium in the periodic table; and transplutonium elements, which follow plutonium. Compared to the lanthanides, which (except for promethium) are found in nature in appreciable quantities, most actinides are rare. Most do not occur in nature, and of those that do, only thorium and uranium do so in more than trace quantities. The most abundant or easily synthesized actinides are uranium and thorium, followed by plutonium, americium, actinium, protactinium, neptunium, and curium.
The existence of transuranium elements was suggested in 1934 by Enrico Fermi, based on his experiments. However, even though four actinides were known by that time, it was not yet understood that they formed a family similar to lanthanides. The prevailing view that dominated early research into transuranics was that they were regular elements in the 7th period, with thorium, protactinium and uranium corresponding to 6th-period hafnium, tantalum and tungsten, respectively. Synthesis of transuranics gradually undermined this point of view. By 1944, an observation that curium failed to exhibit oxidation states above 4 (whereas its supposed 6th period homolog, platinum, can reach oxidation state of 6) prompted Glenn Seaborg to formulate an "actinide hypothesis". Studies of known actinides and discoveries of further transuranic elements provided more data in support of this position, but the phrase "actinide hypothesis" (the implication being that a "hypothesis" is something that has not been decisively proven) remained in active use by scientists through the late 1950s.
At present, there are two major methods of producing isotopes of transplutonium elements: (1) irradiation of the lighter elements with neutrons; (2) irradiation with accelerated charged particles. The first method is more important for applications, as only neutron irradiation using nuclear reactors allows the production of sizeable amounts of synthetic actinides; however, it is limited to relatively light elements. The advantage of the second method is that elements heavier than plutonium, as well as neutron-deficient isotopes, can be obtained, which are not formed during neutron irradiation.
In 1962–1966, there were attempts in the United States to produce transplutonium isotopes using a series of six underground nuclear explosions. Small samples of rock were extracted from the blast area immediately after the test to study the explosion products, but no isotopes with mass number greater than 257 could be detected, despite predictions that such isotopes would have relatively long half-lives of α-decay. This non-observation was attributed to spontaneous fission owing to the large speed of the products and to other decay channels, such as neutron emission and nuclear fission.
From actinium to uranium
Uranium and thorium were the first actinides discovered. Uranium was identified in 1789 by the German chemist Martin Heinrich Klaproth in pitchblende ore. He named it after the planet Uranus, which had been discovered eight years earlier. Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. He then reduced the obtained yellow powder with charcoal, and extracted a black substance that he mistook for metal. Sixty years later, the French scientist Eugène-Melchior Péligot identified it as uranium oxide. He also isolated the first sample of uranium metal by heating uranium tetrachloride with metallic potassium. The atomic mass of uranium was then calculated as 120, but Dmitri Mendeleev in 1872 corrected it to 240 using his periodicity laws. This value was confirmed experimentally in 1882 by K. Zimmerman.
Thorium oxide was discovered by Friedrich Wöhler in the mineral thorianite, which was found in Norway (1827). Jöns Jacob Berzelius characterized this material in more detail in 1828. By reduction of thorium tetrachloride with potassium, he isolated the metal and named it thorium after the Norse god of thunder and lightning Thor. The same isolation method was later used by Péligot for uranium.
Actinium was discovered in 1899 by André-Louis Debierne, an assistant of Marie Curie, in the pitchblende waste left after removal of radium and polonium. He described the substance (in 1899) as similar to titanium and (in 1900) as similar to thorium. The discovery of actinium by Debierne was however questioned in 1971 and 2000, arguing that Debierne's publications in 1904 contradicted his earlier work of 1899–1900. This view instead credits the 1902 work of Friedrich Oskar Giesel, who discovered a radioactive element named emanium that behaved similarly to lanthanum. The name actinium comes from the Greek aktis (ἀκτίς), meaning beam or ray. This metal was discovered not by its own radiation but by the radiation of the daughter products. Owing to the close similarity of actinium and lanthanum and its low abundance, pure actinium could only be produced in 1950. The term actinide was probably introduced by Victor Goldschmidt in 1937.
Protactinium was possibly isolated in 1900 by William Crookes. It was first identified in 1913, when Kasimir Fajans and Oswald Helmuth Göhring encountered the short-lived isotope 234mPa (half-life 1.17 minutes) during their studies of the 238U decay chain. They named the new element brevium (from Latin brevis meaning brief); the name was changed to protoactinium (from Greek πρῶτος + ἀκτίς meaning "first beam element") in 1918 when two groups of scientists, led by the Austrian Lise Meitner and Otto Hahn of Germany and Frederick Soddy and John Arnold Cranston of Great Britain, independently discovered the much longer-lived 231Pa. The name was shortened to protactinium in 1949. This element was little characterized until 1960, when Alfred Maddock and his co-workers in the U.K. isolated 130 grams of protactinium from 60 tonnes of waste left after extraction of uranium from its ore.
Neptunium and above
Neptunium (named for the planet Neptune, the next planet out from Uranus, after which uranium was named) was discovered by Edwin McMillan and Philip H. Abelson in 1940 in Berkeley, California. They produced the 239Np isotope (half-life 2.4 days) by bombarding uranium with slow neutrons. It was the first transuranium element produced synthetically.
Transuranium elements do not occur in sizeable quantities in nature and are commonly synthesized via nuclear reactions conducted with nuclear reactors. For example, under irradiation with reactor neutrons, uranium-238 partially converts to plutonium-239:
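238U + n → 239U; 239U → 239Np + β− (half-life 23.5 min); 239Np → 239Pu + β− (half-life 2.36 days)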
This synthesis reaction was used by Fermi and his collaborators in their design of the reactors located at the Hanford Site, which produced significant amounts of plutonium-239 for the nuclear weapons of the Manhattan Project and the United States' post-war nuclear arsenal.
Actinides with the highest mass numbers are synthesized by bombarding uranium, plutonium, curium and californium with ions of nitrogen, oxygen, carbon, neon or boron in a particle accelerator. Thus nobelium was produced by bombarding uranium-238 with neon-22 as
238U + 22Ne → 256No + 4 n
The first isotopes of transplutonium elements, americium-241 and curium-242, were synthesized in 1944 by Glenn T. Seaborg, Ralph A. James and Albert Ghiorso. Curium-242 was obtained by bombarding plutonium-239 with 32-MeV α-particles:
239Pu + 4He → 242Cm + n
The americium-241 and curium-242 isotopes also were produced by irradiating plutonium in a nuclear reactor. The latter element was named after Marie Curie and her husband Pierre who are noted for discovering radium and for their work in radioactivity.
Bombarding curium-242 with α-particles resulted in an isotope of californium 245Cf in 1950, and a similar procedure yielded berkelium-243 from americium-241 in 1949. The new elements were named after Berkeley, California, by analogy with its lanthanide homologue terbium, which was named after the village of Ytterby in Sweden.
In 1945, B. B. Cunningham obtained the first bulk chemical compound of a transplutonium element, namely americium hydroxide. Over the following few years, milligram quantities of americium and microgram amounts of curium were accumulated, allowing the production of isotopes of berkelium and californium. Sizeable amounts of these elements were produced in 1958, and the first californium compound (0.3 μg of CfOCl) was obtained in 1960 by B. B. Cunningham and J. C. Wallmann.
Einsteinium and fermium were identified in 1952–1953 in the fallout from the "Ivy Mike" nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Instantaneous exposure of uranium-238 to a large neutron flux resulting from the explosion produced heavy isotopes of uranium, which underwent a series of beta decays to nuclides such as einsteinium-253 and fermium-255. The discovery of the new elements and the new data on neutron capture were initially kept secret on the orders of the US military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team were able to prepare einsteinium and fermium by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that these were not the first studies carried out on those elements. The "Ivy Mike" studies were declassified and published in 1955. The first significant (submicrogram) amounts of einsteinium were produced in 1961 by Cunningham and colleagues, but this has not been done for fermium yet.
The first isotope of mendelevium, 256Md (half-life 87 min), was synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory Robert Choppin, Bernard G. Harvey and Stanley Gerald Thompson when they bombarded an 253Es target with alpha particles in the 60-inch cyclotron of Berkeley Radiation Laboratory; this was the first isotope of any element to be synthesized one atom at a time.
There were several attempts to obtain isotopes of nobelium by Swedish (1957) and American (1958) groups, but the first reliable result was the synthesis of 256No by the Russian group of Georgy Flyorov in 1965, as acknowledged by the IUPAC in 1992. In their experiments, Flyorov et al. bombarded uranium-238 with neon-22.
In 1961, Ghiorso et al. obtained the first isotope of lawrencium by irradiating californium (mostly californium-252) with boron-10 and boron-11 ions. The mass number of this isotope was not clearly established (possibly 258 or 259) at the time. In 1965, 256Lr was synthesized by Flyorov et al. from 243Am and 18O. Thus IUPAC recognized the nuclear physics teams at Dubna and Berkeley as the co-discoverers of lawrencium.
Isotopes
Thirty-four isotopes of actinium and eight excited isomeric states of some of its nuclides are known, ranging in mass number from 203 to 236. Three isotopes, 225Ac, 227Ac and 228Ac, were found in nature and the others were produced in the laboratory; only the three natural isotopes are used in applications. Actinium-225 is a member of the radioactive neptunium series; it was first discovered in 1947 as a decay product of uranium-233 and it is an α-emitter with a half-life of 10 days. Actinium-225 is less available than actinium-228, but is more promising in radiotracer applications. Actinium-227 (half-life 21.77 years) occurs in all uranium ores, but in small quantities. One gram of uranium (in radioactive equilibrium) contains only 2×10⁻¹⁰ gram of 227Ac. Actinium-228 is a member of the radioactive thorium series formed by the decay of 228Ra; it is a β− emitter with a half-life of 6.15 hours. In one tonne of thorium there is 5×10⁻⁸ gram of 228Ac. It was discovered by Otto Hahn in 1906.
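These trace amounts follow from secular equilibrium: in an old ore, the atom ratio of a decay-chain daughter to its long-lived ancestor equals the ratio of their half-lives. As a rough estimate, per gram of natural uranium (0.72% 235U, half-life 7.04×10⁸ years), m(227Ac) ≈ 0.0072 g × (21.77 / 7.04×10⁸) × (227/235) ≈ 2×10⁻¹⁰ g.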
There are 32 known isotopes of thorium ranging in mass number from 207 to 238. Of these, the longest-lived is 232Th, whose half-life of 1.4×10¹⁰ years means that it still exists in nature as a primordial nuclide. The next longest-lived is 230Th, an intermediate decay product of 238U with a half-life of 75,400 years. Several other thorium isotopes have half-lives over a day; all of these are also transient in the decay chains of 232Th, 235U, and 238U.
Twenty-nine isotopes of protactinium are known with mass numbers 211–239 as well as three excited isomeric states. Only 231Pa and 234Pa have been found in nature. All the isotopes have short lifetimes, except for protactinium-231 (half-life 32,760 years). The most important isotopes are 231Pa and 233Pa, which is an intermediate product in obtaining uranium-233 and is the most affordable among artificial isotopes of protactinium. 233Pa has a convenient half-life and γ-radiation energy, and thus was used in most studies of protactinium chemistry. Protactinium-233 is a β-emitter with a half-life of 26.97 days.
There are 27 known isotopes of uranium, having mass numbers 215–242 (except 220). Three of them, 234U, 235U and 238U, are present in appreciable quantities in nature. Among others, the most important is 233U, which is a final product of transformation of 232Th irradiated by slow neutrons. 233U has a much higher fission efficiency by low-energy (thermal) neutrons, compared e.g. with 235U. Most uranium chemistry studies were carried out on uranium-238 owing to its long half-life of 4.4×10⁹ years.
There are 25 isotopes of neptunium with mass numbers 219–244 (except 221); they are all highly radioactive. The most popular among scientists are long-lived 237Np (t1/2 = 2.20×10⁶ years) and short-lived 239Np, 238Np (t1/2 ~ 2 days).
There are 21 known isotopes of plutonium, having mass numbers 227–247. The most stable isotope of plutonium is 244Pu with a half-life of 8.13×10⁷ years.
Eighteen isotopes of americium are known with mass numbers from 229 to 247 (with the exception of 231). The most important are 241Am and 243Am, which are alpha-emitters and also emit soft, but intense γ-rays; both of them can be obtained in an isotopically pure form. Chemical properties of americium were first studied with 241Am, but later shifted to 243Am, which is almost 20 times less radioactive. The disadvantage of 243Am is production of the short-lived daughter isotope 239Np, which has to be considered in the data analysis.
Among 19 isotopes of curium, ranging in mass number from 233 to 251, the most accessible are 242Cm and 244Cm; they are α-emitters, but with much shorter lifetime than the americium isotopes. These isotopes emit almost no γ-radiation, but undergo spontaneous fission with the associated emission of neutrons. More long-lived isotopes of curium (245–248Cm, all α-emitters) are formed as a mixture during neutron irradiation of plutonium or americium. Upon short irradiation, this mixture is dominated by 246Cm, and then 248Cm begins to accumulate. Both of these isotopes, especially 248Cm, have a longer half-life (3.48×10⁵ years) and are much more convenient for carrying out chemical research than 242Cm and 244Cm, but they also have a rather high rate of spontaneous fission. 247Cm has the longest lifetime among isotopes of curium (1.56×10⁷ years), but is not formed in large quantities because of the strong fission induced by thermal neutrons.
Seventeen isotopes of berkelium have been identified with mass numbers 233, 234, 236, 238, and 240–252. Only 249Bk is available in large quantities; it has a relatively short half-life of 330 days and emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (1.45% with respect to β-radiation), but is sometimes used to detect this isotope. 247Bk is an alpha-emitter with a long half-life of 1,380 years, but it is hard to obtain in appreciable quantities; it is not formed upon neutron irradiation of plutonium because β-decay of curium isotopes with mass number below 248 is not known. (247Cm would actually release energy by β-decaying to 247Bk, but this has never been seen.)
The 20 isotopes of californium with mass numbers 237–256 are formed in nuclear reactors; californium-253 is a β-emitter and the rest are α-emitters. The isotopes with even mass numbers (250Cf, 252Cf and 254Cf) have a high rate of spontaneous fission, especially 254Cf of which 99.7% decays by spontaneous fission. Californium-249 has a relatively long half-life (352 years), weak spontaneous fission and strong γ-emission that facilitates its identification. 249Cf is not formed in large quantities in a nuclear reactor because of the slow β-decay of the parent isotope 249Bk and a large cross section of interaction with neutrons, but it can be accumulated in the isotopically pure form as the β-decay product of (pre-selected) 249Bk. Californium produced by reactor-irradiation of plutonium mostly consists of 250Cf and 252Cf, the latter being predominant for large neutron fluences, and its study is hindered by the strong neutron radiation.
Among the 18 known isotopes of einsteinium with mass numbers from 240 to 257, the most affordable is 253Es. It is an α-emitter with a half-life of 20.47 days, a relatively weak γ-emission and small spontaneous fission rate as compared with the isotopes of californium. Prolonged neutron irradiation also produces a long-lived isotope 254Es (t1/2 = 275.5 days).
Twenty isotopes of fermium are known with mass numbers of 241–260. 254Fm, 255Fm and 256Fm are α-emitters with a short half-life (hours), which can be isolated in significant amounts. 257Fm (t1/2 = 100 days) can accumulate upon prolonged and strong irradiation. All these isotopes are characterized by high rates of spontaneous fission.
Among the 17 known isotopes of mendelevium (mass numbers from 244 to 260), the most studied is 256Md, which mainly decays through electron capture (α-radiation is ≈10%) with a half-life of 77 minutes. Another alpha emitter, 258Md, has a half-life of 53 days. Both these isotopes are produced from rare einsteinium (253Es and 255Es respectively), which therefore limits their availability.
Long-lived isotopes of nobelium and isotopes of lawrencium (and of heavier elements) have relatively short half-lives. For nobelium, 13 isotopes are known, with mass numbers 249–260 and 262. The chemical properties of nobelium and lawrencium were studied with 255No (t1/2 = 3 min) and 256Lr (t1/2 = 35 s). The longest-lived nobelium isotope, 259No, has a half-life of approximately 1 hour. Lawrencium has 14 known isotopes with mass numbers 251–262, 264, and 266. The most stable of them is 266Lr with a half life of 11 hours.
Among all of these, the only isotopes that occur in sufficient quantities in nature to be detected in anything more than traces and have a measurable contribution to the atomic weights of the actinides are the primordial 232Th, 235U, and 238U, and three long-lived decay products of natural uranium, 230Th, 231Pa, and 234U. Natural thorium consists of 0.02(2)% 230Th and 99.98(2)% 232Th; natural protactinium consists of 100% 231Pa; and natural uranium consists of 0.0054(5)% 234U, 0.7204(6)% 235U, and 99.2742(10)% 238U.
Formation in nuclear reactors
The figure buildup of actinides is a table of nuclides with the number of neutrons on the horizontal axis (isotopes) and the number of protons on the vertical axis (elements). The red dot divides the nuclides in two groups, so the figure is more compact. Each nuclide is represented by a square with the mass number of the element and its half-life. Naturally existing actinide isotopes (Th, U) are marked with a bold border, alpha emitters have a yellow colour, and beta emitters have a blue colour. Pink indicates electron capture (236Np), whereas white stands for a long-lasting metastable state (242Am).
The formation of actinide nuclides is primarily characterised by:
Neutron capture reactions (n,γ), which are represented in the figure by a short right arrow.
The (n,2n) reactions and the less frequently occurring (γ,n) reactions are also taken into account, both of which are marked by a short left arrow.
Even more rarely and only triggered by fast neutrons, the (n,3n) reaction occurs, which is represented in the figure with one example, marked by a long left arrow.
In addition to these neutron- or gamma-induced nuclear reactions, the radioactive conversion of actinide nuclides also affects the nuclide inventory in a reactor. These decay types are marked in the figure by diagonal arrows. The beta-minus decay, marked with an arrow pointing up-left, plays a major role for the balance of the particle densities of the nuclides. Nuclides decaying by positron emission (beta-plus decay) or electron capture (ϵ) do not occur in a nuclear reactor except as products of knockout reactions; their decays are marked with arrows pointing down-right. Due to the long half-lives of the given nuclides, alpha decay plays almost no role in the formation and decay of the actinides in a power reactor, as the residence time of the nuclear fuel in the reactor core is rather short (a few years). Exceptions are the two relatively short-lived nuclides 242Cm (T1/2 = 163 d) and 236Pu (T1/2 = 2.9 y). Only for these two cases, the α decay is marked on the nuclide map by a long arrow pointing down-left. A few long-lived actinide isotopes, such as 244Pu and 250Cm, cannot be produced in reactors because neutron capture does not happen quickly enough to bypass the short-lived beta-decaying nuclides 243Pu and 249Cm; they can however be generated in nuclear explosions, which have much higher neutron fluxes.
Distribution in nature
Thorium and uranium are the most abundant actinides in nature with the respective mass concentrations of 16 ppm and 4 ppm. Uranium mostly occurs in the Earth's crust as a mixture of its oxides in the mineral uraninite, which is also called pitchblende because of its black color. There are several dozens of other uranium minerals such as carnotite (KUO2VO4·3H2O) and autunite (Ca(UO2)2(PO4)2·nH2O). The isotopic composition of natural uranium is 238U (relative abundance 99.2742%), 235U (0.7204%) and 234U (0.0054%); of these 238U has the largest half-life of 4.51×10⁹ years. The worldwide production of uranium in 2009 amounted to 50,572 tonnes, of which 27.3% was mined in Kazakhstan. Other important uranium mining countries are Canada (20.1%), Australia (15.7%), Namibia (9.1%), Russia (7.0%), and Niger (6.4%).
The most abundant thorium minerals are thorianite (ThO2), thorite (ThSiO4) and monazite ((Ce,La,Nd,Th)PO4). Most thorium minerals contain uranium and vice versa, and they all have a significant fraction of lanthanides. Rich deposits of thorium minerals are located in the United States (440,000 tonnes), Australia and India (~300,000 tonnes each) and Canada (~100,000 tonnes).
The abundance of actinium in the Earth's crust is only about 5×10⁻¹⁵%. Actinium is mostly present in uranium-containing, but also in other minerals, though in much smaller quantities. The content of actinium in most natural objects corresponds to the isotopic equilibrium of the parent isotope 235U, and it is not affected by the weak Ac migration. Protactinium is more abundant (10⁻¹²%) in the Earth's crust than actinium. It was discovered in uranium ore in 1913 by Fajans and Göhring. As with actinium, the distribution of protactinium follows that of 235U.
The half-life of the longest-lived isotope of neptunium, 237Np, is negligible compared to the age of the Earth. Thus neptunium is present in nature in negligible amounts produced as intermediate decay products of other isotopes. Traces of plutonium in uranium minerals were first found in 1942, and the more systematic results on 239Pu are summarized in the table (no other plutonium isotopes could be detected in those samples). The upper limit of abundance of the longest-living isotope of plutonium, 244Pu, is 3×10⁻²⁰%. Plutonium could not be detected in samples of lunar soil. Owing to its scarcity in nature, most plutonium is produced synthetically.
Extraction
Owing to the low abundance of actinides, their extraction is a complex, multistep process. Fluorides of actinides are usually used because they are insoluble in water and can be easily separated with redox reactions. Fluorides are reduced with calcium, magnesium or barium:
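For example, plutonium metal can be obtained by reducing its tetrafluoride with calcium:
PuF4 + 2 Ca → Pu + 2 CaF2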
Among the actinides, thorium and uranium are the easiest to isolate. Thorium is extracted mostly from monazite: thorium pyrophosphate (ThP2O7) is reacted with nitric acid, and the produced thorium nitrate treated with tributyl phosphate. Rare-earth impurities are separated by increasing the pH in sulfate solution.
In another extraction method, monazite is decomposed with a 45% aqueous solution of sodium hydroxide at 140 °C. Mixed metal hydroxides are extracted first, filtered at 80 °C, washed with water and dissolved with concentrated hydrochloric acid. Next, the acidic solution is neutralized with hydroxides to pH = 5.8 that results in precipitation of thorium hydroxide (Th(OH)4) contaminated with ~3% of rare-earth hydroxides; the rest of rare-earth hydroxides remains in solution. Thorium hydroxide is dissolved in an inorganic acid and then purified from the rare earth elements. An efficient method is the dissolution of thorium hydroxide in nitric acid, because the resulting solution can be purified by extraction with organic solvents:
Th(OH)4 + 4 HNO3 → Th(NO3)4 + 4 H2O
Metallic thorium is separated from the anhydrous oxide, chloride or fluoride by reacting it with calcium in an inert atmosphere:
ThO2 + 2 Ca → 2 CaO + Th
Sometimes thorium is extracted by electrolysis of a fluoride in a mixture of sodium and potassium chloride at 700–800 °C in a graphite crucible. Highly pure thorium can be extracted from its iodide with the crystal bar process.
Uranium is extracted from its ores in various ways. In one method, the ore is burned and then reacted with nitric acid to convert uranium into a dissolved state. Treating the solution with a solution of tributyl phosphate (TBP) in kerosene transforms uranium into an organic form UO2(NO3)2(TBP)2. The insoluble impurities are filtered and the uranium is extracted by reaction with hydroxides as (NH4)2U2O7 or with hydrogen peroxide as UO4·2H2O.
When the uranium ore is rich in such minerals as dolomite, magnesite, etc., those minerals consume much acid. In this case, the carbonate method is used for uranium extraction. Its main component is an aqueous solution of sodium carbonate, which converts uranium into a complex [UO2(CO3)3]4−, which is stable in aqueous solutions at low concentrations of hydroxide ions. The advantages of the sodium carbonate method are that the chemicals have low corrosivity (compared to nitrates) and that most non-uranium metals precipitate from the solution. The disadvantage is that tetravalent uranium compounds precipitate as well. Therefore, the uranium ore is treated with sodium carbonate at elevated temperature and under oxygen pressure:
2 UO2 + O2 + 6 CO32− → 2 [UO2(CO3)3]4−
This equation suggests that the best solvent for the uranyl carbonate processing is a mixture of carbonate with bicarbonate. At high pH, this results in precipitation of diuranate, which is treated with hydrogen in the presence of nickel yielding an insoluble uranium tetracarbonate.
Another separation method uses polymeric resins as a polyelectrolyte. Ion exchange processes in the resins result in separation of uranium. Uranium from resins is washed with a solution of ammonium nitrate or nitric acid that yields uranyl nitrate, UO2(NO3)2·6H2O. When heated, it turns into UO3, which is converted to UO2 with hydrogen:
UO3 + H2 → UO2 + H2O
Reacting uranium dioxide with hydrofluoric acid changes it to uranium tetrafluoride, which yields uranium metal upon reaction with magnesium metal:
4 HF + UO2 → UF4 + 2 H2O
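UF4 + 2 Mg → U + 2 MgF2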
To extract plutonium, neutron-irradiated uranium is dissolved in nitric acid, and a reducing agent (FeSO4, or H2O2) is added to the resulting solution. This addition changes the oxidation state of plutonium from +6 to +4, while uranium remains in the form of uranyl nitrate (UO2(NO3)2). The solution is treated with a reducing agent and neutralized with ammonium carbonate to pH = 8 that results in precipitation of Pu4+ compounds.
In another method, Pu4+ and UO22+ are first extracted with tributyl phosphate, then reacted with hydrazine to wash out the recovered plutonium.
The major difficulty in separation of actinium is the similarity of its properties with those of lanthanum. Thus actinium is either synthesized in nuclear reactions from isotopes of radium or separated using ion-exchange procedures.
Properties
Actinides have similar properties to lanthanides. Just as the 4f electron shells are filled in the lanthanides, the 5f electron shells are filled in the actinides. Because the 5f, 6d, 7s, and 7p shells are close in energy, many irregular configurations arise; thus, in gas-phase atoms, just as the first 4f electron only appears in cerium, so the first 5f electron appears even later, in protactinium. However, just as lanthanum is the first element to use the 4f shell in compounds, so actinium is the first element to use the 5f shell in compounds. The f-shells complete their filling together, at ytterbium and nobelium. The first experimental evidence for the filling of the 5f shell in actinides was obtained by McMillan and Abelson in 1940. As in lanthanides (see lanthanide contraction), the ionic radius of actinides monotonically decreases with atomic number (see also actinoid contraction).
The shift of electron configurations in the gas phase does not always match the chemical behaviour. For example, the early-transition-metal-like prominence of the highest oxidation state, corresponding to removal of all valence electrons, extends up to uranium even though the 5f shells begin filling before that. On the other hand, electron configurations resembling the lanthanide congeners already begin at plutonium, even though lanthanide-like behaviour does not become dominant until the second half of the series begins at curium. The elements between uranium and curium form a transition between these two kinds of behaviour, where higher oxidation states continue to exist, but lose stability with respect to the +3 state. The +2 state becomes more important near the end of the series, and is the most stable oxidation state for nobelium, the last 5f element. Oxidation states rise again only after nobelium, showing that a new series of 6d transition metals has begun: lawrencium shows only the +3 oxidation state, and rutherfordium only the +4 state, making them respectively congeners of lutetium and hafnium in the 5d row.
Physical properties
Actinides are typical metals. All of them are soft and have a silvery color (but tarnish in air), relatively high density and plasticity. Some of them can be cut with a knife. Their electrical resistivity varies between 15 and 150 μΩ·cm. The hardness of thorium is similar to that of soft steel, so heated pure thorium can be rolled in sheets and pulled into wire. Thorium is nearly half as dense as uranium and plutonium, but is harder than either of them. All actinides are radioactive, paramagnetic, and, with the exception of actinium, have several crystalline phases: plutonium has seven, and uranium, neptunium and californium three. The crystal structures of protactinium, uranium, neptunium and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d-transition metals.
All actinides are pyrophoric, especially when finely divided, that is, they spontaneously ignite upon reaction with air at room temperature. The melting point of actinides does not have a clear dependence on the number of f-electrons. The unusually low melting point of neptunium and plutonium (~640 °C) is explained by hybridization of 5f and 6d orbitals and the formation of directional bonds in these metals.
Chemical properties
Like the lanthanides, all actinides are highly reactive with halogens and chalcogens; however, the actinides react more easily. Actinides, especially those with a small number of 5f-electrons, are prone to hybridization. This is explained by the similarity of the electron energies at the 5f, 7s and 6d shells. Most actinides exhibit a larger variety of valence states, and the most stable are +6 for uranium, +5 for protactinium and neptunium, +4 for thorium and plutonium and +3 for actinium and other actinides.
Actinium is chemically similar to lanthanum, which is explained by their similar ionic radii and electronic structures. Like lanthanum, actinium almost always has an oxidation state of +3 in compounds, but it is less reactive and has more pronounced basic properties. Among other trivalent actinides Ac3+ is least acidic, i.e. has the weakest tendency to hydrolyze in aqueous solutions.
Thorium is rather active chemically. Owing to lack of electrons on 6d and 5f orbitals, tetravalent thorium compounds are colorless. At pH < 3, solutions of thorium salts are dominated by the cations [Th(H2O)8]4+. The Th4+ ion is relatively large, and depending on the coordination number can have a radius between 0.95 and 1.14 Å. As a result, thorium salts have a weak tendency to hydrolyse. The distinctive ability of thorium salts is their high solubility both in water and polar organic solvents.
Protactinium exhibits two valence states; the +5 is stable, and the +4 state easily oxidizes to protactinium(V). Thus tetravalent protactinium in solutions is obtained by the action of strong reducing agents in a hydrogen atmosphere. Tetravalent protactinium is chemically similar to uranium(IV) and thorium(IV). Fluorides, phosphates, hypophosphates, iodates and phenylarsonates of protactinium(IV) are insoluble in water and dilute acids. Protactinium forms soluble carbonates. The hydrolytic properties of pentavalent protactinium are close to those of tantalum(V) and niobium(V). The complex chemical behavior of protactinium is a consequence of the start of the filling of the 5f shell in this element.
Uranium has a valence from 3 to 6, the last being most stable. In the hexavalent state, uranium is very similar to the group 6 elements. Many compounds of uranium(IV) and uranium(VI) are non-stoichiometric, i.e. have variable composition. For example, the actual chemical formula of uranium dioxide is UO2+x, where x varies between −0.4 and 0.32. Uranium(VI) compounds are weak oxidants. Most of them contain the linear "uranyl" group, UO22+. Between 4 and 6 ligands can be accommodated in an equatorial plane perpendicular to the uranyl group. The uranyl group acts as a hard acid and forms stronger complexes with oxygen-donor ligands than with nitrogen-donor ligands. NpO22+ and PuO22+ are also the common form of Np and Pu in the +6 oxidation state. Uranium(IV) compounds exhibit reducing properties, e.g., they are easily oxidized by atmospheric oxygen. Uranium(III) is a very strong reducing agent. Owing to the presence of d-shell, uranium (as well as many other actinides) forms organometallic compounds, such as UIII(C5H5)3 and UIV(C5H5)4.
Neptunium has valence states from 3 to 7, which can be simultaneously observed in solutions. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds.
Plutonium also exhibits valence states between 3 and 7 inclusive, and thus is chemically similar to neptunium and uranium. It is highly reactive, and quickly forms an oxide film in air. Plutonium reacts with hydrogen even at temperatures as low as 25–50 °C; it also easily forms halides and intermetallic compounds. Hydrolysis reactions of plutonium ions of different oxidation states are quite diverse. Plutonium(V) can enter polymerization reactions.
The largest chemical diversity among actinides is observed in americium, which can have valence between 2 and 6. Divalent americium is obtained only in dry compounds and non-aqueous solutions (acetonitrile). Oxidation states +3, +5 and +6 are typical for aqueous solutions, but also occur in the solid state. Tetravalent americium forms stable solid compounds (dioxide, fluoride and hydroxide) as well as complexes in aqueous solutions. It was reported that in alkaline solution americium can be oxidized to the heptavalent state, but these data proved erroneous. The most stable valence of americium is 3 in aqueous solution and 3 or 4 in solid compounds.
Valence 3 is dominant in all subsequent elements up to lawrencium (with the exception of nobelium). Curium can be tetravalent in solids (fluoride, dioxide). Berkelium, along with a valence of +3, also shows the valence of +4, more stable than that of curium; the valence 4 is observed in solid fluoride and dioxide. The stability of Bk4+ in aqueous solution is close to that of Ce4+. Only valence 3 was observed for californium, einsteinium and fermium. The divalent state is proven for mendelevium and nobelium, and in nobelium it is more stable than the trivalent state. Lawrencium shows valence 3 both in solutions and solids.
The redox potential E(AnO22+/An4+) increases from −0.32 V in uranium, through 0.34 V (Np) and 1.04 V (Pu), to 1.34 V in americium, revealing the increasing reducing ability of the An4+ ion from americium to uranium. All actinides form AnH3 hydrides of black color with salt-like properties. Actinides also produce carbides with the general formula of AnC or AnC2 (U2C3 for uranium) as well as sulfides An2S3 and AnS2.
Compounds
Oxides and hydroxides
Some actinides can exist in several oxide forms such as An2O3, AnO2, An2O5 and AnO3. For all actinides, oxides AnO3 are amphoteric, while An2O3, AnO2 and An2O5 are basic; they easily react with water, forming bases:
An2O3 + 3 H2O → 2 An(OH)3.
These bases are poorly soluble in water and by their activity are close to the hydroxides of rare-earth metals.
Np(OH)3 has not yet been synthesized, Pu(OH)3 has a blue color while Am(OH)3 is pink and Cm(OH)3 is colorless. Bk(OH)3 and Cf(OH)3 are also known, as are tetravalent hydroxides for Np, Pu and Am and pentavalent for Np and Am.
The strongest base is that of actinium. All compounds of actinium are colorless, except for black actinium sulfide (Ac2S3). Dioxides of tetravalent actinides crystallize in the cubic system, in the same structure as calcium fluoride (fluorite).
Thorium reacting with oxygen exclusively forms the dioxide:
Th + O2 → ThO2 (thorium dioxide, at 1000 °C)
Thorium dioxide is a refractory material with the highest melting point of any known oxide (3390 °C). Adding 0.8–1% ThO2 to tungsten stabilizes its structure, so the doped filaments have better mechanical stability to vibrations. To dissolve ThO2 in acids, it is heated to 500–600 °C; heating above 600 °C produces a form of ThO2 that is very resistant to acids and other reagents. A small addition of fluoride ions catalyses the dissolution of thorium dioxide in acids.
Two protactinium oxides have been obtained: PaO2 (black) and Pa2O5 (white); the former is isomorphic with ThO2 and the latter is easier to obtain. Both oxides are basic, and Pa(OH)5 is a weak, poorly soluble base.
Decomposition of certain salts of uranium, for example UO2(NO3)2·6H2O in air at 400 °C, yields orange or yellow UO3. This oxide is amphoteric and forms several hydroxides, the most stable being uranyl hydroxide UO2(OH)2. Reaction of uranium(VI) oxide with hydrogen results in uranium dioxide, which is similar in its properties to ThO2. This oxide is also basic and corresponds to the uranium hydroxide U(OH)4.
Plutonium, neptunium and americium form two basic oxides: An2O3 and AnO2. Neptunium trioxide is unstable; thus, only Np3O8 could be obtained so far. However, the oxides of plutonium and neptunium with the chemical formula AnO2 and An2O3 are well characterized.
Salts
Actinides easily react with halogens forming salts with the formulas MX3 and MX4 (X = halogen). Thus the first berkelium compound, BkCl3, was synthesized in 1962 in an amount of 3 nanograms. Like the halides of the rare earth elements, actinide chlorides, bromides, and iodides are water-soluble, and fluorides are insoluble. Uranium easily yields a colorless hexafluoride, which sublimates at a temperature of 56.5 °C; because of its volatility, it is used in the separation of uranium isotopes with gas centrifuge or gaseous diffusion. Actinide hexafluorides have properties close to anhydrides. They are very sensitive to moisture and hydrolyze forming AnO2F2. The pentachloride and black hexachloride of uranium were synthesized, but they are both unstable.
Action of acids on actinides yields salts, and if the acids are non-oxidizing then the actinide in the salt is in low-valence state:
U + 2 H2SO4 → U(SO4)2 + 2 H2
2 Pu + 6 HCl → 2 PuCl3 + 3 H2
However, in these reactions the regenerating hydrogen can react with the metal, forming the corresponding hydride. Uranium reacts with acids and water much more easily than thorium.
Actinide salts can also be obtained by dissolving the corresponding hydroxides in acids. Nitrates, chlorides, sulfates and perchlorates of actinides are water-soluble. When crystallizing from aqueous solutions, these salts form hydrates, such as Th(NO3)4·6H2O, Th(SO4)2·9H2O and Pu2(SO4)3·7H2O. Salts of high-valence actinides easily hydrolyze. So, colorless sulfate, chloride, perchlorate and nitrate of thorium transform into basic salts with formulas Th(OH)2SO4 and Th(OH)3NO3. The solubility of trivalent and tetravalent actinide salts is like that of lanthanide salts: phosphates, fluorides, oxalates, iodates and carbonates of actinides are weakly soluble in water, and they precipitate as hydrates, such as ThF4·3H2O and Th(CrO4)2·3H2O.
Actinides with oxidation state +6, except for the AnO22+-type cations, form [AnO4]2−, [An2O7]2− and other complex anions. For example, uranium, neptunium and plutonium form salts of the Na2UO4 (uranate) and (NH4)2U2O7 (diuranate) types. In comparison with lanthanides, actinides more easily form coordination compounds, and this ability increases with the actinide valence. Trivalent actinides do not form fluoride coordination compounds, whereas tetravalent thorium forms K2ThF6, KThF5, and even K5ThF9 complexes. Thorium also forms the corresponding sulfates (for example Na2SO4·Th(SO4)2·5H2O), nitrates and thiocyanates. Salts with the general formula An2Th(NO3)6·nH2O are of coordination nature, with the coordination number of thorium equal to 12. Even easier is to produce complex salts of pentavalent and hexavalent actinides. The most stable coordination compounds of actinides – tetravalent thorium and uranium – are obtained in reactions with diketones, e.g. acetylacetone.
Applications
While actinides have some established daily-life applications, such as in smoke detectors (americium) and gas mantles (thorium), they are mostly used in nuclear weapons and as fuel in nuclear reactors. The last two areas exploit the property of actinides to release enormous energy in nuclear reactions, which under certain conditions may become self-sustaining chain reactions.
The most important isotope for nuclear power applications is uranium-235. It is used in thermal reactors, and its concentration in natural uranium does not exceed 0.72%. This isotope strongly absorbs thermal neutrons, releasing much energy; the complete fission of 1 gram of 235U releases about 1 MW·day of energy. Importantly, 235U emits more neutrons than it absorbs; upon reaching the critical mass, it enters into a self-sustaining chain reaction. Typically, the uranium nucleus divides into two fragments with the release of 2–3 neutrons, for example:
235U + n → 141Ba + 92Kr + 3 n
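As a rough check of the energy figure above: 1 gram of 235U contains about 2.6×10²¹ nuclei, and each fission releases roughly 200 MeV (3.2×10⁻¹¹ J), so complete fission yields about 8×10¹⁰ J, close to 1 MW·day (8.64×10¹⁰ J).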
Other promising actinide isotopes for nuclear power are thorium-232 and its product from the thorium fuel cycle, uranium-233.
Emission of neutrons during the fission of uranium is important not only for maintaining the nuclear chain reaction, but also for the synthesis of the heavier actinides. Uranium-239 converts via two successive β−-decays (through neptunium-239) into plutonium-239, which, like uranium-235, is capable of sustaining a fission chain reaction. The world's first nuclear reactors were built not for energy, but for producing plutonium-239 for nuclear weapons.
About half of produced thorium is used as the light-emitting material of gas mantles. Thorium is also added into multicomponent alloys of magnesium and zinc. Mg-Th alloys are light and strong, but also have high melting point and ductility and thus are widely used in the aviation industry and in the production of missiles. Thorium also has good electron emission properties, with long lifetime and low potential barrier for the emission. The relative content of thorium and uranium isotopes is widely used to estimate the age of various objects, including stars (see radiometric dating).
The major application of plutonium has been in nuclear weapons, where the isotope plutonium-239 was a key component due to its ease of fission and availability. Plutonium-based designs allow reducing the critical mass to about a third of that for uranium-235. The "Fat Man"-type plutonium bombs produced during the Manhattan Project used explosive compression of plutonium to obtain significantly higher densities than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6.2 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. (See also Nuclear weapon design.) Hypothetically, as little as 4 kg of plutonium—and maybe even less—could be used to make a single atomic bomb using very sophisticated assembly designs.
Plutonium-238 is a potentially more efficient isotope for nuclear reactors, since it has a smaller critical mass than uranium-235, but it continues to release much thermal energy (0.56 W/g) by decay even when the fission chain reaction is stopped by control rods. Its application is limited by its high price (about US$1000/g). This isotope has been used in thermopiles and water distillation systems of some space satellites and stations. The Galileo and Apollo spacecraft (e.g. Apollo 14) had heaters powered by kilogram quantities of plutonium-238 oxide; this heat is also transformed into electricity with thermopiles. The decay of plutonium-238 produces relatively harmless alpha particles and is not accompanied by gamma rays. Therefore, this isotope (~160 mg) is used as the energy source in heart pacemakers where it lasts about 5 times longer than conventional batteries.
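The quoted specific power is consistent with the decay data: plutonium-238 has a half-life of 87.7 years and emits α-particles of about 5.6 MeV, giving roughly (ln 2 / 87.7 yr) × (6.02×10²³ / 238 atoms per gram) × 5.6 MeV ≈ 0.56 W/g.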
Actinium-227 is used as a neutron source. Its high specific energy (14.5 W/g) and the possibility of obtaining significant quantities of thermally stable compounds are attractive for use in long-lasting thermoelectric generators for remote use. 228Ac is used as an indicator of radioactivity in chemical research, as it emits high-energy electrons (2.18 MeV) that can be easily detected. 228Ac-228Ra mixtures are widely used as an intense gamma-source in industry and medicine.
Development of self-glowing actinide-doped materials with durable crystalline matrices is a new area of actinide utilization as the addition of alpha-emitting radionuclides to some glasses and crystals may confer luminescence.
Toxicity
Radioactive substances can harm human health via (i) local skin contamination, (ii) internal exposure due to ingestion of radioactive isotopes, and (iii) external overexposure by β-activity and γ-radiation. Together with radium and transuranium elements, actinium is one of the most dangerous radioactive poisons with high specific α-activity. The most important feature of actinium is its ability to accumulate and remain in the surface layer of skeletons. At the initial stage of poisoning, actinium accumulates in the liver. Another danger of actinium is that it undergoes radioactive decay faster than being excreted. Adsorption from the digestive tract is much smaller (~0.05%) for actinium than radium.
Protactinium in the body tends to accumulate in the kidneys and bones. The maximum safe dose of protactinium in the human body is 0.03 μCi that corresponds to 0.5 micrograms of 231Pa. This isotope, which might be present in the air as aerosol, is 2.5 times more toxic than hydrocyanic acid.
Plutonium, when entering the body through air, food or blood (e.g. a wound), mostly settles in the lungs, liver and bones with only about 10% going to other organs, and remains there for decades. The long residence time of plutonium in the body is partly explained by its poor solubility in water. Some isotopes of plutonium emit ionizing α-radiation, which damages the surrounding cells. The median lethal dose (LD50) for 30 days in dogs after intravenous injection of plutonium is 0.32 milligram per kg of body mass, and thus the lethal dose for humans is approximately 22 mg for a person weighing 70 kg; the amount for respiratory exposure should be approximately four times greater. Another estimate assumes that plutonium is 50 times less toxic than radium, and thus permissible content of plutonium in the body should be 5 μg or 0.3 μCi. Such amount is nearly invisible under microscope. After trials on animals, this maximum permissible dose was reduced to 0.65 μg or 0.04 μCi. Studies on animals also revealed that the most dangerous plutonium exposure route is through inhalation, after which 5–25% of inhaled substances is retained in the body. Depending on the particle size and solubility of the plutonium compounds, plutonium is localized either in the lungs or in the lymphatic system, or is absorbed in the blood and then transported to the liver and bones. Contamination via food is the least likely way. In this case, only about 0.05% of soluble and 0.01% of insoluble compounds of plutonium absorbs into blood, and the rest is excreted. Exposure of damaged skin to plutonium would retain nearly 100% of it.
Using actinides in nuclear fuel, sealed radioactive sources or advanced materials such as self-glowing crystals has many potential benefits. However, a serious concern is the extremely high radiotoxicity of actinides and their migration in the environment. Use of chemically unstable forms of actinides in MOX and sealed radioactive sources is not appropriate by modern safety standards. There is a challenge to develop stable and durable actinide-bearing materials, which provide safe storage, use and final disposal. A key need is application of actinide solid solutions in durable crystalline host phases.
See also
Actinides in the environment
Lanthanides
Major actinides
Minor actinides
Transuranics
Notes
References
Bibliography
External links
Lawrence Berkeley Laboratory image of historic periodic table by Seaborg showing actinide series for the first time
Lawrence Livermore National Laboratory, Uncovering the Secrets of the Actinides
Los Alamos National Laboratory, Actinide Research Quarterly
Periodic table | Actinide | [
"Chemistry"
] | 12,900 | [
"Periodic table"
] |
2,322 | https://en.wikipedia.org/wiki/Audio%20signal%20processing | Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
History
The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music.
Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for perceptual coding and is widely used in speech coding, while MDCT coding is widely used in modern audio coding formats such as MP3 and Advanced Audio Coding (AAC).
Types
Analog
An analog audio signal is a continuous signal represented by an electrical voltage or current that is analogous to the sound waves in the air. Analog signal processing then involves physically altering the continuous signal by changing the voltage or current or charge via electrical circuits.
Historically, before the advent of widespread digital technology, analog was the only method by which to manipulate a signal. Since that time, as computers and software have become more capable and affordable, digital signal processing has become the method of choice. However, in music applications, analog technology is often still desirable as it produces nonlinear responses that are difficult to replicate with digital filters.
Digital
A digital representation expresses the audio waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as digital signal processors, microprocessors and general-purpose computers. Most modern audio systems use a digital approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing.
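As a minimal sketch of this representation (the 440 Hz tone and 8 kHz sample rate are arbitrary illustrative choices, not values implied by the text):

import numpy as np

fs = 8000                                         # samples per second
t = np.arange(fs) / fs                            # one second of sample instants
signal = np.sin(2 * np.pi * 440 * t)              # a sampled sine wave
pcm = np.round(signal * 32767).astype(np.int16)   # quantized to 16-bit integers
print(pcm[:8])                                    # the waveform as a sequence of numbers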
Applications
Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement (e.g. equalization, filtering, level compression, echo and reverb removal or addition, etc.).
Audio broadcasting
Audio signal processing is used when broadcasting audio signals in order to enhance their fidelity or optimize for bandwidth or latency. In this domain, the most important audio processing takes place just before the transmitter. The audio processor here must prevent or minimize overmodulation, compensate for non-linear transmitters (a potential issue with medium wave and shortwave broadcasting), and adjust overall loudness to the desired level.
Active noise control
Active noise control is a technique designed to reduce unwanted sound. By creating a signal that is identical to the unwanted noise but with the opposite polarity, the two signals cancel out due to destructive interference.
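A minimal numerical sketch of this idea, using an artificial single-tone "noise" and an assumed sample rate rather than a real adaptive controller:

import numpy as np

fs = 48000                                  # assumed sample rate in Hz
t = np.arange(fs) / fs                      # one second of sample times
noise = 0.5 * np.sin(2 * np.pi * 100 * t)   # unwanted 100 Hz tone
anti_noise = -noise                         # same waveform, opposite polarity
residual = noise + anti_noise               # destructive interference
print(np.max(np.abs(residual)))             # prints 0.0: the tone is cancelled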
Audio synthesis
Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either imitate sounds or generate new ones. Audio synthesis is also used to generate human speech using speech synthesis.
Audio effects
Audio effects alter the sound of a musical instrument or other audio source. Common effects include distortion, often used with electric guitar in electric blues and rock music; dynamic effects such as volume pedals and compressors, which affect loudness; filters such as wah-wah pedals and graphic equalizers, which modify frequency ranges; modulation effects, such as chorus, flangers and phasers; pitch effects such as pitch shifters; and time effects, such as reverb and delay, which create echoing sounds and emulate the sound of different spaces.
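As a rough sketch of one such time effect, a simple feedback delay might look like the following (the delay time and feedback amount are arbitrary illustrative values, and x is assumed to be a mono signal array):

import numpy as np

def delay_effect(x, fs, delay_s=0.3, feedback=0.4):
    # Add decaying echoes to a mono signal sampled at fs Hz.
    d = int(delay_s * fs)               # delay length in samples
    y = np.array(x, dtype=float)
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]     # each echo is a quieter copy of the last
    return y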
Musicians, audio engineers and record producers use effects units during live performances or in the studio, typically with electric guitar, bass guitar, electronic keyboard or electric piano. While effects are most frequently used with electric or electronic instruments, they can be used with any audio source, such as acoustic instruments, drums, and vocals.
Computer audition
Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, describes these systems as "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."
Inspired by models of human audition, CA deals with questions of representation, transduction, grouping, use of musical knowledge and general sound semantics for the purpose of performing intelligent operations on audio and music signals by the computer. Technically this requires a combination of methods from the fields of signal processing, auditory modelling, music perception and cognition, pattern recognition, and machine learning, as well as more traditional methods of artificial intelligence for musical knowledge representation.
See also
Sound card
Sound effect
References
Further reading
Audio electronics
Signal processing | Audio signal processing | [
"Technology",
"Engineering"
] | 1,282 | [
"Audio electronics",
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Audio engineering"
] |
2,328 | https://en.wikipedia.org/wiki/Ayahuasca | Ayahuasca is a South American psychoactive beverage, traditionally used by Indigenous cultures and folk healers in the Amazon and Orinoco basins for spiritual ceremonies, divination, and healing a variety of psychosomatic complaints.
Originally restricted to areas of Peru, Brazil, Colombia and Ecuador, in the middle of the 20th century it became widespread in Brazil in the context of the appearance of syncretic religions that use ayahuasca as a sacrament, like Santo Daime, União do Vegetal and Barquinha, which blend elements of Amazonian Shamanism, Christianity, Kardecist Spiritism, and African-Brazilian religions such as Umbanda, Candomblé and Tambor de Mina, later expanding to several countries across all continents, notably the United States and Western Europe, and, more incipiently, in Eastern Europe, South Africa, Australia, and Japan.
More recently, new phenomena regarding ayahuasca use have evolved and moved to urban centers in North America and Europe, with the emergence of neoshamanic hybrid rituals and spiritual and recreational drug tourism. Also, anecdotal evidence, studies conducted among ayahuasca consumers, and clinical trials suggest that ayahuasca has therapeutic potential, especially for the treatment of substance dependence, anxiety, and mood disorders. Thus, currently, despite continuing to be used in a traditional way, ayahuasca is also consumed recreationally worldwide, and is considered a potential future treatment in modern medicine. Ayahuasca often causes nausea and vomiting and has a number of rarer, more serious possible side effects, including breathing difficulties and seizures; it may cause psychosis in those predisposed to the condition.
Ayahuasca is a hallucinogen commonly made by the prolonged decoction of the stems of the Banisteriopsis caapi vine and the leaves of the Psychotria viridis shrub, although hundreds of species are used in addition or substitution (see "Preparation" below). P. viridis contains N,N-dimethyltryptamine (DMT), a highly psychedelic substance. Although DMT is orally inactive on its own, B. caapi is rich in harmala alkaloids, such as harmine, harmaline and tetrahydroharmine (THH), which can act as monoamine oxidase inhibitors (MAOIs). This halts the liver and gastrointestinal metabolism of DMT, allowing it to reach the systemic circulation and the brain, where it activates 5-HT1A/2A/2C receptors in frontal and paralimbic areas.
Etymology
Ayahuasca is the hispanicized spelling (i.e., spelled according to Spanish orthography) of a word that originates from the Quechuan languages, which are spoken in the Andean states of Ecuador, Bolivia, Peru, and Colombia. Speakers of Quechuan languages who use modern Quechuan orthography spell it ayawaska. The word refers both to the liana Banisteriopsis caapi and to the brew prepared from it. In the Quechua languages, aya means "spirit, soul", or "corpse, dead body", and waska means "rope", "woody vine" or "liana". The word ayahuasca has been variously translated as "liana of the soul", "liana of the dead", and "spirit liana". In the cosmovision of its users, ayahuasca is the vine that allows the spirit to wander detached from the body and enter the spiritual world, which is otherwise forbidden to the living.
Common names
Although ayahuasca is the most widely used term in Peru, Bolivia, Ecuador and Brazil, the brew is known by many names throughout northern South America:
hoasca or oasca in Brazil
yagé (or yajé, from the Cofán language or iagê in Portuguese). Relatively widespread use in Andean and Amazonian regions throughout the border areas of Colombia, Peru, Ecuador and Brazil. The Cofán people also use the word oofa.
caapi (or kahpi/gahpi in the Tupi–Guarani language, or kaapi in the proto-Arawak language), used to refer both to the brew and to B. caapi itself. Meaning "weed" or "thin leaf", it was the word used by Spruce to name the liana
pinde (or pindê/pilde), used by the Colorado people
patem (or nátema), from the Chicham languages
shori, mii (or miiyagi) and uni, from the Yaminawa language
nishi cobin, from the Shipibo language
nixi pae, shuri, ondi, rambi and rame, from the Kashinawa language
kaji, kadana and kadanapira, used by Tucano people
kamarampi (or kamalampi) and hananeroca, from the Arawakan languages
bakko, from Bora-Muinane languages
jono pase, used by Ese'Ejja people
uipa, from Guahibo language
napa (or nepe/nepi), used by Tsáchila people
Biaxije, from Camsá language
Cipó ("liana") or Vegetal, in Portuguese language, used by União do Vegetal church members
Daime or Santo Daime, meaning "give me" in Portuguese; the term was coined by Santo Daime's founder Mestre Irineu in the 1940s, from the prayer dai-me alegria, dai-me resistência ("give me happiness, give me strength"). Daime members also use the words Luz ("light") or Santa Luz ("holy light")
Some names derive from the cultural and symbolic significance of ayahuasca, such as planta professora ("plant teacher"), professor dos professores ("teacher of the teachers"), sagrada medicina ("holy medicine") or la purga ("the purge").
Other names in the Western world
In the last decades, two new important terminologies emerged. Both are commonly used in the Western world in neoshamanic, recreative or pharmaceutical contexts to address ayahuasca-like substances created without the traditional botanical species, due to it being expensive and/or hard to find in these countries. These concepts are surrounded by some controversies involving ethnobotany, patents, commodification and biopiracy:
Anahuasca (ayahuasca analogues). A term usually used to refer to the ayahuasca produced with other plant species as sources of DMT (e.g., Mimosa hostilis) or β-carbolines (e.g., Peganum harmala).
Pharmahuasca (pharmaceutical ayahuasca). This indicates the pills produced from freebase DMT, synthetic harmaline, MAOI medications (such as moclobemide) and other isolated or purified compounds or extracts.
History
Origins
Archaeological evidence of the use of psychoactive plants in northeastern Amazon dates back to 1500–2000 BCE. Anthropomorphic figurines, snuffing trays and pottery vessels, often adorned with mythological figures and sacred animals, offer a glimpse of the pre-Columbian culture regarding use of the sacred plants, their preparation and ritual consumption. Although several botanical specimens (like tobacco, coca and Anadenanthera spp.) were identified among these objects, there is no unequivocal evidence of this date referring directly to ayahuasca. Banisteriopsis caapi use is suggested from a pouch containing carved snuffing trays, bone spatulas and other paraphernalia with traces of harmine and DMT, discovered in a cave in southwestern Bolivia in 2008, and chemical traces of harmine in the hair of two mummies found in northern Chile. Both cases are linked to Tiwanaku people, circa 900 CE. There are several reports of oral and nasal use of Anadenanthera spp. (rich in bufotenin) ritualistically and therapeutically during labor and infancy, and researchers suggest that addition of Banisteriopsis spp. to catalyze its psychoactivity emerged later, due to contact between different groups of Amazon and Altiplano.
Despite claims by numerous anthropologists and ethnologists, such as Plutarco Naranjo, regarding the millennial usage of ayahuasca, compelling evidence substantiating its pre-Columbian consumption is yet to be firmly established. As articulated by Dennis McKenna: "No one can say for certain where the practice may have originated, and about all that can be stated with certainty is that is already spread among numerous indigenous tribes throughout Amazon basin by the time ayahuasca came to the attention of Western ethnographers in the mid-nineteenth century". The first western references to the ayahuasca beverage date back to the seventeenth century, during the European colonization of the Americas. The earliest report is a letter from Vincente de Valverde to the Holy Office of the Inquisition. Jose Chantre y Herrera, still in the seventeenth century, provided the first detailed description of a "devilish potion" cooked from bitter herbs and lianas (called ayaguasca) and its rituals: "[...] In other nations, they set aside an entire night for divination. For this purpose, they select the most capable house in the vicinity because many people are expected to attend the event. The diviner hangs his bed in the middle and places an infernal potion, known as ayahuasca, by his side, which is particularly effective at altering one's senses. They prepare a brew from bitter vines or herbs, which, when boiled sufficiently, must become quite potent. Since it's so strong at altering one's judgment in small quantities, the precaution is not excessive, and it fits into two small pots. The witch doctor drinks a very small amount each time and knows well how many times he can sample the brew without losing his senses to properly conduct the ritual and lead the choir". Another report, produced in 1737 by the missionary Pablo Maroni, describes the use of a psychoactive liana called ayahuasca for divination in the Napo River, Ecuador: "For divination, they use a beverage, some of white datura flowers, which they also call Campana due to its shape, and others from a vine commonly known as Ayahuasca, both highly effective at numbing the senses and even at taking one's life if taken in excess. They also occasionally use these substances for the treatment of common illnesses, especially headaches. So, the person who wants to divine drinks the chosen substance with certain rituals, and while deprived of their senses from the mouth downwards, to prevent the strength of the plant from harming them, they remain in this state for many hours and sometimes even two or three days until the effects run their course, and the intoxication subsides. After this, they reflect on what their imagination revealed, which occasionally remains with them for delirium. This is what they consider accomplished and propagate as an oracle." Later reports were produced by Juan Magnin in 1740, describing ayahuasca use as a medicinal plant by the Jivaroan peoples (called ayahuessa), and by Franz Xaver Veigl in 1768, who reported on several "dangerous plants", including a bitter liana used for precognition and sorcery. All these reports were written in the context of the Jesuit missions in South America, especially the Mainas missions, in Latin, and were sent only to Rome, so their audience was small and they were promptly lost in the archives. For this reason, ayahuasca received little attention for the entire subsequent century.
Early academic research
In academic discourse, the initial mention of ayahuasca dates back to Manuel Villavicencio's 1848 book, "Geografía de la República del Ecuador." This work vividly delineates the use of and rituals involving ayahuasca among the Jivaro people. Concurrently, Richard Spruce embarked on an Amazonian expedition in 1852 to collect and classify previously unidentified botanical specimens. During this journey, Spruce encountered and documented Banisteriopsis caapi (at the time named Banisteria caapi) and observed an ayahuasca ceremony among the Tucano community situated along the Vaupés River. Subsequently, Spruce uncovered the usage and cultivation of B. caapi among various indigenous groups dispersed across the Amazon and Orinoco basins, like the Guahibo and Sápara. These multifarious encounters, together with Spruce's personal accounts of subjective ayahuasca experiences, were collated in his 1873 work, "Notes of a Botanist on the Amazon and Andes". By the end of the century, other explorers and anthropologists contributed more extensive documentation concerning ayahuasca, notably Theodor Koch-Grünberg's documentation of Tucano and Arecuna rituals and ceremonies, Stradelli's first-hand reports of ayahuasca rituals and mythology along the Jurupari and Vaupés, and Alfred Simson's first description of the admixture of several ingredients in the making of ayahuasca in the Putumayo region, published in 1886.
In 1905, Rafael Zerda Bayón named the active extract of ayahuasca telepathine, a name later used by the Colombian chemist Guillermo Fischer Cárdenas when he isolated the substance in 1932. Contemporaneously, Lewin and Gunn were independently studying the properties of banisterine, extracted from B. caapi, and its effects in animal models. Further clinical trials were being conducted, exploring the effects of banisterine on Parkinson's disease. Later it was found that telepathine and banisterine are the same substance, identical to a chemical already isolated from Peganum harmala and given the name harmine.
Shamanism, mestizos and vegetalistas
Researchers like Peter Gow and Brabec de Mori argue that ayahuasca use indeed developed alongside the Jesuit missions after the 17th century. By examining the ícaros (ayahuasca-related healing chants), they found that the chants are always sung in Quechua (a lingua franca across the Jesuit and Franciscan missions in the region), no matter the linguistic background of the group, with similar language structures between different ícaros that are markedly different from other indigenous songs. Moreover, the cosmology of ayahuasca often mirrors Catholicism, with particular similarities in the belief that ayahuasca is the body of ayahuascamama, imbibed as part of the ritual, much as wine and bread are taken to be the body and blood of Jesus Christ during the Christian Eucharist. Brabec de Mori called this "Christian camouflage" and argued that, rather than being a way of disguising the ayahuasca ritual, it indicates that the practice evolved entirely within these contexts.
Indeed, the colonial processes in the Western Amazon are intrinsically related to the development of ayahuasca use in the last three centuries, as they deeply reshaped traditional ways of life in the region. Many indigenous groups moved into the Missions, seeking protection from the death and slavery promoted by the Bandeiras, inter-tribal violence, starvation and disease (smallpox). This movement produced an intense cultural exchange and resulted in the formation of mestizos (in Spanish) or caboclos (in Portuguese), a social category formed by people of mixed European and native ancestry, who were an important part of the economy and culture of the region. According to Peter Gow, ayahuasca shamanism (the use of ayahuasca by a trained shaman to diagnose and cure illnesses) was developed by these mestizos in the processes of colonial transformation. The Amazon rubber cycles (1879–1912 and 1942–1945) sped up these transformations, due to slavery, genocide and brutality against indigenous populations and large migratory movements, especially from the Brazilian Northeast Region as a workforce for the rubber plantations. The mestizo practices became deeply intertwined with the culture of rubber workers, called caucheros (in Spanish) or seringueiros (in Portuguese). Ayahuasca use with therapeutic goals is the main result of this transcultural diffusion, with some practitioners pointing to the caucheros as mainly responsible for using ayahuasca to cure all sorts of ailments of the body, mind and soul, and some regions even using the term Yerba de Cauchero ("rubber-worker herb"). As a result, the ayahuasca shamans in urban areas and mestizo settlements, especially in the regions of Iquitos and Pucallpa (in Peru), became the vegetalistas, folk healers who are said to gain all their knowledge from the plants and the spirits bound to them.
So the vegetalist movement was a heterogeneous mixture of Western Amazon (mestizo shamanic practices and cauchero culture) and Andean elements (shaped by other migratory movements, like those originated from Cuzco through Urubamba Valley and from western Ecuador), influenced by Christian aspects derived from the Jesuit missions, as reflected by the mythology, rituals and moral codes related to vegetalista ayahuasca use.
Ayahuasca religions
Although mestizo, vegetalista and indigenous ayahuasca use was part of a longer tradition, these several configurations of mestizo vegetalismo were not isolated phenomena. At the end of the nineteenth century, several messianic/millennialist cults sprang up in semi-urban areas across the entire Amazon region, merging different elements of indigenous and mestizo folk culture with Catholicism, Spiritism and Protestantism. In this context, the use of ayahuasca took the form of urban, organized non-indigenous religions on the outskirts of the main cities of northwestern Brazil (along the basins of the Madeira, Juruá and Purus rivers), within the cauchero/seringueiro cultural complex, resignifying and adapting both vegetalista and mestizo shamanism to new urban formations, unifying essential elements to build a cosmology for the newly emerging cults and merging them with elements of folk Catholicism, African-Brazilian religions and Kardecist spiritism. These new cults arose from charismatic leaders, often messianic and prophetic, sometimes called ayahuasqueiros, who came from rural areas after migration movements to semi-urban communities across the borders of Brazil, Bolivia and Peru (a region that would later form the state of Acre). This new configuration of these belief systems is referred to by Goulart as tradição religiosa ayahuasqueira urbana amazônica ("urban-Amazonian ayahuasqueiro religious tradition") or by Labate as campo ayahuasqueiro brasileiro ("Brazilian ayahuasqueiro field"), emerging as three main structured religions: Santo Daime and Barquinha, in Rio Branco, and the União do Vegetal (UDV), in Porto Velho. Notwithstanding shared characteristics besides ayahuasca use, these three denominations have several particularities regarding their practices, conceptions and processes of building social legitimacy and relationships with the Brazilian government, media, science and other sectors of society. Since the latter half of the twentieth century, the ayahuasca religions have expanded to other parts of Brazil and to several countries around the world, notably in the West.
Modern use
Beat writer William S. Burroughs read a paper by Richard Evans Schultes on the subject and while traveling through South America in the early 1950s sought out ayahuasca in the hopes that it could relieve or cure opiate addiction (see The Yage Letters). Ayahuasca became more widely known when the McKenna brothers published their experience in the Amazon in True Hallucinations. Dennis McKenna later studied pharmacology, botany, and chemistry of ayahuasca and oo-koo-he, which became the subject of his master's thesis.
Richard Evans Schultes allowed Claudio Naranjo to make a special journey by canoe up the Amazon River to study ayahuasca with the South American Indians. He brought back samples of the beverage and published the first scientific description of the effects of its active alkaloids.
In recent years, the brew has been popularized by Wade Davis (One River), English novelist Martin Goodman in I Was Carlos Castaneda, Chilean novelist Isabel Allende, writer Kira Salak, author Jeremy Narby (The Cosmic Serpent), author Jay Griffiths (Wild: An Elemental Journey), American novelist Steven Peck, radio personality Robin Quivers, writer Paul Theroux (Figures in a Landscape: People and Places), and NFL quarterback Aaron Rodgers.
Preparation
Sections of Banisteriopsis caapi vine are macerated and boiled alone or with leaves from any of a number of other plants, including Psychotria viridis (chacruna), Diplopterys cabrerana (also known as chaliponga and chacropanga), and Mimosa tenuiflora, among other ingredients which can vary greatly from one shaman to the next. The resulting brew may contain the powerful psychedelic drug dimethyltryptamine (DMT) and monoamine oxidase-inhibiting harmala alkaloids, which are necessary to make the DMT orally active by protecting it from being broken down in the gut and liver. The traditional making of ayahuasca follows a ritual process that requires the user to pick the lower Chacruna leaf at sunrise, then say a prayer. The vine must be "cleaned meticulously with wooden spoons" and pounded "with wooden mallets until it's fibre."
Brews can also be made with plants that do not contain DMT, Psychotria viridis being replaced by plants such as Justicia pectoralis, Brugmansia, or sacred tobacco, also known as mapacho (Nicotiana rustica), or sometimes left out with no replacement. This brew varies radically from one batch to the next, both in potency and psychoactive effect, based mainly on the skill of the shaman or brewer, as well as other admixtures sometimes added and the intent of the ceremony. Natural variations in plant alkaloid content and profiles also affect the final concentration of alkaloids in the brew, and the physical act of cooking may also serve to modify the alkaloid profile of harmala alkaloids.
The actual preparation of the brew takes several hours, often taking place over the course of more than one day. After adding the plant material, each separately at this stage, to a large pot of water, it is boiled until the water is reduced by half in volume. The individual brews are then added together and brewed until reduced significantly. This combined brew is what is taken by participants in ayahuasca ceremonies.
Traditional use
The uses of ayahuasca in traditional societies in South America vary greatly. Some cultures do use it for shamanic purposes, but in other cases, it is consumed socially among friends, in order to learn more about the natural environment, and even in order to visit friends and family who are far away.
Nonetheless, people who work with ayahuasca in non-traditional contexts often align themselves with the philosophies and cosmologies associated with ayahuasca shamanism, as practiced among Indigenous peoples like the Urarina of the Peruvian Amazon. Dietary taboos are often associated with the use of ayahuasca, although these seem to be specific to the culture around Iquitos, Peru, a major center of ayahuasca tourism. Ayahuasca retreats or healing centers can also be found in the Sacred Valley of Peru, in areas such as Cusco and Urubamba, where similar dietary preparations can be observed. These retreats often employ members of the Shipibo-Konibo tribe, an indigenous community native to the Peruvian Amazon.
In the rainforest, these taboos tend towards the purification of one's self—abstaining from spicy and heavily seasoned foods, excess fat, salt, caffeine, acidic foods (such as citrus) and sex before, after, or during a ceremony. A diet low in foods containing tyramine has been recommended, as the speculative interaction of tyramine and MAOIs could lead to a hypertensive crisis; however, evidence indicates that harmala alkaloids act only on MAO-A, in a reversible way similar to moclobemide (an antidepressant that does not require dietary restrictions). Dietary restrictions are not used by the highly urban Brazilian ayahuasca church União do Vegetal, suggesting the risk is much lower than perceived and probably non-existent.
Ceremony and the role of shamans
Shamans, curanderos and experienced users of ayahuasca advise against consuming ayahuasca when not in the presence of one or several well-trained shamans.
In some areas, there are purported brujos (Spanish for "witches") who masquerade as real shamans and who entice tourists to drink ayahuasca in their presence. Shamans believe one of the purposes for this is to steal one's energy and/or power, of which they believe every person has a limited stockpile.
The shamans lead the ceremonial consumption of the ayahuasca beverage, in a rite that typically takes place over the entire night. During the ceremony, the effect of the drink lasts for hours. Prior to the ceremony, participants are instructed to abstain from spicy foods, red meat and sex. The ceremony is usually accompanied by purging, including vomiting and diarrhea, which is believed to release built-up emotions and negative energy.
Shipibo-Konibo and their relation to Ayahuasca
It is believed that the Shipibo-Konibo are among the earliest practitioners of Ayahuasca ceremonies, with their connection to the brew and ceremonies surrounding it dating back centuries, perhaps a millennium.
Some members of the Shipibo community have taken to the media to express their views on Ayahuasca entering the mainstream, with some calling it "the commercialization of ayahuasca." Some of them have even expressed their worry regarding the increased popularity, saying "the contemporary 'ayahuasca ceremony' may be understood as a substitute for former cosmogonical rituals that are nowadays not performed anymore."
Icaros
The Shipibo have their own language, called Shipibo, a Panoan language spoken by approximately 26,000 people in Peru and Brazil. This language is commonly sung by the shaman in the form of a chant, called an Icaro, during the Ayahuasca ritual as a way to establish a "balance of energy" during the ritual to help protect and guide the user during their experience.
Traditional brew
Traditional ayahuasca brews are usually made with Banisteriopsis caapi as an MAOI, while dimethyltryptamine sources and other admixtures vary from region to region. There are several varieties of caapi, often known as different "colors", with varying effects, potencies, and uses.
DMT admixtures:
Psychotria viridis (Chacruna) – leaves
Psychotria carthagenensis (Amyruca) – leaves
Diplopterys cabrerana (Chaliponga, Chagropanga, Banisteriopsis rusbyana) – leaves
Mimosa tenuiflora (M. hostilis) - root bark
Other common admixtures:
Justicia pectoralis
Brugmansia sp. (Toé)
Opuntia sp.
Epiphyllum sp.
Cyperus sp.
Nicotiana rustica (Mapacho, variety of tobacco)
Ilex guayusa, a relative of yerba mate
Lygodium venustum, (Tchai del monte)
Phrygilanthus eugenioides and Clusia sp (both called Miya)
Lomariopsis japurensis (Shoka)
Common admixtures with their associated ceremonial values and spirits:
Ayahuma bark: Cannon Ball tree. Provides protection and is used in healing susto (soul loss from spiritual fright or trauma).
Capirona bark: Provides cleansing, balance and protection. It is noted for its smooth bark, white flowers, and hard wood.
Chullachaki caspi bark (Brysonima christianeae): Provides cleansing to the physical body. Used to transcend physical body ailments.
Lopuna blanca bark: Provides protection.
Punga amarilla bark: Yellow Punga. Provides protection. Used to pull or draw out negative spirits or energies.
Remo caspi bark: Oar Tree. Used to move dense or dark energies.
Wyra (huaira) caspi bark (Cedrelinga catanaeformis): Air Tree. Used to create purging, transcend gastro/intestinal ailments, calm the mind, and bring tranquility.
Shiwawaku bark: Brings purple medicine to the ceremony.
Uchu sanango: Head of the sanango plants.
Huacapurana: Giant tree of the Amazon with very hard bark.
Bobinsana (Calliandra angustifolia): Mermaid Spirit. Provides major heart chakra opening, healing of emotions and relationships.
Non-traditional usage
In the late 20th century, the practice of ayahuasca drinking began spreading to Europe, North America and elsewhere. The first ayahuasca churches, affiliated with the Brazilian Santo Daime, were established in the Netherlands. A legal case was filed against two of the Church's leaders, Hans Bogers (one of the original founders of the Dutch Santo Daime community) and Geraldine Fijneman (the head of the Amsterdam Santo Daime community). Bogers and Fijneman were charged with distributing a controlled substance (DMT); however, the prosecution was unable to prove that the use of ayahuasca by members of the Santo Daime constituted a sufficient threat to public health and order such that it warranted denying their rights to religious freedom under ECHR Article 9. The 2001 verdict of the Amsterdam district court is an important precedent. Since then groups that are not affiliated to the Santo Daime have used ayahuasca, and a number of different "styles" have been developed, including non-religious approaches.
Ayahuasca analogs
In modern Europe and North America, ayahuasca analogs are often prepared using non-traditional plants which contain the same alkaloids. For example, seeds of the Syrian rue plant can be used as a substitute for the ayahuasca vine, and the DMT-rich Mimosa hostilis is used in place of chacruna. Australia has several indigenous plants which are popular among modern ayahuasqueros there, such as various DMT-rich species of Acacia.
The name "ayahuasca" specifically refers to a botanical decoction that contains Banisteriopsis caapi. A synthetic version, known as pharmahuasca, is a combination of an appropriate MAOI and typically DMT. In this usage, the DMT is generally considered the main psychoactive active ingredient, while the MAOI merely preserves the psychoactivity of orally ingested DMT, which would otherwise be destroyed in the gut before it could be absorbed in the body. In contrast, traditionally among Amazonian tribes, the B. Caapi vine is considered to be the "spirit" of ayahuasca, the gatekeeper, and guide to the otherworldly realms.
Brews similar to ayahuasca may be prepared using several plants not traditionally used in South America:
DMT admixtures:
Acacia maidenii (Maiden's wattle) – bark *not all plants are "active strains", meaning some plants will have very little DMT and others larger amounts
Acacia phlebophylla, and other Acacias, most commonly employed in Australia – bark
Anadenanthera peregrina, A. colubrina, A. excelsa, A. macrocarpa
Desmanthus illinoensis (Illinois bundleflower) – root bark is mixed with a native source of beta-Carbolines (e.g., passion flower in North America) to produce a hallucinogenic drink called prairiehuasca.
MAOI admixtures:
Harmal (Peganum harmala, Syrian rue) – seeds
Passion flower
synthetic MAOIs, especially RIMAs (due to the dangers presented by irreversible MAOIs)
Effects
Adverse effects
Ingesting Ayahuasca can cause nausea, vomiting and diarrhea in the short term. Other short-term side effects include increased blood pressure and tachycardia. Rarer side effects include dyspnea, seizures and serotonin syndrome. Ayahuasca is suspected of triggering psychosis in people with a predisposition to the condition, and there is a lack of safety information for Ayahuasca's possible effects on pregnancy and breastfeeding.
Psychological effects
People who have consumed ayahuasca report having mystical experiences and spiritual revelations regarding their purpose on earth, the true nature of the universe, and deep insight into how to be the best person they possibly can. Many people also report therapeutic effects, especially around depression and personal traumas.
This is viewed by many as a spiritual awakening and what is often described as a near-death experience or rebirth. It is often reported that individuals feel they gain access to higher spiritual dimensions and make contact with various spiritual or extra-dimensional beings who can act as guides or healers.
The experiences that people have while under the influence of ayahuasca are also culturally influenced. Westerners typically describe experiences with psychological terms like "ego death" and understand the hallucinations as repressed memories or metaphors of mental states. However, at least in Iquitos, Peru (a center of ayahuasca ceremonies), those from the area describe the experiences more in terms of the actions in the body and understand the visions as reflections of their environment, sometimes including the person who they believe caused their illness, as well as interactions with spirits.
Potential therapeutic effects
There are potential antidepressant and anxiolytic effects of ayahuasca.
Ayahuasca has also been studied for the treatment of addictions and shown to be effective, with lower Addiction Severity Index scores seen in users of ayahuasca compared to controls. Ayahuasca users have also been seen to consume less alcohol.
Pharmacology
Harmala alkaloids
Harmala alkaloids are MAO-inhibiting beta-carbolines. The three most studied harmala alkaloids in the B. caapi vine are harmine, harmaline and tetrahydroharmine. Harmine and harmaline are selective and reversible inhibitors of monoamine oxidase A (MAO-A), while tetrahydroharmine is a weak serotonin reuptake inhibitor (SRI).
Individual polymorphisms of the cytochrome P450-2D6 enzyme affect the ability of individuals to metabolize harmine.
Legal status
Internationally, DMT is a Schedule I drug under the Convention on Psychotropic Substances. The Commentary on the Convention on Psychotropic Substances notes, however, that the plants containing it are not subject to international control:
A fax from the Secretary of the International Narcotics Control Board (INCB) to the Netherlands Ministry of Public Health sent in 2001 goes on to state that "Consequently, preparations (e.g. decoctions) made of these plants, including ayahuasca, are not under international control and, therefore, not subject to any of the articles of the 1971 Convention."
Despite the INCB's 2001 affirmation that ayahuasca is not subject to drug control by international convention, in its 2010 Annual Report the Board recommended that governments consider controlling (i.e. criminalizing) ayahuasca at the national level. This recommendation by the INCB has been criticized as an attempt by the Board to overstep its legitimate mandate and as establishing a reason for governments to violate the human rights (i.e., religious freedom) of ceremonial ayahuasca drinkers.
Under American federal law, DMT is a Schedule I drug that is illegal to possess or consume; however, certain religious groups have been legally permitted to consume ayahuasca. A court case allowing the União do Vegetal to import and use the tea for religious purposes in the United States, Gonzales v. O Centro Espírita Beneficente União do Vegetal, was heard by the U.S. Supreme Court on November 1, 2005; the decision, released February 21, 2006, allows the UDV to use the tea in its ceremonies pursuant to the Religious Freedom Restoration Act. In a similar case, an Ashland, Oregon-based Santo Daime church sued for its right to import and consume ayahuasca tea. In March 2009, U.S. District Court Judge Panner ruled in favor of the Santo Daime, acknowledging its protection from prosecution under the Religious Freedom Restoration Act.
In 2017 the Santo Daime Church Céu do Montréal in Canada received religious exemption to use ayahuasca as a sacrament in their rituals.
Religious use in Brazil was legalized after two official inquiries into the tea in the mid-1980s, which concluded that ayahuasca is not a recreational drug and has valid spiritual uses.
In France, Santo Daime won a court case allowing them to use the tea in early 2005; however, they were not allowed an exception for religious purposes, but rather for the simple reason that they did not perform chemical extractions to end up with pure DMT and harmala and the plants used were not scheduled. Four months after the court victory, the common ingredients of ayahuasca as well as harmala were declared stupéfiants, or narcotic schedule I substances, making the tea and its ingredients illegal to use or possess.
In June 2019, Oakland, California, decriminalized natural entheogens. The City Council passed the resolution in a unanimous vote, ending the investigation and imposition of criminal penalties for use and possession of entheogens derived from plants or fungi. The resolution states: "Practices with Entheogenic Plants have long existed and have been considered to be sacred to human cultures and human interrelationships with nature for thousands of years, and continue to be enhanced and improved to this day by religious and spiritual leaders, practicing professionals, mentors, and healers throughout the world, many of whom have been forced underground."
In January 2020, Santa Cruz, California, and in September 2020, Ann Arbor, Michigan, decriminalized natural entheogens.
Intellectual property issues
Ayahuasca has stirred debate regarding intellectual property protection of traditional knowledge. In 1986 the US Patent and Trademarks Office (PTO) allowed the granting of a patent on the ayahuasca vine B. caapi. It allowed this patent based on the assumption that ayahuasca's properties had not been previously described in writing. Several public interest groups, including the Coordinating Body of Indigenous Organizations of the Amazon Basin (COICA) and the Coalition for Amazonian Peoples and Their Environment (Amazon Coalition) objected. In 1999 they brought a legal challenge to this patent which had granted a private US citizen "ownership" of the knowledge of a plant that is well-known and sacred to many Indigenous peoples of the Amazon, and used by them in religious and healing ceremonies.
Later that year the PTO issued a decision rejecting the patent, on the basis that the petitioners' arguments that the plant was not "distinctive or novel" were valid; however, the decision did not acknowledge the argument that the plant's religious or cultural values prohibited a patent. In 2001, after an appeal by the patent holder, the US Patent Office reinstated the patent, albeit to only a specific plant and its asexually reproduced offspring. The law at the time did not allow a third party such as COICA to participate in that part of the reexamination process. The patent, held by US entrepreneur Loren Miller, expired in 2003.
See also
Changa
Icaro
Kambo (drug)
Ibogaine
Yachay
Notes
Explanatory notes
Citations
Further reading
Burroughs, William S. and Allen Ginsberg. The Yage Letters. San Francisco: City Lights, 1963.
Langdon, E. Jean Matteson & Gerhard Baer, eds. Portals of Power: Shamanism in South America. Albuquerque: University of New Mexico Press, 1992.
Shannon, Benny. The Antipodes of the Mind: Charting the Phenomenology of the Ayahuasca Experience. Oxford: Oxford University Press, 2002.
Taussig, Michael. Shamanism, Colonialism, and the Wild Man: A Study in Terror and Healing. Chicago: University of Chicago Press, 1986.
External links
Biopiracy
Entheogens
Herbal and fungal hallucinogens
Indigenous culture of the Amazon
Mixed drinks
Monoamine oxidase inhibitors
Polysubstance drinks | Ayahuasca | [
"Biology"
] | 8,660 | [
"Biopiracy",
"Biodiversity"
] |
2,330 | https://en.wikipedia.org/wiki/Abbe%20number | In optics and lens design, the Abbe number, also known as the Vd-number or constringence of a transparent material, is an approximate measure of the material's dispersion (change of refractive index versus wavelength), with high values of Vd indicating low dispersion. It is named after Ernst Abbe (1840–1905), the German physicist who defined it. The term Vd-number should not be confused with the normalized frequency in fibers.
The Abbe number, V_d, of a material is defined as

V_d = (n_d - 1) / (n_F - n_C),

where n_C, n_d, and n_F are the refractive indices of the material at the wavelengths of the Fraunhofer C, d, and F spectral lines (656.3 nm, 587.56 nm, and 486.1 nm respectively). This formulation applies only to the range of human vision; outside this range, different spectral lines must be used. For non-visible spectral lines the term "V-number" is more commonly used. The more general formulation is defined as

V = (n_center - 1) / (n_short - n_long),

where n_short, n_center, and n_long are the refractive indices of the material at three different wavelengths. The shortest wavelength's index is n_short, and the longest's is n_long.
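As a minimal numerical illustration of the definition, the sketch below computes V_d from approximate catalogue-style indices for a common borosilicate crown glass (BK7-type); the exact values vary by manufacturer and are assumptions here rather than authoritative data.

```python
# Compute the Abbe number V_d = (n_d - 1) / (n_F - n_C).
# The indices are approximate values for a BK7-type crown glass.
def abbe_number(n_center: float, n_short: float, n_long: float) -> float:
    """General V-number: (n_center - 1) / (n_short - n_long)."""
    return (n_center - 1.0) / (n_short - n_long)

n_C, n_d, n_F = 1.51432, 1.51680, 1.52238   # indices at 656.3, 587.6, 486.1 nm
V_d = abbe_number(n_d, n_F, n_C)
print(round(V_d, 1))   # ≈ 64.1, i.e. a low-dispersion crown glass
```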
Abbe numbers are used to classify glass and other optical materials in terms of their chromaticity. For example, the higher dispersion flint glasses have relatively small Abbe numbers whereas the lower dispersion crown glasses have larger Abbe numbers. Values of V_d range from below 25 for very dense flint glasses, around 34 for polycarbonate plastics, up to 65 for common crown glasses, and 75 to 85 for some fluorite and phosphate crown glasses.
Abbe numbers are used in the design of achromatic lenses, as their reciprocal is proportional to dispersion (slope of refractive index versus wavelength) in the wavelength region where the human eye is most sensitive (see graph). For different wavelength regions, or for higher precision in characterizing a system's chromaticity (such as in the design of apochromats), the full dispersion relation (refractive index as a function of wavelength) is used.
Abbe diagram
An Abbe diagram, also called 'the glass veil', is produced by plotting the Abbe number V_d of a material versus its refractive index n_d. Glasses can then be categorised and selected according to their positions on the diagram. This can be a letter-number code, as used in the Schott Glass catalogue, or a 6-digit glass code.
Glasses' Abbe numbers, along with their mean refractive indices, are used in the calculation of the required refractive powers of the elements of achromatic lenses in order to cancel chromatic aberration to first order. These two parameters which enter into the equations for design of achromatic doublets are exactly what is plotted on an Abbe diagram.
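A sketch of that first-order calculation for a thin doublet in contact is given below; the crown and flint Abbe numbers and the total power are illustrative assumptions rather than data for particular glasses. The element powers P1 and P2 are chosen so that P1 + P2 equals the required total power while the chromatic terms P1/V1 + P2/V2 cancel.

```python
# First-order achromatic doublet: split a required total power between a
# crown and a flint element so that chromatic aberration cancels.
# v_crown and v_flint below are illustrative, not specific catalogue glasses.
def achromat_powers(p_total: float, v_crown: float, v_flint: float):
    p_crown = p_total * v_crown / (v_crown - v_flint)
    p_flint = -p_total * v_flint / (v_crown - v_flint)
    return p_crown, p_flint

p1, p2 = achromat_powers(p_total=10.0, v_crown=60.0, v_flint=36.0)  # dioptres
print(p1, p2)                    # 25.0 and -15.0 dioptres, summing to 10.0
print(p1 / 60.0 + p2 / 36.0)     # ≈ 0: first-order chromatic aberration cancels
```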
Due to the difficulty and inconvenience in producing sodium and hydrogen lines, alternate definitions of the Abbe number are often substituted (ISO 7944). For example, rather than the standard definition given above, which uses the refractive index variation between the F and C hydrogen lines, one alternative measure, using the subscript "e" for mercury's e line compared with cadmium's F' and C' lines, is

V_e = (n_e - 1) / (n_F' - n_C').

This alternate takes the difference between cadmium's blue (F') and red (C') refractive indices at wavelengths 480.0 nm and 643.8 nm, relative to n_e for mercury's e line at 546.073 nm, all of which are close by, and somewhat easier to produce than the C, F, and d lines. Other definitions can similarly be employed; the following table lists standard wavelengths at which n is commonly determined, including the standard subscripts used.
Derivation
Starting from the Lensmaker's equation, we obtain the thin lens equation by dropping a small term that accounts for lens thickness, d:

P = 1/f = (n - 1) (1/R_1 - 1/R_2), when d → 0.

The change of refractive power P between the two wavelengths λ_short and λ_long is given by

ΔP = (n_short - n_long) (1/R_1 - 1/R_2),

where n_short and n_long are the short and long wavelengths' refractive indexes, respectively, and below, n_center is for the center.

The power difference can be expressed relative to the power at the center wavelength,

P_center = (n_center - 1) (1/R_1 - 1/R_2).

By multiplying and dividing by n_center - 1 and regrouping, we get

ΔP = [(n_short - n_long) / (n_center - 1)] P_center = P_center / V.

The relative change is thus inversely proportional to V:

ΔP / P_center = 1 / V.
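A quick numerical check of this result is sketched below, using the same approximate crown-glass indices as in the earlier example and arbitrarily chosen surface radii; the values are illustrative, not taken from any specific catalogue.

```python
# Numerical check that the relative power change of a thin lens across the
# F and C lines equals 1/V. Indices are approximate BK7-type values; the
# radii R1, R2 are arbitrary illustrative choices.
n_C, n_d, n_F = 1.51432, 1.51680, 1.52238
R1, R2 = 0.060, -0.085          # metres

def thin_lens_power(n: float) -> float:
    """Thin-lens power P = (n - 1)(1/R1 - 1/R2)."""
    return (n - 1.0) * (1.0 / R1 - 1.0 / R2)

delta_P = thin_lens_power(n_F) - thin_lens_power(n_C)
relative_change = delta_P / thin_lens_power(n_d)

V_d = (n_d - 1.0) / (n_F - n_C)
print(relative_change, 1.0 / V_d)   # both ≈ 0.0156
```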
See also
Abbe prism
Abbe refractometer
Calculation of glass properties, including Abbe number
Glass code
Sellmeier equation, more comprehensive and physically based modeling of dispersion
References
External links
Abbe graph and data for 356 glasses from Ohara, Hoya, and Schott
Dimensionless numbers of physics
Optical quantities
Glass physics | Abbe number | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 937 | [
"Glass engineering and science",
"Physical quantities",
"Quantity",
"Glass physics",
"Condensed matter physics",
"Optical quantities"
] |
2,341 | https://en.wikipedia.org/wiki/Alkaloid | Alkaloids are a broad class of naturally occurring organic compounds that contain at least one nitrogen atom. Some synthetic compounds of similar structure may also be termed alkaloids.
Alkaloids are produced by a large variety of organisms including bacteria, fungi, plants, and animals. They can be purified from crude extracts of these organisms by acid-base extraction, or solvent extractions followed by silica-gel column chromatography. Alkaloids have a wide range of pharmacological activities including antimalarial (e.g. quinine), antiasthma (e.g. ephedrine), anticancer (e.g. homoharringtonine), cholinomimetic (e.g. galantamine), vasodilatory (e.g. vincamine), antiarrhythmic (e.g. quinidine), analgesic (e.g. morphine), antibacterial (e.g. chelerythrine), and antihyperglycemic activities (e.g. berberine). Many have found use in traditional or modern medicine, or as starting points for drug discovery. Other alkaloids possess psychotropic (e.g. psilocin) and stimulant activities (e.g. cocaine, caffeine, nicotine, theobromine), and have been used in entheogenic rituals or as recreational drugs. Alkaloids can be toxic too (e.g. atropine, tubocurarine). Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly evoke a bitter taste.
The boundary between alkaloids and other nitrogen-containing natural compounds is not clear-cut. Most alkaloids are basic, although some have neutral and even weakly acidic properties. In addition to carbon, hydrogen and nitrogen, alkaloids may also contain oxygen or sulfur. Rarer still, they may contain elements such as phosphorus, chlorine, and bromine. Compounds like amino acid peptides, proteins, nucleotides, nucleic acid, amines, and antibiotics are usually not called alkaloids. Natural compounds containing nitrogen in the exocyclic position (mescaline, serotonin, dopamine, etc.) are usually classified as amines rather than as alkaloids. Some authors, however, consider alkaloids a special case of amines.
Naming
The name "alkaloids" () was introduced in 1819 by German chemist Carl Friedrich Wilhelm Meissner, and is derived from late Latin root and the Greek-language suffix -('like'). However, the term came into wide use only after the publication of a review article, by Oscar Jacobsen in the chemical dictionary of Albert Ladenburg in the 1880s.
There is no unique method for naming alkaloids. Many individual names are formed by adding the suffix "ine" to the species or genus name. For example, atropine is isolated from the plant Atropa belladonna; strychnine is obtained from the seed of the Strychnine tree (Strychnos nux-vomica L.). Where several alkaloids are extracted from one plant their names are often distinguished by variations in the suffix: "idine", "anine", "aline", "inine" etc. There are also at least 86 alkaloids whose names contain the root "vin" because they are extracted from vinca plants such as Vinca rosea (Catharanthus roseus); these are called vinca alkaloids.
History
Alkaloid-containing plants have been used by humans since ancient times for therapeutic and recreational purposes. For example, medicinal plants have been known in Mesopotamia from about 2000 BC. The Odyssey of Homer referred to a gift given to Helen by the Egyptian queen, a drug bringing oblivion. It is believed that the gift was an opium-containing drug. A Chinese book on houseplants written in 1st–3rd centuries BC mentioned a medical use of ephedra and opium poppies. Also, coca leaves have been used by Indigenous South Americans since ancient times.
Extracts from plants containing toxic alkaloids, such as aconitine and tubocurarine, were used since antiquity for poisoning arrows.
Studies of alkaloids began in the 19th century. In 1804, the German chemist Friedrich Sertürner isolated from opium a "soporific principle", which he called "morphium", referring to Morpheus, the Greek god of dreams; in German and some other Central-European languages, this is still the name of the drug. The term "morphine", used in English and French, was given by the French physicist Joseph Louis Gay-Lussac.
A significant contribution to the chemistry of alkaloids in the early years of its development was made by the French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou, who discovered quinine (1820) and strychnine (1818). Several other alkaloids were discovered around that time, including xanthine (1817), atropine (1819), caffeine (1820), coniine (1827), nicotine (1828), colchicine (1833), sparteine (1851), and cocaine (1860). The development of the chemistry of alkaloids was accelerated by the emergence of spectroscopic and chromatographic methods in the 20th century, so that by 2008 more than 12,000 alkaloids had been identified.
The first complete synthesis of an alkaloid was achieved in 1886 by the German chemist Albert Ladenburg. He produced coniine by reacting 2-methylpyridine with acetaldehyde and reducing the resulting 2-propenyl pyridine with sodium.
Classifications
Compared with most other classes of natural compounds, alkaloids are characterized by a great structural diversity. There is no uniform classification. Initially, when knowledge of chemical structures was lacking, botanical classification of the source plants was relied on. This classification is now considered obsolete.
More recent classifications are based on similarity of the carbon skeleton (e.g., indole-, isoquinoline-, and pyridine-like) or biochemical precursor (ornithine, lysine, tyrosine, tryptophan, etc.). However, they require compromises in borderline cases; for example, nicotine contains a pyridine fragment from nicotinamide and a pyrrolidine part from ornithine and therefore can be assigned to both classes.
Alkaloids are often divided into the following major groups:
"True alkaloids" contain nitrogen in the heterocycle and originate from amino acids. Their characteristic examples are atropine, nicotine, and morphine. This group also includes some alkaloids that besides the nitrogen heterocycle contain terpene (e.g., evonine) or peptide fragments (e.g. ergotamine). The piperidine alkaloids coniine and coniceine may be regarded as true alkaloids (rather than pseudoalkaloids: see below) although they do not originate from amino acids.
"Protoalkaloids", which contain nitrogen (but not the nitrogen heterocycle) and also originate from amino acids. Examples include mescaline, adrenaline and ephedrine.
Polyamine alkaloids – derivatives of putrescine, spermidine, and spermine.
Peptide and cyclopeptide alkaloids.
Pseudoalkaloids – alkaloid-like compounds that do not originate from amino acids. This group includes terpene-like and steroid-like alkaloids, as well as purine-like alkaloids such as caffeine, theobromine, theacrine and theophylline. Some authors classify ephedrine and cathinone as pseudoalkaloids. Those originate from the amino acid phenylalanine, but acquire their nitrogen atom not from the amino acid but through transamination.
Some alkaloids do not have the carbon skeleton characteristic of their group. Thus, galanthamine and homoaporphines do not contain an isoquinoline fragment, but are, in general, attributed to isoquinoline alkaloids.
Main classes of monomeric alkaloids are listed in the table below:
Properties
Most alkaloids contain oxygen in their molecular structure; those compounds are usually colorless crystals at ambient conditions. Oxygen-free alkaloids, such as nicotine or coniine, are typically volatile, colorless, oily liquids. Some alkaloids are colored, like berberine (yellow) and sanguinarine (orange).
Most alkaloids are weak bases, but some, such as theobromine and theophylline, are amphoteric. Many alkaloids dissolve poorly in water but readily dissolve in organic solvents, such as diethyl ether, chloroform or 1,2-dichloroethane. Caffeine, cocaine, codeine and nicotine are slightly soluble in water (with a solubility of ≥1g/L), whereas others, including morphine and yohimbine are very slightly water-soluble (0.1–1 g/L). Alkaloids and acids form salts of various strengths. These salts are usually freely soluble in water and ethanol and poorly soluble in most organic solvents. Exceptions include scopolamine hydrobromide, which is soluble in organic solvents, and the water-soluble quinine sulfate.
Most alkaloids have a bitter taste or are poisonous when ingested. Alkaloid production in plants appeared to have evolved in response to feeding by herbivorous animals; however, some animals have evolved the ability to detoxify alkaloids. Some alkaloids can produce developmental defects in the offspring of animals that consume but cannot detoxify the alkaloids. One example is the alkaloid cyclopamine, produced in the leaves of corn lily. During the 1950s, up to 25% of lambs born by sheep that had grazed on corn lily had serious facial deformations. These ranged from deformed jaws to cyclopia. After decades of research, in the 1980s, the compound responsible for these deformities was identified as the alkaloid 11-deoxyjervine, later renamed to cyclopamine.
Distribution in nature
Alkaloids are generated by various living organisms, especially by higher plants – about 10 to 25% of those contain alkaloids. Therefore, in the past the term "alkaloid" was associated with plants.
The alkaloid content in plants is usually within a few percent and is inhomogeneous across the plant tissues. Depending on the type of plant, the maximum concentration is observed in the leaves (for example, black henbane), fruits or seeds (Strychnine tree), roots (Rauvolfia serpentina) or bark (cinchona). Furthermore, different tissues of the same plant may contain different alkaloids.
Besides plants, alkaloids are found in certain types of fungi, such as psilocybin in the fruiting bodies of the genus Psilocybe, and in animals, such as bufotenin in the skin of some toads and in a number of insects, notably ants. Many marine organisms also contain alkaloids. Some amines, such as adrenaline and serotonin, which play an important role in higher animals, are similar to alkaloids in their structure and biosynthesis and are sometimes called alkaloids.
Extraction
Because of the structural diversity of alkaloids, there is no single method of their extraction from natural raw materials. Most methods exploit the property of most alkaloids to be soluble in organic solvents but not in water, and the opposite tendency of their salts.
Most plants contain several alkaloids. Their mixture is extracted first and then individual alkaloids are separated. Plants are thoroughly ground before extraction. Most alkaloids are present in the raw plants in the form of salts of organic acids. The extracted alkaloids may remain salts or change into bases. Base extraction is achieved by processing the raw material with alkaline solutions and extracting the alkaloid bases with organic solvents, such as 1,2-dichloroethane, chloroform, diethyl ether or benzene. Then, the impurities are dissolved by weak acids; this converts alkaloid bases into salts that are washed away with water. If necessary, an aqueous solution of alkaloid salts is again made alkaline and treated with an organic solvent. The process is repeated until the desired purity is achieved.
In the acidic extraction, the raw plant material is processed by a weak acidic solution (e.g., acetic acid in water, ethanol, or methanol). A base is then added to convert alkaloids to basic forms that are extracted with organic solvent (if the extraction was performed with alcohol, it is removed first, and the remainder is dissolved in water). The solution is purified as described above.
Alkaloids are separated from their mixture using their different solubility in certain solvents and different reactivity with certain reagents or by distillation.
A number of alkaloids are identified from insects, among which the fire ant venom alkaloids known as solenopsins have received greater attention from researchers. These insect alkaloids can be efficiently extracted by solvent immersion of live fire ants or by centrifugation of live ants followed by silica-gel chromatography purification. Tracking and dosing the extracted solenopsin ant alkaloids has been described as possible based on their absorbance peak around 232 nanometers.
Biosynthesis
Biological precursors of most alkaloids are amino acids, such as ornithine, lysine, phenylalanine, tyrosine, tryptophan, histidine, aspartic acid, and anthranilic acid. Nicotinic acid can be synthesized from tryptophan or aspartic acid. Ways of alkaloid biosynthesis are too numerous and cannot be easily classified. However, there are a few typical reactions involved in the biosynthesis of various classes of alkaloids, including synthesis of Schiff bases and Mannich reaction.
Synthesis of Schiff bases
Schiff bases can be obtained by reacting amines with ketones or aldehydes. These reactions are a common method of producing C=N bonds.
In the biosynthesis of alkaloids, such reactions may take place within a molecule, such as in the synthesis of piperidine.
Mannich reaction
An integral component of the Mannich reaction, in addition to an amine and a carbonyl compound, is a carbanion, which plays the role of the nucleophile in the nucleophilic addition to the iminium ion formed by the reaction of the amine and the carbonyl.
The Mannich reaction can proceed both intermolecularly and intramolecularly.
Dimer alkaloids
In addition to the monomeric alkaloids described above, there are also dimeric, and even trimeric and tetrameric, alkaloids formed upon condensation of two, three, and four monomeric alkaloids. Dimeric alkaloids are usually formed from monomers of the same type through the following mechanisms:
Mannich reaction, resulting in, e.g., voacamine
Michael reaction (villalstonine)
Condensation of aldehydes with amines (toxiferine)
Oxidative addition of phenols (dauricine, tubocurarine)
Lactonization (carpaine).
There are also dimeric alkaloids formed from two distinct monomers, such as the vinca alkaloids vinblastine and vincristine, which are formed from the coupling of catharanthine and vindoline. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer. It is another derivative dimer of vindoline and catharanthine and is synthesised from anhydrovinblastine, starting either from leurosine or the monomers themselves.
Biological role
Alkaloids are among the most important and best-known secondary metabolites, i.e. biogenic substances not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. In some cases their function, if any, remains unclear. An early hypothesis, that alkaloids are the final products of nitrogen metabolism in plants, as urea and uric acid are in mammals, was refuted by the finding that their concentration fluctuates rather than steadily increasing.
Most of the known functions of alkaloids are related to protection. For example, the aporphine alkaloid liriodenine produced by the tulip tree protects it from parasitic mushrooms. In addition, the presence of alkaloids in the plant prevents insects and chordate animals from eating it. However, some animals are adapted to alkaloids and even use them in their own metabolism. Such alkaloid-related substances as serotonin, dopamine and histamine are important neurotransmitters in animals. Alkaloids are also known to regulate plant growth. One example of an organism that uses alkaloids for protection is the Utetheisa ornatrix, more commonly known as the ornate moth. Pyrrolizidine alkaloids render these larvae and adult moths unpalatable to many of their natural enemies like coccinellid beetles, green lacewings, insectivorous hemiptera and insectivorous bats. Another example of alkaloids being utilized occurs in the poison hemlock moth (Agonopterix alstroemeriana). This moth feeds on its highly toxic and alkaloid-rich host plant poison hemlock (Conium maculatum) during its larval stage. A. alstroemeriana may benefit twofold from the toxicity of the naturally-occurring alkaloids, both through the unpalatability of the species to predators and through the ability of A. alstroemeriana to recognize Conium maculatum as the correct location for oviposition. A fire ant venom alkaloid known as solenopsin has been demonstrated to protect queens of invasive fire ants during the foundation of new nests, thus playing a central role in the spread of this pest ant species around the world.
Applications
In medicine
Medical use of alkaloid-containing plants has a long history, and, thus, when the first alkaloids were isolated in the 19th century, they immediately found application in clinical practice. Many alkaloids are still used in medicine, usually in the form of salts.
Many synthetic and semisynthetic drugs are structural modifications of the alkaloids, which were designed to enhance or change the primary effect of the drug and reduce unwanted side-effects. For example, naloxone, an opioid receptor antagonist, is a derivative of thebaine that is present in opium.
In agriculture
Prior to the development of a wide range of relatively low-toxic synthetic pesticides, some alkaloids, such as salts of nicotine and anabasine, were used as insecticides. Their use was limited by their high toxicity to humans.
Use as psychoactive drugs
Preparations of plants and fungi containing alkaloids and their extracts, and later pure alkaloids, have long been used as psychoactive substances. Cocaine, caffeine, and cathinone are stimulants of the central nervous system. Mescaline and many indole alkaloids (such as psilocybin, dimethyltryptamine and ibogaine) have hallucinogenic effects. Morphine and codeine are strong narcotic painkillers.
There are alkaloids that do not have strong psychoactive effects themselves, but are precursors for semi-synthetic psychoactive drugs. For example, ephedrine and pseudoephedrine are used to produce methcathinone and methamphetamine. Thebaine is used in the synthesis of many painkillers such as oxycodone.
See also
Amine
Base (chemistry)
List of poisonous plants
Mayer's reagent
Natural products
Palau'amine
Secondary metabolite
Explanatory notes
Citations
General and cited references
External links | Alkaloid | [
"Chemistry"
] | 4,335 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Natural products",
"Alkaloids"
] |
2,349 | https://en.wikipedia.org/wiki/Abstract%20data%20type | In computer science, an abstract data type (ADT) is a mathematical model for data types, defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. This mathematical model contrasts with data structures, which are concrete representations of data, and are the point of view of an implementer, not a user. For example, a stack has push/pop operations that follow a Last-In-First-Out rule, and can be concretely implemented using either a list or an array. Another example is a set which stores values, without any particular order, and no repeated values. Values themselves are not retrieved from sets; rather, one tests a value for membership to obtain a Boolean "in" or "not in".
ADTs are a theoretical concept, used in formal semantics and program verification and, less strictly, in the design and analysis of algorithms, data structures, and software systems. Most mainstream computer languages do not directly support formally specifying ADTs. However, various language features correspond to certain aspects of implementing ADTs, and are easily confused with ADTs proper; these include abstract types, opaque data types, protocols, and design by contract. For example, in modular programming, the module declares procedures that correspond to the ADT operations, often with comments that describe the constraints. This information hiding strategy allows the implementation of the module to be changed without disturbing the client programs, but the module only informally defines an ADT. The notion of abstract data types is related to the concept of data abstraction, important in object-oriented programming and design by contract methodologies for software engineering.
History
ADTs were first proposed by Barbara Liskov and Stephen N. Zilles in 1974, as part of the development of the CLU language. Algebraic specification was an important subject of research in computer science around 1980 and almost a synonym for abstract data types at that time. It has a mathematical foundation in universal algebra.
Definition
Formally, an ADT is analogous to an algebraic structure in mathematics, consisting of a domain, a collection of operations, and a set of constraints the operations must satisfy. The domain is often defined implicitly, for example the free object over the set of ADT operations. The interface of the ADT typically refers only to the domain and operations, and perhaps some of the constraints on the operations, such as pre-conditions and post-conditions; but not to other constraints, such as relations between the operations, which are considered behavior. There are two main styles of formal specifications for behavior, axiomatic semantics and operational semantics.
Despite not being part of the interface, the constraints are still important to the definition of the ADT; for example a stack and a queue have similar add element/remove element interfaces, but it is the constraints that distinguish last-in-first-out from first-in-first-out behavior. The constraints do not consist only of equations such as fetch(store(V, x)) = x but also of logical formulas.
Axiomatic semantics
In the spirit of functional programming, each state of an abstract data structure is a separate entity or value. In this view, each operation is modelled as a mathematical function with no side effects. Operations that modify the ADT are modeled as functions that take the old state as an argument and return the new state as part of the result. The order in which operations are evaluated is immaterial, and the same operation applied to the same arguments (including the same input states) will always return the same results (and output states). The constraints are specified as axioms or algebraic laws that the operations must satisfy.
Operational semantics
In the spirit of imperative programming, an abstract data structure is conceived as an entity that is mutable—meaning that there is a notion of time and the ADT may be in different states at different times. Operations then change the state of the ADT over time; therefore, the order in which operations are evaluated is important, and the same operation on the same entities may have different effects if executed at different times. This is analogous to the instructions of a computer or the commands and procedures of an imperative language. To underscore this view, it is customary to say that the operations are executed or applied, rather than evaluated, similar to the imperative style often used when describing abstract algorithms. The constraints are typically specified in prose.
Auxiliary operations
Presentations of ADTs are often limited in scope to only key operations. More thorough presentations often specify auxiliary operations on ADTs, such as:
create(), that yields a new instance of the ADT;
compare(s, t), that tests whether two instances' states are equivalent in some sense;
hash(s), that computes some standard hash function from the instance's state;
print(s) or show(s), that produces a human-readable representation of the instance's state.
These names are illustrative and may vary between authors. In imperative-style ADT definitions, one often finds also:
initialize(s), that prepares a newly created instance s for further operations, or resets it to some "initial state";
copy(s, t), that puts instance s in a state equivalent to that of t;
clone(t), that performs s ← create(), copy(s, t), and returns s;
free(s) or destroy(s), that reclaims the memory and other resources used by s.
The free operation is not normally relevant or meaningful, since ADTs are theoretical entities that do not "use memory". However, it may be necessary when one needs to analyze the storage used by an algorithm that uses the ADT. In that case, one needs additional axioms that specify how much memory each ADT instance uses, as a function of its state, and how much of it is returned to the pool by free.
Restricted types
The definition of an ADT often restricts the stored value(s) for its instances, to members of a specific set X called the range of those variables. For example, an abstract variable may be constrained to only store integers. As in programming languages, such restrictions may simplify the description and analysis of algorithms, and improve its readability.
Aliasing
In the operational style, it is often unclear how multiple instances are handled and if modifying one instance may affect others. A common style of defining ADTs writes the operations as if only one instance exists during the execution of the algorithm, and all operations are applied to that instance. For example, a stack may have operations push(x) and pop(), that operate on the only existing stack. ADT definitions in this style can be easily rewritten to admit multiple coexisting instances of the ADT, by adding an explicit instance parameter (like S in the stack example below) to every operation that uses or modifies the implicit instance. Some ADTs cannot be meaningfully defined without allowing multiple instances, for example when a single operation takes two distinct instances of the ADT as parameters, such as a union operation on sets or a concatenation operation on lists.
The multiple instance style is sometimes combined with an aliasing axiom, namely that the result of create() is distinct from any instance already in use by the algorithm. Implementations of ADTs may still reuse memory and allow implementations of create() to yield a previously created instance; however, defining that such an instance even is "reused" is difficult in the ADT formalism.
More generally, this axiom may be strengthened to exclude also partial aliasing with other instances, so that composite ADTs (such as trees or records) and reference-style ADTs (such as pointers) may be assumed to be completely disjoint. For example, when extending the definition of an abstract variable to include abstract records, operations upon a field F of a record variable R clearly involve F, which is distinct from, but also a part of, R. A partial aliasing axiom would state that changing a field of one record variable does not affect any other records.
Complexity analysis
Some authors also include the computational complexity ("cost") of each operation, both in terms of time (for computing operations) and space (for representing values), to aid in analysis of algorithms. For example, one may specify that each operation takes the same time and each value takes the same space regardless of the state of the ADT, or that there is a "size" of the ADT and the operations are linear, quadratic, etc. in the size of the ADT. Alexander Stepanov, designer of the C++ Standard Template Library, included complexity guarantees in the STL specification, arguing for their importance.
Other authors disagree, arguing that a stack ADT is the same whether it is implemented with a linked list or an array, despite the difference in operation costs, and that an ADT specification should be independent of implementation.
Examples
Abstract variable
An abstract variable may be regarded as the simplest non-trivial ADT, with the semantics of an imperative variable. It admits two operations, fetch and store. Operational definitions are often written in terms of abstract variables. In the axiomatic semantics, letting Var be the type of the abstract variable and X be the type of its contents, fetch is a function Var → X and store is a function of type Var × X → Var. The main constraint is that fetch always returns the value x used in the most recent store operation on the same variable V, i.e. fetch(store(V, x)) = x. We may also require that store overwrites the value fully, store(store(V, x), y) = store(V, y).
In the operational semantics, fetch(V) is a procedure that returns the current value in the location V, and store(V, x) is a procedure with no return value that stores the value x in the location V. The constraints are described informally, namely that reads are consistent with writes. As in many programming languages, the operation store(V, x) is often written V ← x (or some similar notation), and fetch(V) is implied whenever a variable V is used in a context where a value is required. Thus, for example, V ← V + 1 is commonly understood to be a shorthand for store(V, fetch(V) + 1).
In this definition, it is implicitly assumed that names are always distinct: storing a value into a variable U has no effect on the state of a distinct variable V. To make this assumption explicit, one could add the constraint that:
if U and V are distinct variables, the sequence { store(U, x); store(V, y) } is equivalent to { store(V, y); store(U, x) }.
This definition does not say anything about the result of evaluating fetch(V) when V is un-initialized, that is, before performing any store operation on V. Fetching before storing can be disallowed, defined to have a certain result, or left unspecified. There are some algorithms whose efficiency depends on the assumption that such a fetch is legal, and returns some arbitrary value in the variable's range.
Abstract stack
An abstract stack is a last-in-first-out structure. It is generally defined by three key operations: push, that inserts a data item onto the stack; pop, that removes a data item from it; and top or peek, that accesses a data item on top of the stack without removal. A complete abstract stack definition also includes a Boolean-valued function empty(S) and a create() operation that returns an initial stack instance.
In the axiomatic semantics, letting S be the type of stack states and X be the type of values contained in the stack, these could have the types push: S × X → S, pop: S → S, top: S → X, empty: S → Boolean, and create: () → S. In the axiomatic semantics, creating the initial stack is a "trivial" operation, and always returns the same distinguished state. Therefore, it is often designated by a special symbol like Λ or "()". The empty predicate can then be written simply as s = Λ or s ≠ Λ.
The constraints are then pop(push(S, x)) = S, top(push(S, x)) = x, empty(Λ) = T (a newly created stack is empty), empty(push(S, x)) = F (pushing something into a stack makes it non-empty). These axioms do not define the effect of top(s) or pop(s), unless s is a stack state returned by a push. Since push leaves the stack non-empty, those two operations can be defined to be invalid when s = Λ. From these axioms (and the lack of side effects), it can be deduced that push(Λ, x) ≠ Λ. Also, push(s, x) = push(t, y) if and only if x = y and s = t.
As in some other branches of mathematics, it is customary to assume also that the stack states are only those whose existence can be proved from the axioms in a finite number of steps. In this case, it means that every stack is a finite sequence of values, that becomes the empty stack (Λ) after a finite number of pops. By themselves, the axioms above do not exclude the existence of infinite stacks (that can be popped forever, each time yielding a different state) or circular stacks (that return to the same state after a finite number of pops). In particular, they do not exclude states s such that pop(s) = s or push(s, x) = s for some x. However, since one cannot obtain such stack states from the initial stack state with the given operations, they are assumed "not to exist".
In the operational definition of an abstract stack, push(S, x) returns nothing and pop(S) yields the value as the result but not the new state of the stack. There is then the constraint that, for any value x and any abstract variable V, the sequence of operations { push(S, x); V ← pop(S) } is equivalent to V ← x. Since the assignment V ← x, by definition, cannot change the state of S, this condition implies that V ← pop(S) restores S to the state it had before the push(S, x). From this condition and from the properties of abstract variables, it follows, for example, that the sequence:
{ push(S, x); push(S, y); U ← pop(S); push(S, z); V ← pop(S); W ← pop(S) }
where x, y, and z are any values, and U, V, W are pairwise distinct variables, is equivalent to:
{ U ← y; V ← z; W ← x }
Unlike the axiomatic semantics, the operational semantics can suffer from aliasing. Here it is implicitly assumed that operations on a stack instance do not modify the state of any other ADT instance, including other stacks; that is:
For any values x, y, and any distinct stacks S and T, the sequence { push(S, x); push(T, y) } is equivalent to { push(T, y); push(S, x) }.
Boom hierarchy
A more involved example is the Boom hierarchy of the binary tree, list, bag and set abstract data types. All these data types can be declared by three operations: null, which constructs the empty container; single, which constructs a container from a single element; and append, which combines two containers of the same type. The complete specification for the four data types can then be given by successively adding the following rules over these operations:
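The table of rules is not reproduced in this text. In one common presentation of the hierarchy (a hedged sketch with illustrative notation, not necessarily the exact formulation of the source), null is the neutral element of append and each type adds one law to the previous one:
binary tree: append(A, null) = A = append(null, A) (no further laws)
list: additionally, append(A, append(B, C)) = append(append(A, B), C) (associativity)
bag: additionally, append(A, B) = append(B, A) (commutativity)
set: additionally, append(A, A) = A (idempotence)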
Access to the data can be specified by pattern-matching over the three operations; e.g., a member function for these containers can be defined by:
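(The defining equations are likewise not reproduced here; the following is a minimal illustrative sketch, with names and notation assumed rather than taken from the source.)
member(x, null) = false
member(x, single(y)) = (x = y)
member(x, append(A, B)) = member(x, A) or member(x, B)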
Care must be taken to ensure that the function is invariant under the relevant rules for the data type. Within each of the equivalence classes implied by the chosen subset of equations, it has to yield the same result for all of its members.
Common ADTs
Some common ADTs, which have proved useful in a great variety of applications, are
Collection
Container
List
String
Set
Multiset
Map
Multimap
Graph
Tree
Stack
Queue
Priority queue
Double-ended queue
Double-ended priority queue
Each of these ADTs may be defined in many ways and variants, not necessarily equivalent. For example, an abstract stack may or may not have a count operation that tells how many items have been pushed and not yet popped. This choice makes a difference not only for its clients but also for the implementation.
Abstract graphical data type
An extension of ADT for computer graphics was proposed in 1979: an abstract graphical data type (AGDT). It was introduced by Nadia Magnenat Thalmann and Daniel Thalmann. AGDTs provide the advantages of ADTs with facilities to build graphical objects in a structured way.
Implementation
Abstract data types are theoretical entities, used (among other things) to simplify the description of abstract algorithms, to classify and evaluate data structures, and to formally describe the type systems of programming languages. However, an ADT may be implemented. This means each ADT instance or state is represented by some concrete data type or data structure, and for each abstract operation there is a corresponding procedure or function, and these implemented procedures satisfy the ADT's specifications and axioms up to some standard. In practice, the implementation is not perfect, and users must be aware of issues due to limitations of the representation and implemented procedures.
For example, integers may be specified as an ADT, defined by the distinguished values 0 and 1, the operations of addition, subtraction, multiplication, division (with care for division by zero), comparison, etc., behaving according to the familiar mathematical axioms in abstract algebra such as associativity, commutativity, and so on. However, in a computer, integers are most commonly represented as fixed-width 32-bit or 64-bit binary numbers. Users must be aware of issues with this representation, such as arithmetic overflow, where the ADT specifies a valid result but the representation is unable to accommodate this value. Nonetheless, for many purposes, the user can ignore these infidelities and simply use the implementation as if it were the abstract data type.
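A minimal C sketch of such an infidelity (illustrative only; it assumes the usual fixed-width int representation and checks against limits.h rather than relying on undefined signed overflow):
#include <limits.h>
#include <stdio.h>

int main(void) {
    int a = INT_MAX;                  // largest value the concrete representation can hold
    // The abstract integer ADT promises that a + 1 > a, but the fixed-width
    // representation cannot accommodate the result; signed overflow is
    // undefined behaviour in C, so the program must check before adding.
    if (a > INT_MAX - 1)
        printf("a + 1 would overflow the representation\n");
    else
        printf("a + 1 = %d\n", a + 1);
    return 0;
}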
Usually, there are many ways to implement the same ADT, using several different concrete data structures. Thus, for example, an abstract stack can be implemented by a linked list or by an array. Different implementations of the ADT, having all the same properties and abilities, can be considered semantically equivalent and may be used somewhat interchangeably in code that uses the ADT. This provides a form of abstraction or encapsulation, and gives a great deal of flexibility when using ADT objects in different situations. For example, different implementations of the ADT may be more efficient in different situations; it is possible to use each in the situation where they are preferable, thus increasing overall efficiency. Code that uses an ADT implementation according to its interface will continue working even if the implementation of the ADT is changed.
In order to prevent clients from depending on the implementation, an ADT is often packaged as an opaque data type or handle of some sort, in one or more modules, whose interface contains only the signature (number and types of the parameters and results) of the operations. The implementation of the module—namely, the bodies of the procedures and the concrete data structure used—can then be hidden from most clients of the module. This makes it possible to change the implementation without affecting the clients. If the implementation is exposed, it is known instead as a transparent data type.
Modern object-oriented languages, such as C++ and Java, support a form of abstract data types. When a class is used as a type, it is an abstract type that refers to a hidden representation. In this model, an ADT is typically implemented as a class, and each instance of the ADT is usually an object of that class. The module's interface typically declares the constructors as ordinary procedures, and most of the other ADT operations as methods of that class. Many modern programming languages, such as C++ and Java, come with standard libraries that implement numerous ADTs in this style. However, such an approach does not easily encapsulate multiple representational variants found in an ADT. It also can undermine the extensibility of object-oriented programs. In a pure object-oriented program that uses interfaces as types, types refer to behaviours, not representations.
The specification of some programming languages is intentionally vague about the representation of certain built-in data types, defining only the operations that can be done on them. Therefore, those types can be viewed as "built-in ADTs". Examples are the arrays in many scripting languages, such as Awk, Lua, and Perl, which can be regarded as an implementation of the abstract list.
In a formal specification language, ADTs may be defined axiomatically, and the language then allows manipulating values of these ADTs, thus providing a straightforward and immediate implementation. The OBJ family of programming languages for instance allows defining equations for specification and rewriting to run them. Such automatic implementations are usually not as efficient as dedicated implementations, however.
Example: implementation of the abstract stack
As an example, here is an implementation of the abstract stack above in the C programming language.
Imperative-style interface
An imperative-style interface might be:
#include <stdbool.h> // needed for the bool type used below (prior to C23)
typedef struct stack_Rep stack_Rep; // type: stack instance representation (opaque record)
typedef stack_Rep* stack_T; // type: handle to a stack instance (opaque pointer)
typedef void* stack_Item; // type: value stored in stack instance (arbitrary address)
stack_T stack_create(void); // creates a new empty stack instance
void stack_push(stack_T s, stack_Item x); // adds an item at the top of the stack
stack_Item stack_pop(stack_T s); // removes the top item from the stack and returns it
bool stack_empty(stack_T s); // checks whether stack is empty
This interface could be used in the following manner:
#include <stack.h> // includes the stack interface
stack_T s = stack_create(); // creates a new empty stack instance
int x = 17;
stack_push(s, &x); // adds the address of x at the top of the stack
void* y = stack_pop(s); // removes the address of x from the stack and returns it
if (stack_empty(s)) { } // does something if stack is empty
This interface can be implemented in many ways. The implementation may be arbitrarily inefficient, since the formal definition of the ADT, above, does not specify how much space the stack may use, nor how long each operation should take. It also does not specify whether the stack state s continues to exist after a call x ← pop(s).
In practice the formal definition should specify that the space is proportional to the number of items pushed and not yet popped; and that every one of the operations above must finish in a constant amount of time, independently of that number. To comply with these additional specifications, the implementation could use a linked list, or an array (with dynamic resizing) together with two integers (an item count and the array size).
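One possible implementation along these lines, sketched with a dynamically resized array (illustrative only: allocation failures and popping an empty stack are not handled):
#include <stdbool.h>
#include <stdlib.h>
#include <stack.h> // the interface declared above

struct stack_Rep { // concrete representation behind the opaque stack_T handle
    stack_Item *items; // dynamically resized array of stored items
    int count; // number of items pushed and not yet popped
    int capacity; // current allocated size of the items array
};

stack_T stack_create(void) {
    stack_T s = malloc(sizeof *s);
    s->capacity = 8;
    s->count = 0;
    s->items = malloc(s->capacity * sizeof *s->items);
    return s;
}

void stack_push(stack_T s, stack_Item x) {
    if (s->count == s->capacity) { // double the array when it is full
        s->capacity *= 2;
        s->items = realloc(s->items, s->capacity * sizeof *s->items);
    }
    s->items[s->count++] = x;
}

stack_Item stack_pop(stack_T s) {
    return s->items[--s->count]; // caller must ensure the stack is not empty
}

bool stack_empty(stack_T s) {
    return s->count == 0;
}
With this representation the space used is proportional to the number of items pushed and not yet popped (up to the constant factor introduced by the doubling strategy), and each operation runs in amortized constant time, matching the additional specifications described above.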
Functional-style interface
Functional-style ADT definitions are more appropriate for functional programming languages, and vice versa. However, one can provide a functional-style interface even in an imperative language like C. For example:
typedef struct stack_Rep stack_Rep; // type: stack state representation (opaque record)
typedef stack_Rep* stack_T; // type: handle to a stack state (opaque pointer)
typedef void* stack_Item; // type: value of a stack state (arbitrary address)
stack_T stack_empty(void); // returns the empty stack state
stack_T stack_push(stack_T s, stack_Item x); // adds an item at the top of the stack state and returns the resulting stack state
stack_T stack_pop(stack_T s); // removes the top item from the stack state and returns the resulting stack state
stack_Item stack_top(stack_T s); // returns the top item of the stack state
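One way such a functional-style interface might be implemented is sketched below, under the assumption that stack states are immutable linked nodes that may share structure (memory reclamation is deliberately ignored, and the typedefs declared above are assumed to be in scope):
#include <stdlib.h>

struct stack_Rep { // a stack state: an immutable node, or NULL for the empty state
    stack_Item head; // top item of this stack state
    stack_T tail; // the stack state below the top
};

stack_T stack_empty(void) {
    return NULL; // the empty stack state is represented by the null pointer
}

stack_T stack_push(stack_T s, stack_Item x) {
    stack_T node = malloc(sizeof *node); // the new state shares the old state as its tail
    node->head = x;
    node->tail = s;
    return node;
}

stack_T stack_pop(stack_T s) {
    return s->tail; // caller must ensure s is not the empty state
}

stack_Item stack_top(stack_T s) {
    return s->head; // caller must ensure s is not the empty state
}
Because states are never modified in place, pushing onto a state leaves every previously obtained state intact, which is the behaviour the functional-style semantics requires.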
See also
Concept (generic programming)
Formal methods
Functional specification
Generalized algebraic data type
Initial algebra
Liskov substitution principle
Type theory
Walls and Mirrors
Notes
Citations
References
Further reading
External links
Abstract data type in NIST Dictionary of Algorithms and Data Structures
Data types
Type theory | Abstract data type | [
"Mathematics"
] | 5,008 | [
"Mathematical structures",
"Mathematical logic",
"Mathematical objects",
"Type theory",
"Abstract data types"
] |
2,362 | https://en.wikipedia.org/wiki/Antibody | An antibody (Ab) or immunoglobulin (Ig) is a large, Y-shaped protein belonging to the immunoglobulin superfamily which is used by the immune system to identify and neutralize antigens such as bacteria and viruses, including those that cause disease. Antibodies can recognize antigens of virtually any size and of diverse chemical composition. Each antibody recognizes one or more specific antigens. Antigen literally means "antibody generator", as it is the presence of an antigen that drives the formation of an antigen-specific antibody. Each tip of the "Y" of an antibody contains a paratope that specifically binds to one particular epitope on an antigen, allowing the two molecules to bind together with precision. Using this mechanism, antibodies can effectively "tag" a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion).
More narrowly, an antibody (Ab) can refer to the free (secreted) form of these proteins, as opposed to the membrane-bound form found in a B cell receptor. The term immunoglobulin can then refer to both forms. Since they are, broadly speaking, the same protein, the terms are often treated as synonymous.
To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. The rest of the antibody structure is much less variable; in humans, antibodies occur in five classes, sometimes called isotypes: IgA, IgD, IgE, IgG, and IgM. Human IgG and IgA antibodies are also divided into discrete subclasses (IgG1, IgG2, IgG3, IgG4; IgA1 and IgA2). The class refers to the functions triggered by the antibody (also known as effector functions), in addition to some other structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. Between species, while classes and subclasses of antibodies may be shared (at least in name), their functions and distribution throughout the body may be different. For example, mouse IgG1 is closer to human IgG2 than human IgG1 in terms of its function.
The term humoral immunity is often treated as synonymous with the antibody response, describing the function of the immune system that exists in the body's humors (fluids) in the form of soluble proteins, as distinct from cell-mediated immunity, which generally describes the responses of T cells (especially cytotoxic T cells). In general, antibodies are considered part of the adaptive immune system, though this classification can become complicated. For example, natural IgM, which are made by B-1 lineage cells that have properties more similar to innate immune cells than adaptive, refers to IgM antibodies made independently of an immune response that demonstrate polyreactivity- they recognize multiple distinct (unrelated) antigens. These can work with the complement system in the earliest phases of an immune response to help facilitate clearance of the offending antigen and delivery of the resulting immune complexes to the lymph nodes or spleen for initiation of an immune response. Hence in this capacity, the function of antibodies is more akin to that of innate immunity than adaptive. Nonetheless, in general antibodies are regarded as part of the adaptive immune system because they demonstrate exceptional specificity (with some exception), are produced through genetic rearrangements (rather than being encoded directly in germline), and are a manifestation of immunological memory.
In the course of an immune response, B cells can progressively differentiate into antibody-secreting cells or into memory B cells. Antibody-secreting cells comprise plasmablasts and plasma cells, which differ mainly in the degree to which they secrete antibody, their lifespan, metabolic adaptations, and surface markers. Plasmablasts are rapidly proliferating, short-lived cells produced in the early phases of the immune response (classically described as arising extrafollicularly rather than from the germinal center) which have the potential to differentiate further into plasma cells. Occasionally plasmablasts are described as short-lived plasma cells, but formally this is incorrect. Plasma cells, in contrast, do not divide (they are terminally differentiated), and rely on survival niches comprising specific cell types and cytokines to persist. Plasma cells will secrete huge quantities of antibody regardless of whether or not their cognate antigen is present, ensuring that antibody levels to the antigen in question do not fall to zero, provided the plasma cell stays alive. The rate of antibody secretion, however, can be regulated, for example, by the presence of adjuvant molecules that stimulate the immune response such as TLR ligands. Long-lived plasma cells can live for potentially the entire lifetime of the organism. Classically, the survival niches that house long-lived plasma cells reside in the bone marrow, though it cannot be assumed that any given plasma cell in the bone marrow will be long-lived. However, other work indicates that survival niches can readily be established within the mucosal tissues, though the classes of antibodies involved show a different hierarchy from those in the bone marrow. B cells can also differentiate into memory B cells which can persist for decades similarly to long-lived plasma cells. These cells can be rapidly recalled in a secondary immune response, undergoing class switching, affinity maturation, and differentiating into antibody-secreting cells.
Antibodies are central to the immune protection elicited by most vaccines and infections (although other components of the immune system certainly participate and for some diseases are considerably more important than antibodies in generating an immune response, e.g. herpes zoster). Durable protection from infections caused by a given microbe – that is, the ability of the microbe to enter the body and begin to replicate (not necessarily to cause disease) – depends on sustained production of large quantities of antibodies, meaning that effective vaccines ideally elicit persistent high levels of antibody, which relies on long-lived plasma cells. At the same time, many microbes of medical importance have the ability to mutate to escape antibodies elicited by prior infections, and long-lived plasma cells cannot undergo affinity maturation or class switching. This is compensated for through memory B cells: novel variants of a microbe that still retain structural features of previously encountered antigens can elicit memory B cell responses that adapt to those changes. It has been suggested that long-lived plasma cells secrete B cell receptors with higher affinity than those on the surfaces of memory B cells, but findings are not entirely consistent on this point.
Structure
Antibodies are heavy (~150 kDa) proteins of about 10 nm in size,
arranged in three globular regions that roughly form a Y shape.
In humans and most other mammals, an antibody unit consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds.
Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each.
These domains are usually represented in simplified schematics as rectangles.
Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ...
Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), containing one VL, VH, CL, and CH1 domain each, as well as the crystallisable fragment (Fc), forming the trunk of the Y shape.
In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily.
In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction.
Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ.
This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies.
Antigen-binding site
The variable domains can also be referred to as the FV region. It is the subregion of Fab that binds to an antigen.
More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody.
When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody.
These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen.
Three CDRs from each of the heavy and light chains together form an antibody-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen.
Typically though, only a few residues contribute to most of the binding energy.
The existence of two identical antibody-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes.
The structures of CDRs have been clustered and classified by Chothia et al.
and more recently by North et al.
and Nikoloudis et al. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes.
Fc region
The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is in modulating immune cell activity: it is where effector molecules bind to, triggering various effects after the antibody Fab region binds to an antigen.
Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot; therefore, IgA does not activate the classical complement pathway.
Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport it across the placenta, from the mother to the fetus. In addition to this, binding to FcRn endows IgG with an exceptionally long half-life of 3-4 weeks relative to other plasma proteins. IgG3 in most cases (depending on allotype) has mutations at the FcRn binding site which lower affinity for FcRn, which are thought to have evolved to limit the highly inflammatory effects of this subclass.
Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues.
These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules.
Protein structure
The N-terminus of each chain is situated at the tip.
Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily:
it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif.
The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond.
Antibody complexes
Secreted antibodies can occur as a single Y-shaped unit, a monomer.
However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units). IgG can also form hexamers, though no J chain is required. IgA tetramers and pentamers have also been reported.
Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex.
Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc.
Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies.
An extreme example is the clumping, or agglutination, of red blood cells with antibodies in blood typing to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation.
B cell receptors
The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors.
These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences.
Classes
Antibodies can come in different varieties known as isotypes or classes. In humans there are five antibody classes known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1, IgA2.
The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), μ (mu) give rise to IgA, IgG, IgD, IgE, IgM, respectively.
The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region.
The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table.
For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, often a sole contributor to asthma (though other pathways also exist, as do conditions whose symptoms closely resemble asthma without technically being asthma). The antibody's variable region binds to allergic antigen, for example house dust mite particles, while its Fc region (in the ε heavy chains) binds to Fc receptor ε on a mast cell, triggering its degranulation: the release of molecules stored in its granules.
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell surface bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system.
Light chain types
In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei).
In non-mammalian animals
In most placental mammals, the structure of antibodies is generally the same.
Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier.
Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains) which moreover feature longer chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies.
Antibody–antigen interactions
The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants.
Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities.
Function
The main categories of antibody action include the following:
Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective
Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis
Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis
Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following:
Lysis of the foreign cell
Encouragement of inflammation by chemotactically attracting inflammatory cells
More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity.
Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures.
At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: They prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies will also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens).
Activation of complement
Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis).
Activation of effector cells
To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region.
Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell; phagocytes will phagocytose, mast cells and neutrophils will degranulate, natural killer cells will release cytokines and cytotoxic molecules; that will ultimately result in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens.
Natural antibodies
Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Antibodies are produced exclusively by B cells in response to antigens where initially, antibodies are formed as membrane-bound receptors, but upon activation by antigens and helper T cells, B cells differentiate to produce soluble antibodies. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. These antibodies undergo quality checks in the endoplasmic reticulum (ER), which contains proteins that assist in proper folding and assembly. Rejection of xenotransplantated organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue.
Immunoglobulin diversity
Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes.
Domain variability
The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combination is called V(D)J recombination, discussed below.
V(D)J recombination
Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (i.e. V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences biology of B-cells.
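As a rough, illustrative calculation (the segment counts are approximate textbook figures assumed only for the arithmetic, apart from the roughly 65 heavy-chain V genes mentioned above):
heavy chains: ~65 V × ~27 D × ~6 J ≈ 10,000 combinations
kappa light chains: ~40 V × ~5 J ≈ 200 combinations
random pairing of heavy and light chains: ~10,000 × ~200 ≈ 2 × 10^6 distinct specificities
Junctional diversity at the segment joints and, later, somatic hypermutation multiply this figure by several further orders of magnitude, toward the estimate of about 10 billion distinct antibodies cited above.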
RAG proteins play an important role with V(D)J recombination in cutting DNA at a particular region. Without the presence of these proteins, V(D)J recombination would not occur.
After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus, each B cell can produce antibodies containing only one kind of variable chain.
Somatic hypermutation and affinity maturation
Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains.
This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival, allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells.
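The mutation-and-selection cycle just described can be caricatured in a toy simulation. Everything in the sketch below (the population size, the mutation step, the rule that only the higher-affinity half survives) is an arbitrary assumption chosen to show how repeated rounds of random change followed by selection raise the average affinity of the pool; it is not a biological model.

```python
import random

# Toy model of affinity maturation: each "B cell" is just an affinity score.
# Per round, every cell divides with a small random change to its affinity
# (standing in for somatic hypermutation), then only the higher-affinity
# half survives (standing in for T-cell-dependent selection).  All numbers
# are arbitrary illustrations.

random.seed(0)
population = [1.0] * 100          # start with identical, low-affinity cells

for generation in range(10):
    # each cell produces two daughters, each slightly mutated
    daughters = [cell + random.gauss(0, 0.1) for cell in population for _ in (0, 1)]
    daughters.sort(reverse=True)               # strongest binders first
    population = daughters[: len(population)]  # only the better half survives
    mean_affinity = sum(population) / len(population)
    print(f"round {generation + 1}: mean affinity = {mean_affinity:.2f}")
```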
Class switching
Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment.
Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype.
Specificity designations
An antibody can be called monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). In contrast, monoclonal antibodies are identical antibodies produced by a single B cell.
Asymmetrical antibodies
Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody dependent cell mediated cytotoxicity. Single-chain variable fragments (scFv) are connected to the variable domain of the heavy and light chain via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation.
To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms.
Heterodimeric antibodies have a greater range in shapes they can take and the drugs that are attached to the arms do not have to be the same on each arm, allowing for different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form should lead to decreased functionality.
Interchromosomal DNA transposition
Antibody diversification typically occurs through somatic hypermutation, class switching, and affinity maturation targeting the BCR gene loci, but on occasion more unconventional forms of diversification have been documented. For example, in the case of malaria caused by Plasmodium falciparum, some antibodies from those who had been infected demonstrated an insertion from chromosome 19 containing a 98-amino acid stretch from leukocyte-associated immunoglobulin-like receptor 1, LAIR1, in the elbow joint. This represents a form of interchromosomal transposition. LAIR1 normally binds collagen, but can recognize repetitive interspersed families of polypeptides (RIFIN) family members that are highly expressed on the surface of P. falciparum-infected red blood cells. In fact, these antibodies underwent affinity maturation that enhanced affinity for RIFIN but abolished affinity for collagen. These "LAIR1-containing" antibodies have been found in 5-10% of donors from Tanzania and Mali, though not in European donors. European donors did show 100-1000 nucleotide stretches inside the elbow joints as well, however. This particular phenomenon may be specific to malaria, as infection is known to induce genomic instability.
History
The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody has formal analogy to the word antitoxin and a similar concept to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; the antitoxin is something directed against a toxin, while the antibody is a body directed against something.
The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. This idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing, a process that he named opsonization.
In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies.
Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies.
Medical applications
Disease diagnosis
Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed.
In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis.
Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women.
In practice, several immunodiagnostic methods based on the detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over-the-counter pregnancy tests.
New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer.
Disease therapy
Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer.
Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual.
Prenatal therapy
Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn.
Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when a Rh-negative mother has a Rh-positive fetus. Treatment of a mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys Rh antigen from the fetus in the mother's system. This occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) immune globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.
Research applications
Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse for large quantities of antibody. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography.
In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques.
Antibodies used in research are some of the most powerful, yet most problematic, reagents, with a tremendous number of factors that must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent, and the state of the tissue. Multiple attempts have been made to improve both the way that researchers validate antibodies and the ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested, and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in four journals that implemented the RRID standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11).
Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several applications involve using bacterial plasmids, such as the pFUSE-Fc plasmid, to tag proteins with the Fc region of an antibody.
Regulations
Production and testing
There are several ways to obtain antibodies, including in vivo techniques like animal immunization and various in vitro approaches, such as the phage display method. Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include:
The demonstration that the process is able to produce antibody of good quality (the process should be validated)
The efficiency of the antibody purification (all impurities and virus must be eliminated)
The characterization of purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, ...)
Determination of the virus clearance studies
Before clinical trials
Product safety testing: sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, etc. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product.
Feasibility testing: These are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing).
Preclinical studies
Testing cross-reactivity of antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models).
Preclinical pharmacology and toxicity testing: preclinical safety testing of antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible.
Animal toxicity studies: Acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing
Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate the potential clinical effects
Structure prediction and computational antibody design
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen-binding affinity, and identifying the epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody Fv region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on the structural bioinformatics studies of antibody CDRs.
There are a variety of methods used to sequence an antibody, including Edman degradation and cDNA sequencing, among others; however, one of the most common modern approaches for peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for the data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase the coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in the attempt to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies have the ability to assemble protein sequences with high accuracy by integrating de novo sequencing of peptides, intensity, and positional confidence scores from database and homology searches.
Antibody mimetic
Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents.
Binding antibody unit
BAU (binding antibody unit, often as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity.
See also
Affimer
Anti-mitochondrial antibodies
Anti-nuclear antibodies
Antibody mimetic
Aptamer
Colostrum
ELISA
Humoral immunity
Immunology
Immunosuppressive drug
Intravenous immunoglobulin (IVIg)
Magnetic immunoassay
Microantibody
Monoclonal antibody
Neutralizing antibody
Optimer Ligand
Secondary antibodies
Single-domain antibody
Slope spectroscopy
Surrobody
Synthetic antibody
Western blot normalization
References
External links
Mike's Immunoglobulin Structure/Function Page at University of Cambridge
Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank
A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford
How Lymphocytes Produce Antibody from Cells Alive!
Glycoproteins
Immunology
Reagents for biochemistry | Antibody | [
"Chemistry",
"Biology"
] | 11,074 | [
"Reagents for biochemistry",
"Biochemistry methods",
"Biochemistry",
"Immunology",
"Glycoproteins",
"Glycobiology"
] |
2,389 | https://en.wikipedia.org/wiki/Auger%20effect | The Auger effect or Auger–Meitner effect is a physical phenomenon in which atoms eject electrons. It occurs when an inner-shell vacancy in an atom is filled by an electron, releasing energy that causes the emission of another electron from a different shell of the same atom.
When a core electron is removed, leaving a vacancy, an electron from a higher energy level may fall into the vacancy, resulting in a release of energy. For light atoms (Z<12), this energy is most often transferred to a valence electron which is subsequently ejected from the atom. This second ejected electron is called an Auger electron. For heavier atomic nuclei, the release of the energy in the form of an emitted photon becomes gradually more probable.
Effect
Upon ejection, the kinetic energy of the Auger electron corresponds to the difference between the energy of the initial electronic transition into the vacancy and the ionization energy for the electron shell from which the Auger electron was ejected. These energy levels depend on the type of atom and the chemical environment in which the atom was located.
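Written as a formula, this is the standard first approximation (not stated explicitly above): for a KL1L2,3 process, in which the K-shell vacancy is filled from the L1 shell and the Auger electron leaves from the L2,3 shell,

```latex
% First-approximation kinetic energy of a K L_1 L_{2,3} Auger electron,
% using neutral-atom binding energies E_K, E_{L_1} and E_{L_{2,3}}:
E_{\mathrm{kin}} \approx E_{K} - E_{L_1} - E_{L_{2,3}}
```

Measured Auger energies lie somewhat below this simple difference of neutral-atom binding energies, because the second electron is removed from an atom that is already ionized, so more careful treatments use an effective binding energy for the final shell.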
Auger electron spectroscopy involves the emission of Auger electrons by bombarding a sample with either X-rays or energetic electrons and measures the intensity of Auger electrons that result as a function of the Auger electron energy. The resulting spectra can be used to determine the identity of the emitting atoms and some information about their environment.
Auger recombination is a similar Auger effect which occurs in semiconductors. An electron and electron hole (electron-hole pair) can recombine, giving up their energy to an electron in the conduction band, increasing its energy. The reverse effect is known as impact ionization.
The Auger effect can impact biological molecules such as DNA. Following the K-shell ionization of the component atoms of DNA, Auger electrons are ejected leading to damage of its sugar-phosphate backbone.
Discovery
The Auger emission process was observed and published in 1922 by Lise Meitner, an Austrian-Swedish physicist, as a side effect in her competitive search for the nuclear beta electrons with the British physicist Charles Drummond Ellis.
The French physicist Pierre Victor Auger independently discovered it in 1923 upon analysis of a Wilson cloud chamber experiment and it became the central part of his PhD work. High-energy X-rays were applied to ionize gas particles and observe photoelectric electrons. The observation of electron tracks that were independent of the frequency of the incident photon suggested a mechanism for electron ionization that was caused from an internal conversion of energy from a radiationless transition. Further investigation, and theoretical work using elementary quantum mechanics and transition rate/transition probability calculations, showed that the effect was a radiationless effect more than an internal conversion effect.
See also
Auger therapy
Charge carrier generation and recombination
Characteristic X-ray
Coster–Kronig transition
Electron capture
Radiative Auger effect
References
Atomic physics
Foundational quantum physics
Electron spectroscopy | Auger effect | [
"Physics",
"Chemistry"
] | 596 | [
"Spectrum (physical sciences)",
"Electron spectroscopy",
"Foundational quantum physics",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
2,392 | https://en.wikipedia.org/wiki/Anode | An anode usually is an electrode of a polarized electrical device through which conventional current enters the device. This contrasts with a cathode, which is usually an electrode of the device through which conventional current leaves the device. A common mnemonic is ACID, for "anode current into device". The direction of conventional current (the flow of positive charges) in a circuit is opposite to the direction of electron flow, so (negatively charged) electrons flow from the anode of a galvanic cell, into an outside or external circuit connected to the cell. For example, the end of a household battery marked with a "+" is the cathode (while discharging).
In both a galvanic cell and an electrolytic cell, the anode is the electrode at which the oxidation reaction occurs. In a galvanic cell the anode is the wire or plate having excess negative charge as a result of the oxidation reaction. In an electrolytic cell, the anode is the wire or plate upon which excess positive charge is imposed. As a result of this, anions will tend to move towards the anode where they will undergo oxidation.
Historically, the anode of a galvanic cell was also known as the zincode because it was usually composed of zinc.
Charge flow
The terms anode and cathode are not defined by the voltage polarity of electrodes, but are usually defined by the direction of current through the electrode. An anode usually is the electrode of a device through which conventional current (positive charge) flows into the device from an external circuit, while a cathode usually is the electrode through which conventional current flows out of the device.
In general, if the current through the electrodes reverses direction, as occurs for example in a rechargeable battery when it is being charged, the roles of the electrodes as anode and cathode are reversed. However, the definition of anode and cathode is different for electrical devices such as diodes and vacuum tubes where the electrode naming is fixed and does not depend on the actual charge flow (current). These devices usually allow substantial current flow in one direction but negligible current in the other direction. Therefore, the electrodes are named based on the direction of this "forward" current. In a diode the anode is the terminal through which current enters and the cathode is the terminal through which current leaves, when the diode is forward biased. The names of the electrodes do not change in cases where reverse current flows through the device. Similarly, in a vacuum tube only one electrode can thermionically emit electrons into the evacuated tube, so electrons can only enter the device from the external circuit through the heated electrode. Therefore, this electrode is permanently named the cathode, and the electrode through which the electrons exit the tube is named the anode.
Conventional current depends not only on the direction the charge carriers move, but also the carriers' electric charge. The currents outside the device are usually carried by electrons in a metal conductor. Since electrons have a negative charge, the direction of electron flow is opposite to the direction of conventional current. Consequently, electrons leave the device through the anode and enter the device through the cathode.
Examples
The polarity of voltage on an anode with respect to an associated cathode varies depending on the device type and on its operating mode. In the following examples, the anode is negative in a device that provides power, and positive in a device that consumes power:
In a discharging battery or galvanic cell (diagram on left), the anode is the negative terminal: it is where conventional current flows into the cell. This inward current is carried externally by electrons moving outwards.
In a recharging battery, or an electrolytic cell, the anode is the positive terminal imposed by an external source of potential difference. The current through a recharging battery is opposite to the direction of current during discharge; in other words, the electrode which was the cathode during battery discharge becomes the anode while the battery is recharging.
In battery engineering, it is common to designate one electrode of a rechargeable battery the anode and the other the cathode according to the roles the electrodes play when the battery is discharged. This is despite the fact that the roles are reversed when the battery is charged. When this is done, "anode" simply designates the negative terminal of the battery and "cathode" designates the positive terminal.
In a diode, the anode is the terminal represented by the tail of the arrow symbol (flat side of the triangle), where conventional current flows into the device. Note the electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current.
In vacuum tubes or gas-filled tubes, the anode is the terminal where current enters the tube.
Etymology
The word was coined in 1834 from the Greek ἄνοδος (anodos), 'ascent', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the anode is where the current enters the electrolyte, on the East side: "ano upwards, odos a way; the way which the sun rises".
The use of 'East' to mean the 'in' direction (actually 'in' → 'East' → 'sunrise' → 'up') may appear contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "eisode" (the doorway where the current enters). His motivation for changing it to something meaning 'the East electrode' (other candidates had been "eastode", "oriode" and "anatolode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the East electrode would not have been the 'way in' any more. Therefore, "eisode" would have become inappropriate, whereas "anode" meaning 'East electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the anode's function any more, but more importantly because as we now know, the Earth's magnetic field direction on which the "anode" term is based is subject to reversals whereas the current direction convention on which the "eisode" term was based has no reason to change in the future.
Since the later discovery of the electron, an etymology has been suggested that is easier to remember and technically more durable, although historically false: anode, from the Greek anodos, 'way up', 'the way (up) out of the cell (or other device) for electrons'.
Electrolytic anode
In electrochemistry, the anode is where oxidation occurs and is the positive polarity contact in an electrolytic cell. At the anode, anions (negative ions) are forced by the electrical potential to react chemically and give off electrons (oxidation) which then flow up and into the driving circuit. Mnemonics: LEO Red Cat (Loss of Electrons is Oxidation, Reduction occurs at the Cathode), or AnOx Red Cat (Anode Oxidation, Reduction Cathode), or OIL RIG (Oxidation is Loss, Reduction is Gain of electrons), or Roman Catholic and Orthodox (Reduction – Cathode, anode – Oxidation), or LEO the lion says GER (Losing electrons is Oxidation, Gaining electrons is Reduction).
This process is widely used in metals refining. For example, in copper refining, copper anodes, an intermediate product from the furnaces, are electrolysed in an appropriate solution (such as sulfuric acid) to yield high purity (99.99%) cathodes. Copper cathodes produced using this method are also described as electrolytic copper.
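As an illustration of oxidation at the anode in this refining example, the textbook half-reactions (standard electrochemistry rather than a quotation from this article) are:

```latex
% Copper electrorefining: oxidation at the anode, reduction at the cathode
\text{anode (impure copper):}\quad \mathrm{Cu(s) \longrightarrow Cu^{2+}(aq) + 2e^{-}} \\
\text{cathode (pure copper):}\quad \mathrm{Cu^{2+}(aq) + 2e^{-} \longrightarrow Cu(s)}
```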
Historically, when non-reactive anodes were desired for electrolysis, graphite (called plumbago in Faraday's time) or platinum were chosen. They were found to be some of the least reactive materials for anodes. Platinum erodes very slowly compared to other materials, and graphite crumbles and can produce carbon dioxide in aqueous solutions but otherwise does not participate in the reaction.
Battery or galvanic cell anode
In a battery or galvanic cell, the anode is the negative electrode from which electrons flow out towards the external part of the circuit. Internally the positively charged cations are flowing away from the anode (even though it is negative and therefore would be expected to attract them, this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems); but, external to the cell in the circuit, electrons are being pushed out through the negative contact and thus through the circuit by the voltage potential as would be expected.
Battery manufacturers may regard the negative electrode as the anode, particularly in their technical literature. Though from an electrochemical viewpoint incorrect, it does resolve the problem of which electrode is the anode in a secondary (or rechargeable) cell. Using the traditional definition, the anode switches ends between charge and discharge cycles.
Vacuum tube anode
In electronic vacuum devices such as a cathode-ray tube, the anode is the positively charged electron collector. In a tube, the anode is a charged positive plate that collects the electrons emitted by the cathode through electric attraction. It also accelerates the flow of these electrons.
Diode anode
In a semiconductor diode, the anode is the P-doped layer which initially supplies holes to the junction. In the junction region, the holes supplied by the anode combine with electrons supplied from the N-doped region, creating a depleted zone. As the P-doped layer supplies holes to the depleted region, negative dopant ions are left behind in the P-doped layer ('P' for positive charge-carrier ions). This creates a base negative charge on the anode. When a positive voltage is applied to anode of the diode from the circuit, more holes are able to be transferred to the depleted region, and this causes the diode to become conductive, allowing current to flow through the circuit. The terms anode and cathode should not be applied to a Zener diode, since it allows flow in either direction, depending on the polarity of the applied potential (i.e. voltage).
Sacrificial anode
In cathodic protection, a metal anode that is more reactive to the corrosive environment than the metal system to be protected is electrically linked to the protected system. As a result, the metal anode partially corrodes or dissolves instead of the metal system. As an example, an iron or steel ship's hull may be protected by a zinc sacrificial anode, which will dissolve into the seawater and prevent the hull from being corroded. Sacrificial anodes are particularly needed for systems where a static charge is generated by the action of flowing liquids, such as pipelines and watercraft. Sacrificial anodes are also generally used in tank-type water heaters.
In 1824, to reduce the impact of this destructive electrolytic action on ships' hulls, their fastenings and underwater equipment, the scientist-engineer Humphry Davy developed the first and still most widely used marine electrolysis protection system. Davy installed sacrificial anodes made from a more electrically reactive (less noble) metal attached to the vessel hull and electrically connected to form a cathodic protection circuit.
A less obvious example of this type of protection is the process of galvanising iron. This process coats iron structures (such as fencing) with a coating of zinc metal. As long as the zinc remains intact, the iron is protected from the effects of corrosion. Inevitably, the zinc coating becomes breached, either by cracking or physical damage. Once this occurs, corrosive elements act as an electrolyte and the zinc/iron combination as electrodes. The resultant current ensures that the zinc coating is sacrificed but that the base iron does not corrode. Such a coating can protect an iron structure for a few decades, but once the protecting coating is consumed, the iron rapidly corrodes.
If, conversely, tin is used to coat steel, when a breach of the coating occurs it actually accelerates oxidation of the iron.
Impressed current anode
Another form of cathodic protection uses an impressed current anode. It is made from titanium and covered with mixed metal oxide. Unlike the sacrificial anode rod, the impressed current anode does not sacrifice its structure. This technology uses an external current provided by a DC source to create the cathodic protection. Impressed current anodes are used in larger structures such as pipelines, boats, city water towers, water heaters and more.
Related antonym
The opposite of an anode is a cathode. When the current through the device is reversed, the electrodes switch functions, so the anode becomes the cathode and the cathode becomes the anode, as long as the reversed current is applied. The exception is diodes, where electrode naming is always based on the forward current direction.
See also
Anodizing
Galvanic anode
Gas-filled tube
Primary cell
Redox (reduction–oxidation)
References
External links
How to define anode and cathode
Electrodes | Anode | [
"Chemistry"
] | 3,060 | [
"Electrochemistry",
"Electrodes"
] |
2,393 | https://en.wikipedia.org/wiki/Analog%20television | Analog television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness, colors and sound are represented by amplitude, phase and frequency of an analog signal.
Analog signals vary over a continuous range of possible values which means that electronic noise and interference may be introduced. Thus with analog, a moderately weak signal becomes snowy and subject to interference. In contrast, picture quality from a digital television (DTV) signal remains good until the signal level drops below a threshold where reception is no longer possible or becomes intermittent.
Analog television may be wireless (terrestrial television and satellite television) or can be distributed over a cable network as cable television.
All broadcast television systems used analog signals before the arrival of DTV. Motivated by the lower bandwidth requirements of compressed digital signals, beginning just after the year 2000, a digital television transition is proceeding in most countries of the world, with different deadlines for the cessation of analog broadcasts. Several countries have made the switch already, with the remaining countries still in progress mostly in Africa, Asia, and South America.
Development
The earliest systems of analog television were mechanical television systems that used spinning disks with patterns of holes punched into the disc to scan an image. A similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. The reproduced images from these mechanical systems were dim, very low resolution and flickered severely.
Analog television did not begin in earnest as an industry until the development of the cathode-ray tube (CRT), which uses a focused electron beam to trace lines across a phosphor coated surface. The electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution. Also, far less maintenance was required of an all-electronic system compared to a mechanical spinning disc system. All-electronic systems became popular with households after World War II.
Standards
Broadcasters of analog television encode their signal using different systems. The official systems of transmission were defined by the ITU in 1961 as: A, B, C, D, E, F, G, H, I, K, K1, L, M and N. These systems determine the number of scan lines, frame rate, channel width, video bandwidth, video-audio separation, and so on. A color encoding scheme (NTSC, PAL, or SECAM) could be added to the base monochrome signal. Using RF modulation the signal is then modulated onto a very high frequency (VHF) or ultra high frequency (UHF) carrier wave. Each frame of a television image is composed of scan lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The process repeats and the next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal.
The first commercial television systems were black-and-white; the beginning of color television was in the 1950s.
A practical television system needs to take luminance, chrominance (in a color system), synchronization (horizontal and vertical), and audio signals, and broadcast them over a radio transmission. The transmission system must include a means of television channel selection.
Analog broadcast television systems come in a variety of frame rates and resolutions. Further differences exist in the frequency and modulation of the audio carrier. The monochrome combinations still existing in the 1950s were standardized by the International Telecommunication Union (ITU) as capital letters A through N. When color television was introduced, the chrominance information was added to the monochrome signals in a way that black and white televisions ignore. In this way backward compatibility was achieved.
There are three standards for the way the additional color information can be encoded and transmitted. The first was the American NTSC system. The European and Australian PAL and the French and former Soviet Union SECAM standards were developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is similar to the NTSC systems. SECAM, though, uses a different modulation approach than PAL or NTSC. PAL had a late evolution called PALplus, allowing widescreen broadcasts while remaining fully compatible with existing PAL equipment.
In principle, all three color encoding systems can be used with any scan line/frame rate combination. Therefore, in order to describe a given signal completely, it is necessary to quote the color system plus the broadcast standard as a capital letter. For example, the United States, Canada, Mexico and South Korea used (or use) NTSC-M, Japan used NTSC-J, the UK used PAL-I, France used SECAM-L, much of Western Europe and Australia used (or use) PAL-B/G, most of Eastern Europe uses SECAM-D/K or PAL-D/K and so on.
Not all of the possible combinations exist. NTSC is only used with system M, even though there were experiments with NTSC-A (405 line) in the UK and NTSC-N (625 line) in part of South America. PAL is used with a variety of 625-line standards (B, G, D, K, I, N) but also with the North American 525-line standard, accordingly named PAL-M. Likewise, SECAM is used with a variety of 625-line standards.
For this reason, many people refer to any 625/25 type signal as PAL and to any 525/30 signal as NTSC, even when referring to digital signals; for example, on DVD-Video, which does not contain any analog color encoding, and thus no PAL or NTSC signals at all.
Although a number of different broadcast television systems are in use worldwide, the same principles of operation apply.
Displaying an image
A cathode-ray tube (CRT) television displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. At the end of each line, the beam returns to the start of the next line; at the end of the last line, the beam returns to the beginning of the first line at the top of the screen. As it passes each point, the intensity of the beam is varied, varying the luminance of that point. A color television system is similar except there are three beams that scan together and an additional signal known as chrominance controls the color of the spot.
When analog television was developed, no affordable technology for storing video signals existed; the luminance signal had to be generated and transmitted at the same time at which it is displayed on the CRT. It was therefore essential to keep the raster scanning in the camera (or other device for producing the signal) in exact synchronization with the scanning in the television.
The physics of the CRT require that a finite time interval be allowed for the spot to move back to the start of the next line (horizontal retrace) or the start of the screen (vertical retrace). The timing of the luminance signal must allow for this.
The human eye has a characteristic called phi phenomenon. Quickly displaying successive scan images creates the illusion of smooth motion. Flickering of the image can be partially solved using a long persistence phosphor coating on the CRT so that successive images fade slowly. However, slow phosphor has the negative side-effect of causing image smearing and blurring when there is rapid on-screen motion occurring.
The maximum frame rate depends on the bandwidth of the electronics and the transmission system, and the number of horizontal scan lines in the image. A frame rate of 25 or 30 hertz is a satisfactory compromise, while the process of interlacing two video fields of the picture per frame is used to build the image. This process doubles the apparent number of video frames per second and further reduces flicker and other defects in transmission.
Receiving signals
The television system for each country will specify a number of television channels within the UHF or VHF frequency ranges. A channel actually consists of two signals: the picture information is transmitted using amplitude modulation on one carrier frequency, and the sound is transmitted with frequency modulation at a frequency at a fixed offset (typically 4.5 to 6 MHz) from the picture signal.
The channel frequencies chosen represent a compromise between allowing enough bandwidth for video (and hence satisfactory picture resolution), and allowing enough channels to be packed into the available frequency band. In practice a technique called vestigial sideband is used to reduce the channel spacing, which would be nearly twice the video bandwidth if pure AM was used.
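A worked example makes the saving concrete. The figures below are the ones commonly quoted for System M and are assumptions not stated in the text: double-sideband AM of a roughly 4.2 MHz video baseband would occupy about 8.4 MHz before the sound is even added, whereas vestigial-sideband transmission keeps the full upper sideband and only a small vestige of the lower one, so picture and sound fit in a 6 MHz channel.

```latex
% Approximate 6 MHz channel budget for System M (illustrative figures):
\underbrace{1.25~\mathrm{MHz}}_{\text{lower-sideband vestige}}
+ \underbrace{4.5~\mathrm{MHz}}_{\text{vision carrier to sound carrier}}
+ \underbrace{0.25~\mathrm{MHz}}_{\text{guard above sound carrier}}
= 6~\mathrm{MHz}
```

The 4.5 MHz term is the picture-to-sound carrier offset mentioned above for this system.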
Signal reception is invariably done via a superheterodyne receiver: the first stage is a tuner which selects a television channel and frequency-shifts it to a fixed intermediate frequency (IF). The signal amplifier performs amplification to the IF stages from the microvolt range to fractions of a volt.
Extracting the sound
At this point the IF signal consists of a video carrier signal at one frequency and the sound carrier at a fixed offset in frequency. A demodulator recovers the video signal. Also at the output of the same demodulator is a new frequency modulated sound carrier at the offset frequency. In some sets made before 1948, this was filtered out, and the sound IF of about 22 MHz was sent to an FM demodulator to recover the basic sound signal. In newer sets, this new carrier at the offset frequency was allowed to remain as intercarrier sound, and it was sent to an FM demodulator to recover the basic sound signal. One particular advantage of intercarrier sound is that when the front panel fine tuning knob is adjusted, the sound carrier frequency does not change with the tuning, but stays at the above-mentioned offset frequency. Consequently, it is easier to tune the picture without losing the sound.
So the FM sound carrier is then demodulated, amplified, and used to drive a loudspeaker. Until the advent of the NICAM and MTS systems, television sound transmissions were monophonic.
Structure of a video signal
The video carrier is demodulated to give a composite video signal containing luminance, chrominance and synchronization signals. The result is identical to the composite video format used by analog video devices such as VCRs or CCTV cameras. To ensure good linearity and thus fidelity, consistent with affordable manufacturing costs of transmitters and receivers, the video carrier is never modulated to the extent that it is shut off altogether. When intercarrier sound was introduced later in 1948, not completely shutting off the carrier had the side effect of allowing intercarrier sound to be economically implemented.
Each line of the displayed image is transmitted using a signal with the structure described below. The same basic format (with minor differences mainly related to timing and the encoding of color) is used for PAL, NTSC, and SECAM television systems. A monochrome signal is identical to a color one, with the exception that the color-related elements (the colorburst and the chrominance signal) are not present.
The front porch is a brief (about 1.5 microsecond) period inserted between the end of each transmitted line of picture and the leading edge of the next line's sync pulse. Its purpose was to allow voltage levels to stabilise in older televisions, preventing interference between picture lines. The front porch is the first component of the horizontal blanking interval which also contains the horizontal sync pulse and the back porch.
The back porch is the portion of each scan line between the end (rising edge) of the horizontal sync pulse and the start of active video. It is used to restore the black level (300 mV) reference in analog video. In signal processing terms, it compensates for the fall time and settling time following the sync pulse.
In color television systems such as PAL and NTSC, this period also includes the colorburst signal. In the SECAM system, it contains the reference subcarrier for each consecutive color difference signal in order to set the zero-color reference.
In some professional systems, particularly satellite links between locations, the digital audio is embedded within the line sync pulses of the video signal, to save the cost of renting a second channel. The name for this proprietary system is Sound-in-Syncs.
Monochrome video signal extraction
The luminance component of a composite video signal varies between 0 V and approximately 0.7 V above the black level. In the NTSC system, there is a blanking signal level used during the front porch and back porch, and a black signal level 75 mV above it; in PAL and SECAM these are identical.
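As a rough sketch of these levels (not taken from any particular specification), the following maps a normalized luminance value onto a composite-signal voltage using the figures quoted above: blanking at 0.3 V, an NTSC black setup of 75 mV above blanking, and a peak-white level of about 1 V.

```python
def luma_to_volts(y, ntsc_setup=True):
    # y is normalized luminance: 0 = black, 1 = peak white.
    black = 0.3 + (0.075 if ntsc_setup else 0.0)   # PAL/SECAM: black = blanking
    white = 1.0
    return black + y * (white - black)

print(luma_to_volts(0.0), luma_to_volts(1.0))      # 0.375 V and 1.0 V (NTSC)
print(luma_to_volts(0.0, ntsc_setup=False))        # 0.3 V (black equals blanking)
```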
In a monochrome receiver, the luminance signal is amplified to drive the control grid in the electron gun of the CRT. This changes the intensity of the electron beam and therefore the brightness of the spot being scanned. Brightness and contrast controls determine the DC shift and amplification, respectively.
Color video signal extraction
U and V signals
A color signal conveys picture information for each of the red, green, and blue components of an image. However, these are not simply transmitted as three separate signals: such a scheme would not be compatible with monochrome receivers, an important consideration when color broadcasting was first introduced, and it would also occupy three times the bandwidth of existing television, requiring a decrease in the number of television channels available.
Instead, the RGB signals are converted into YUV form, where the Y signal represents the luminance of the colors in the image. Because rendering colors as shades of gray of the correct lightness is precisely what monochrome film and television systems do, the Y signal is ideal for transmission as the luminance signal. This ensures a monochrome receiver will display a correct picture in black and white, where a given color is reproduced by a shade of gray that correctly reflects how light or dark the original color is.
The U and V signals are color difference signals. The U signal is the difference between the B signal and the Y signal, also known as B minus Y (B-Y), and the V signal is the difference between the R signal and the Y signal, also known as R minus Y (R-Y). The U signal then represents how purplish-blue or its complementary color, yellowish-green, the color is, and the V signal how purplish-red or its complementary, greenish-cyan, it is. The advantage of this scheme is that the U and V signals are zero when the picture has no color content. Since the human eye is more sensitive to detail in luminance than in color, the U and V signals can be transmitted with reduced bandwidth with acceptable results.
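A minimal sketch of this conversion, assuming the usual luma weights of 0.299, 0.587, and 0.114 and ignoring the scaling factors applied to U and V before transmission, might look like this:

```python
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (weights assumed here)
    u = b - y                               # B minus Y color difference
    v = r - y                               # R minus Y color difference
    return y, u, v

# A pure gray (r = g = b) gives u = v = 0, so no chroma needs to be sent,
# which is why the color signal vanishes on black-and-white scenes.
print(rgb_to_yuv(0.5, 0.5, 0.5))   # (0.5, 0.0, 0.0)
```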
In the receiver, a single demodulator can extract an additive combination of U plus V. An example is the X demodulator used in the X/Z demodulation system. In that same system, a second demodulator, the Z demodulator, extracts a different additive combination of U plus V. Receivers usually used two, and sometimes three, such demodulators; further matrixing of their outputs yielded the three color-difference signals (R-Y), (B-Y), and (G-Y).
The R, G, and B signals in the receiver needed for the display device (CRT, Plasma display, or LCD display) are electronically derived by matrixing as follows: R is the additive combination of (R-Y) with Y, G is the additive combination of (G-Y) with Y, and B is the additive combination of (B-Y) with Y. All of this is accomplished electronically. It can be seen that in the combining process, the low-resolution portion of the Y signals cancel out, leaving R, G, and B signals able to render a low-resolution image in full color. However, the higher resolution portions of the Y signals do not cancel out, and so are equally present in R, G, and B, producing the higher-resolution image detail in monochrome, although it appears to the human eye as a full-color and full-resolution picture.
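A minimal sketch of that matrixing, using the same assumed luma weights as above, recovers R and B by adding Y back to the transmitted color differences and derives (G-Y) from the other two, since the weighted sum of the three color differences is zero:

```python
def yuv_to_rgb(y, u, v):
    r = y + v                                   # (R - Y) + Y
    b = y + u                                   # (B - Y) + Y
    # 0.299(R-Y) + 0.587(G-Y) + 0.114(B-Y) = 0, so (G-Y) follows from the rest.
    g = y - (0.299 * v + 0.114 * u) / 0.587
    return r, g, b

# Round trip of an arbitrary color (values normalized to 0..1):
r, g, b = 0.9, 0.4, 0.1
y = 0.299 * r + 0.587 * g + 0.114 * b
print(yuv_to_rgb(y, b - y, r - y))   # approximately (0.9, 0.4, 0.1)
```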
NTSC and PAL systems
In the NTSC and PAL color systems, U and V are transmitted by using quadrature amplitude modulation of a subcarrier. This kind of modulation applies two independent signals to one subcarrier, with the idea that both signals will be recovered independently at the receiving end. For NTSC, the subcarrier is at 3.58 MHz. For the PAL system it is at 4.43 MHz. The subcarrier itself is not included in the modulated signal (suppressed carrier), it is the subcarrier sidebands that carry the U and V information. The usual reason for using suppressed carrier is that it saves on transmitter power. In this application a more important advantage is that the color signal disappears entirely in black and white scenes. The subcarrier is within the bandwidth of the main luminance signal and consequently can cause undesirable artifacts on the picture, all the more noticeable in black and white receivers.
A small sample of the subcarrier, the colorburst, is included in the horizontal blanking portion, which is not visible on the screen. This is necessary to give the receiver a phase reference for the modulated signal. Under quadrature amplitude modulation the modulated chrominance signal changes phase as compared to its subcarrier and also changes amplitude. The chrominance amplitude (when considered together with the Y signal) represents the approximate saturation of a color, and the chrominance phase against the subcarrier reference approximately represents the hue of the color. For particular test colors found in the test color bar pattern, exact amplitudes and phases are sometimes defined for test and troubleshooting purposes only.
Due to the nature of the quadrature amplitude modulation process that created the chrominance signal, at certain times, the signal represents only the U signal, and 70 nanoseconds (NTSC) later, it represents only the V signal. About 70 nanoseconds later still, -U, and another 70 nanoseconds, -V. So to extract U, a synchronous demodulator is utilized, which uses the subcarrier to briefly gate the chroma every 280 nanoseconds, so that the output is only a train of discrete pulses, each having an amplitude that is the same as the original U signal at the corresponding time. In effect, these pulses are discrete-time analog samples of the U signal. The pulses are then low-pass filtered so that the original analog continuous-time U signal is recovered. For V, a 90-degree shifted subcarrier briefly gates the chroma signal every 280 nanoseconds, and the rest of the process is identical to that used for the U signal.
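The gating described above is a form of synchronous sampling; multiplying the chroma by the regenerated subcarrier and low-pass filtering is mathematically equivalent and easier to show compactly. The sketch below uses the NTSC subcarrier frequency, but the sample rate, test values, and the moving-average stand-in for the low-pass filter are arbitrary choices for the illustration.

```python
import numpy as np

fsc = 3.579545e6                       # NTSC color subcarrier, Hz
fs = 8 * fsc                           # sample rate chosen for the sketch
t = np.arange(0, 20e-6, 1 / fs)        # a short stretch of one scan line

U = 0.2 * np.ones_like(t)              # arbitrary test color-difference values
V = -0.1 * np.ones_like(t)

# Quadrature amplitude modulation: two signals share one suppressed subcarrier.
chroma = U * np.sin(2 * np.pi * fsc * t) + V * np.cos(2 * np.pi * fsc * t)

def lowpass(x, n=64):
    # Crude moving-average stand-in for the receiver's low-pass filter.
    return np.convolve(x, np.ones(n) / n, mode="same")

# Synchronous demodulation against the subcarrier regenerated from the burst.
u_rec = 2 * lowpass(chroma * np.sin(2 * np.pi * fsc * t))
v_rec = 2 * lowpass(chroma * np.cos(2 * np.pi * fsc * t))
print(u_rec[len(t) // 2], v_rec[len(t) // 2])   # approximately 0.2 and -0.1
```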
Gating at any other time than those times mentioned above will yield an additive mixture of any two of U, V, -U, or -V. One of these off-axis (that is, of the U and V axis) gating methods is called I/Q demodulation. Another much more popular off-axis scheme was the X/Z demodulation system. Further matrixing recovered the original U and V signals. This scheme was actually the most popular demodulator scheme throughout the 1960s.
The above process uses the subcarrier. But as previously mentioned, it was deleted before transmission, and only the chroma is transmitted. Therefore, the receiver must reconstitute the subcarrier. For this purpose, a short burst of the subcarrier, known as the colorburst, is transmitted during the back porch (re-trace blanking period) of each scan line. A subcarrier oscillator in the receiver locks onto this signal (see phase-locked loop) to achieve a phase reference, resulting in the oscillator producing the reconstituted subcarrier.
NTSC uses this process unmodified. Unfortunately, this often results in poor color reproduction due to phase errors in the received signal, caused sometimes by multipath, but mostly by poor implementation at the studio end. With the advent of solid-state receivers, cable TV, and digital studio equipment for conversion to an over-the-air analog signal, these NTSC problems have been largely fixed, leaving operator error at the studio end as the sole color rendition weakness of the NTSC system. In any case, the PAL D (delay) system mostly corrects these kinds of errors by reversing the phase of the signal on each successive line, and averaging the results over pairs of lines. This process is achieved by the use of a 1H (where H = horizontal scan frequency) duration delay line. Phase shift errors between successive lines are therefore canceled out and the wanted signal amplitude is increased when the two in-phase (coincident) signals are re-combined.
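A small sketch of why this averaging works: treat the chroma as the complex phasor U + jV, apply the same phase error to two successive lines (the second transmitted with V inverted), then re-invert and average in the receiver. The error rotates out, leaving the hue intact and only a slight loss of saturation. The color values and error angle below are arbitrary.

```python
import numpy as np

def pal_line_average(U, V, phase_error_deg):
    err = np.deg2rad(phase_error_deg)
    line_n  = (U + 1j * V) * np.exp(1j * err)   # normal line, received with a phase error
    line_n1 = (U - 1j * V) * np.exp(1j * err)   # next line: V is transmitted inverted
    # The receiver re-inverts V on the alternate line (complex conjugate) and
    # averages the two lines via the 1H delay line.
    avg = (line_n + np.conj(line_n1)) / 2
    return avg.real, avg.imag

print(pal_line_average(0.3, 0.4, 10.0))
# Roughly (0.295, 0.394): the U:V ratio (hue) is unchanged, while the
# amplitude (saturation) drops by cos(10 degrees).
```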
NTSC is more spectrum efficient than PAL, giving more picture detail for a given bandwidth. This is because sophisticated comb filters in receivers are more effective with NTSC's 4 color frame sequence compared to PAL's 8-field sequence. However, in the end, the larger channel width of most PAL systems in Europe still gives PAL systems the edge in transmitting more picture detail.
SECAM system
In the SECAM television system, U and V are transmitted on alternate lines, using simple frequency modulation of two different color subcarriers.
In some analog color CRT displays, starting in 1956, the brightness control signal (luminance) is fed to the cathode connections of the electron guns, and the color difference signals (chrominance signals) are fed to the control grid connections. This simple CRT matrix mixing technique was superseded in later solid-state designs, which returned to the original signal-processing matrixing method used in the 1954 and 1955 color TV receivers.
Synchronization
Synchronizing pulses added to the video signal at the end of every scan line and video frame ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal so that the image can be reconstructed on the receiver screen.
A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync.
Horizontal synchronization
The horizontal sync pulse separates the scan lines. The horizontal sync signal is a single short pulse that indicates the start of every line. The rest of the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next horizontal or vertical synchronization pulse.
The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 μs pulse at 0 V. In the 625-line PAL system the pulse is 4.7 μs at 0 V. This is lower than the amplitude of any video signal (blacker than black) so it can be detected by the level-sensitive sync separator circuit of the receiver.
Two timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line.
Vertical synchronization
Vertical synchronization separates the video fields. In PAL and NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync pulses are made by prolonging the length of horizontal sync pulses through almost the entire length of the scan line.
The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace; it also indicates whether each field represents even or odd lines in interlaced systems (depending on whether it begins at the start of a horizontal line, or midway through).
The format of such a signal in 525-line NTSC is:
pre-equalizing pulses (6 to start scanning odd lines, 5 to start scanning even lines)
long-sync pulses (5 pulses)
post-equalizing pulses (5 to start scanning odd lines, 4 to start scanning even lines)
Each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 μs at 0 V, followed by 30 μs at 0.3 V. Each long sync pulse consists of an equalizing pulse with timings inverted: 30 μs at 0 V, followed by 2 μs at 0.3 V.
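A toy sketch of this sequence, using only the pulse counts and half-line timings given above (exact tolerances and field-to-field details are omitted):

```python
def vblank_sequence(odd_field=True):
    # Pulse counts as given in the text above for 525-line NTSC.
    pre = 6 if odd_field else 5
    post = 5 if odd_field else 4
    return ["equalizing"] * pre + ["long-sync"] * 5 + ["equalizing"] * post

for slot, kind in enumerate(vblank_sequence(odd_field=True)):
    low_us, high_us = (2, 30) if kind == "equalizing" else (30, 2)
    print(f"half-line {slot:2d}: {kind:10s} {low_us} us at 0 V, {high_us} us at 0.3 V")
```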
In video production and computer graphics, changes to the image are often performed during the vertical blanking interval to avoid visible discontinuity of the image. If the image in the framebuffer is instead updated while the display is being refreshed, the display shows a mishmash of both frames, producing page tearing partway down the image.
Horizontal and vertical hold
The sweep (or deflection) oscillators were designed to run without a signal from the television station (or VCR, computer, or other composite video source). This allowed the television receiver to display a raster so that an image could be presented while the antenna was being positioned. With sufficient signal strength, the receiver's sync separator circuit would split timebase pulses from the incoming video and use them to reset the horizontal and vertical oscillators at the appropriate times to synchronize with the signal from the station.
The free-running oscillation of the horizontal circuit is especially critical, as the horizontal deflection circuits typically power the flyback transformer (which provides acceleration potential for the CRT) as well as the filaments for the high voltage rectifier tube and sometimes the filament(s) of the CRT itself. Without the operation of the horizontal oscillator and output stages in these television receivers, there would be no illumination of the CRT's face.
The lack of precision timing components in early equipment meant that the timebase circuits occasionally needed manual adjustment. If their free-run frequencies were too far from the actual line and field rates, the circuits would not be able to follow the incoming sync signals. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen.
Older analog television receivers often provide manual controls to adjust horizontal and vertical timing. The adjustment takes the form of horizontal hold and vertical hold controls, usually on the front panel along with other common controls. These adjust the free-run frequencies of the corresponding timebase oscillators.
A slowly rolling vertical picture demonstrates that the vertical oscillator is nearly synchronized with the television station but is not locking to it, often due to a weak signal or a failure in the sync separator stage not resetting the oscillator.
Horizontal sync errors cause the image to be torn diagonally and repeated across the screen as if it were wrapped around a screw or a barber's pole; the greater the error, the more copies of the image will be seen at once wrapped around the barber pole.
By the early 1980s the efficacy of the synchronization circuits, plus the inherent stability of the sets' oscillators, had been improved to the point where these controls were no longer necessary. Integrated circuits that eliminated the horizontal hold control had begun to appear as early as 1969.
The final generations of analog television receivers used IC-based designs where the receiver's timebases were derived from accurate crystal oscillators. With these sets, adjustment of the free-running frequency of either sweep oscillator was unnecessary and unavailable.
Horizontal and vertical hold controls were rarely used in CRT-based computer monitors, as the quality and consistency of components were quite high by the advent of the computer age, but might be found on some composite monitors used with the 1970s–80s home or personal computers.
Other technical information
Components of a television system
The tuner is the stage that, with the aid of an antenna, isolates the television signals received over the air. There are two types of tuners in analog television: VHF and UHF. The VHF tuner selects a VHF television frequency, which consists of a 4 MHz video bandwidth and about 100 kHz of audio bandwidth. It then amplifies the signal and converts it to a 45.75 MHz amplitude-modulated video intermediate frequency (IF) and a 41.25 MHz frequency-modulated audio IF carrier.
The IF amplifiers are centered at 44 MHz for optimal frequency transference of the audio and video carriers. Like radio, television has automatic gain control (AGC). This controls the gain of the IF amplifier stages and the tuner.
The video amplifier and output stage are implemented using a pentode or a power transistor. The filter and demodulator separate the 45.75 MHz video from the 41.25 MHz audio, then use a simple diode to detect the video signal. After the video detector, the video is amplified and sent to the sync separator and then to the picture tube.
The audio signal goes to a 4.5 MHz amplifier, which prepares it for the 4.5 MHz detector; it then passes through a 4.5 MHz IF transformer to the detector. In television there are two ways of detecting FM signals. One is the ratio detector, which is simple but very hard to align. The other is the quadrature detector, invented in 1954; the first tube designed for this purpose was the 6BN6 type. It is easy to align and simple in circuitry, and the design was so good that it is still used today in integrated-circuit form. After the detector, the signal goes to the audio amplifier.
Image synchronization is achieved by transmitting negative-going pulses. The horizontal sync signal is a single short pulse that indicates the start of every line. Two timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line.
The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The vertical sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace.
A sync separator circuit detects the sync voltage levels and extracts and conditions signals that the horizontal and vertical oscillators can use to keep in sync with the video. It also forms the AGC voltage.
The horizontal and vertical oscillators form the raster on the CRT. They are driven by the sync separator. There are many ways to create these oscillators. The earliest is the thyratron oscillator. Although it is known to drift, it produces a sawtooth wave so nearly ideal that no linearity control is needed. This oscillator was designed for electrostatic-deflection CRTs but also found some use in electromagnetically deflected CRTs. The next oscillator developed was the blocking oscillator, which uses a transformer to create a sawtooth wave. This was only used for a brief period and never became very popular. Finally, the multivibrator was probably the most successful. It needed more adjustment than the other oscillators, but it is very simple and effective. This oscillator was so popular that it was used from the early 1950s onward.
Two oscillator amplifiers are needed. The vertical amplifier directly drives the yoke. Since it operates at 50 or 60 Hz and drives an electromagnet, it is similar to an audio amplifier. Because of the rapid deflection required, the horizontal oscillator requires a high-power flyback transformer driven by a high-powered tube or transistor. Additional windings on this flyback transformer typically power other parts of the system.
Loss of horizontal synchronization usually results in a scrambled and unwatchable picture; loss of vertical synchronization produces an image rolling up or down the screen.
Timebase circuits
In an analog receiver with a CRT display sync pulses are fed to horizontal and vertical timebase circuits (commonly called sweep circuits in the United States), each consisting of an oscillator and an amplifier. These generate modified sawtooth and parabola current waveforms to scan the electron beam. Engineered waveform shapes are necessary to make up for the distance variations from the electron beam source and the screen surface. The oscillators are designed to free-run at frequencies very close to the field and line rates, but the sync pulses cause them to reset at the beginning of each scan line or field, resulting in the necessary synchronization of the beam sweep with the originating signal. The output waveforms from the timebase amplifiers are fed to the horizontal and vertical deflection coils wrapped around the CRT tube. These coils produce magnetic fields proportional to the changing current, and these deflect the electron beam across the screen.
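A simplified sketch of this "free-running but reset by sync" behaviour: a sawtooth phase accumulator advances at its own, slightly wrong, natural period and is snapped back to zero by each incoming horizontal sync edge. The periods and sample rate below are illustrative, and the parabolic correction mentioned above is omitted.

```python
import numpy as np

line_period = 63.5        # nominal NTSC horizontal period, microseconds
fs = 10.0                 # samples per microsecond, chosen for the sketch
free_run_period = 65.0    # oscillator's own period: the "horizontal hold" is slightly off

t = np.arange(0.0, 2000.0, 1.0 / fs)
sync = (t % line_period) < 4.7            # incoming horizontal sync pulses

ramp = np.zeros_like(t)
phase = 0.0
for i in range(1, len(t)):
    phase += (1.0 / fs) / free_run_period  # free-running sawtooth advance
    if phase >= 1.0:
        phase -= 1.0                       # natural flyback
    if sync[i] and not sync[i - 1]:        # leading edge of sync resets the sweep
        phase = 0.0
    ramp[i] = phase
# "ramp" now repeats at the transmitted line rate even though the oscillator's
# natural period is slightly long; without sync it would drift by 1.5 us per line.
```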
In the 1950s, the power for these circuits was derived directly from the mains supply. A simple circuit consisted of a series voltage dropper resistance and a rectifier. This avoided the cost of a large high-voltage mains supply (50 or 60 Hz) transformer. It was inefficient and produced a lot of heat.
In the 1960s, semiconductor technology was introduced into timebase circuits. During the late 1960s in the UK, synchronous (with the scan line rate) power generation was introduced into solid state receiver designs.
In the UK, the use of the simple (50 Hz) types of power circuits was discontinued as thyristor-based switching circuits were introduced. The design changes were driven by electricity supply contamination problems arising from EMI, and by supply loading issues due to energy being drawn from only the positive half cycle of the mains supply waveform.
CRT flyback power supply
Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a comparatively low-voltage DC power supply. However, the anode connection for a cathode-ray tube requires a very high voltage (typically 10–30 kV) for correct operation.
This voltage is not directly produced by the main power supply circuitry; instead, the receiver makes use of the horizontal scanning circuitry. Direct current (DC) is switched through the line output transformer, and alternating current (AC) is induced into the scan coils. At the end of each horizontal scan line, the magnetic field that has built up in both the transformer and the scan coils is a store of electromagnetic energy that can be captured as the field collapses. The brief reverse-flow current (lasting about 10% of the line scan time) from both the line output transformer and the horizontal scan coils is discharged into the primary winding of the flyback transformer through a rectifier which blocks this counter-electromotive force. A small-value capacitor is connected across the scan-switching device; it tunes the circuit inductances to resonate at a much higher frequency, lengthening the flyback time from the extremely rapid decay that would otherwise result if the windings were electrically isolated during this short period. One of the secondary windings on the flyback transformer then feeds this brief high-voltage pulse to a Cockcroft–Walton voltage multiplier, producing the required high-voltage supply. A flyback converter is a power supply circuit operating on similar principles.
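As an arithmetic illustration of the resonant retrace described above, the sketch below works backwards from a retrace lasting roughly 10% of the line period to the tuning capacitance required. The deflection inductance is an assumed round number, not a value from any actual receiver.

```python
import math

line_period = 63.5e-6                   # s, NTSC horizontal scan period
retrace_time = 0.1 * line_period        # about 10% of the line, per the text

# During retrace the deflection inductance and the tuning capacitor resonate,
# and retrace lasts half a resonant cycle: t_retrace = pi * sqrt(L * C).
L_deflection = 1.0e-3                   # H, assumed yoke plus transformer inductance
C_tuning = (retrace_time / math.pi) ** 2 / L_deflection
print(f"retrace {retrace_time * 1e6:.1f} us -> C of roughly {C_tuning * 1e9:.1f} nF")
```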
A typical modern design incorporates the flyback transformer and rectifier circuitry into a single unit with a captive output lead, known as a diode split line output transformer or an Integrated High Voltage Transformer (IHVT), so that all high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a well-insulated high-voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal scanning allows reasonably small components to be used.
Transition to digital
In many countries, over-the-air broadcast television of analog audio and analog video signals has been discontinued to allow the re-use of the television broadcast radio spectrum for other services.
The first country to make a wholesale switch to digital over-the-air (terrestrial television) broadcasting was Luxembourg in 2006, followed later in 2006 by the Netherlands. The Digital television transition in the United States for high-powered transmission was completed on 12 June 2009, the date that the Federal Communications Commission (FCC) set. Almost two million households could no longer watch television because they had not prepared for the transition. The switchover had been delayed by the DTV Delay Act. While the majority of the viewers of over-the-air broadcast television in the U.S. watch full-power stations (which number about 1800), there are three other categories of television stations in the U.S.: low-power broadcasting stations, class A stations, and television translator stations. These were given later deadlines.
In Japan, the switch to digital began in northeastern Ishikawa Prefecture on 24 July 2010 and ended in 43 of the country's 47 prefectures (including the rest of Ishikawa) on 24 July 2011, but in Fukushima, Iwate, and Miyagi prefectures, the conversion was delayed to 31 March 2012, due to complications from the 2011 Tōhoku earthquake and tsunami and its related nuclear accidents.
In Canada, most of the larger cities turned off analog broadcasts on 31 August 2011.
China had scheduled to end analog broadcasting between 2015 and 2021.
Brazil switched to digital television on 2 December 2007 in São Paulo and planned to end analog broadcasting nationwide by 30 June 2016. However, the Ministry of Communications announced in 2012 that the deadline would be delayed. As of 2024, Brazil is in the process of implementing its next-generation digital television system, known as TV 3.0. In July 2024, ATSC 3.0 standard was officially selected for the country's next-generation digital television system. The transition to TV 3.0 is expected to begin in 2025, with initial deployments planned for key cities such as São Paulo, Rio de Janeiro, and Brasília.
In Malaysia, the Malaysian Communications and Multimedia Commission advertised for tender bids to be submitted in the third quarter of 2009 for the 470 through 742 MHz UHF allocation, to enable Malaysia's broadcast system to move into DTV. The new broadcast band allocation would result in Malaysia's having to build an infrastructure for all broadcasters, using a single digital terrestrial television broadcast channel. Large portions of Malaysia are covered by television broadcasts from Singapore, Thailand, Brunei, and Indonesia (from Borneo and Batam). Starting from 1 November 2019, all regions in Malaysia were no longer using the analog system after the states of Sabah and Sarawak finally turned it off on 31 October 2019.
In Singapore, digital television under DVB-T2 began on 16 December 2013. The switchover was delayed many times until analog TV was switched off at midnight on 2 January 2019.
In the Philippines, the National Telecommunications Commission required all broadcasting companies to end analog broadcasting on 31 December 2015 at 11:59 p.m. Due to delay of the release of the implementing rules and regulations for digital television broadcast, the target date was moved to 2020. Full digital broadcast was expected in 2021 and all of the analog TV services were to be shut down by the end of 2023. However, in February 2023, the NTC postponed the ASO/DTV transition to 2025 due to many provincial television stations not being ready to start their digital TV transmissions.
In the Russian Federation, the Russian Television and Radio Broadcasting Network (RTRS) disabled analog broadcasting of federal channels in five stages, shutting down broadcasting in multiple federal subjects at each stage. The first region to have analog broadcasting disabled was Tver Oblast on 3 December 2018, and the switchover was completed on 14 October 2019. During the transition, DVB-T2 receivers and monetary compensations for purchasing of terrestrial or satellite digital TV reception equipment were provided to disabled people, World War II veterans, certain categories of retirees and households with income per member below living wage.
See also
Amateur television
Narrow-bandwidth television
Overscan
Slow-scan television
Glossary of video terms
Notes
References
External links
Video signal measurement and generation
Television synchronisation
Video broadcast standard frequencies and country listings
EDN magazine describing design of a 1958 transistorised television receiver
Designing the color television signal in the early 1950s as described by two engineers working directly with the NTSC
Television technology
Television terminology | Analog television | [
"Technology"
] | 8,641 | [
"Information and communications technology",
"Television technology"
] |
2,400 | https://en.wikipedia.org/wiki/AMD | Advanced Micro Devices, Inc. (AMD) is an American multinational corporation and technology company headquartered in Santa Clara, California, with significant operations in Austin, Texas. AMD is a fabless hardware company that designs and develops central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), system-on-chip (SoC) products, and high-performance computer solutions. AMD serves a wide range of business and consumer markets, including gaming, data centers, artificial intelligence (AI), and embedded systems.
AMD's main products include microprocessors, motherboard chipsets, embedded processors, and graphics processors for servers, workstations, personal computers, and embedded system applications. The company has also expanded into new markets, such as the data center, gaming, and high-performance computing markets. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company later outsourced its manufacturing, after GlobalFoundries was spun off in 2009. Through its Xilinx acquisition in 2022, AMD offers field-programmable gate array (FPGA) products.
AMD was founded in 1969 by Jerry Sanders and a group of other technology professionals. The company's early products were primarily memory chips and other components for computers. In 1975, AMD entered the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, it experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors.
In the late 2010s, AMD regained market share by pursuing a penetration pricing strategy and building on the success of its Ryzen processors, which were considerably more competitive with Intel microprocessors in terms of performance whilst offering attractive pricing.
History
Foundational years
Advanced Micro Devices was formally incorporated by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor, on May 1, 1969. Sanders, an electrical engineer who was the director of marketing at Fairchild, had, like many Fairchild executives, grown frustrated with the increasing lack of support, opportunity, and flexibility within the company. He later decided to leave to start his own semiconductor company, following the footsteps of Robert Noyce (developer of the first silicon integrated circuit at Fairchild in 1959) and Gordon Moore, who together founded the semiconductor company Intel in July 1968.
In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid.
In November 1969, the company manufactured its first product: the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its bestselling product in 1971 was the Am2505, the fastest multiplier available.
In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year-end the company's total annual sales reached US$4.6 million.
AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09.
Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, which was granted a copyright license to the microcode in its microprocessors and peripherals, effective October 1976.
In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors.
Total sales in fiscal year 1978 topped $100 million, and in 1979, AMD debuted on the New York Stock Exchange. In 1979, production also began on AMD's new semiconductor fabrication plant in Austin, Texas; the company already had overseas assembly facilities in Penang and Manila, and began construction on a fabrication plant in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation.
Intel partnership
Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel would also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips. However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively canceled.
Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984, its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips.
The company continued to spend greatly on research and development, and created the world's first 512K EPROM in 1984. That year, AMD was listed in the book The 100 Best Companies to Work for in America, and later made the Fortune 500 list for the first time in 1985.
By mid-1985, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the United States. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chipset per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put in place to prevent predatory Japanese pricing. During this time, AMD withdrew from the DRAM market, and made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips.
AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. Beginning in 1986, AMD embraced the perceived shift toward RISC with their own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s. Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel.
AMD had a large, successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. In December 2005, AMD divested itself of Spansion to focus on the microprocessor market, and Spansion went public in an IPO.
2006–present
On July 24, 2006, AMD announced its acquisition of the Canadian 3D graphics card company ATI Technologies. AMD paid $4.3 billion and 58 million shares of its capital stock, for approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name.
In October 2008, AMD announced plans to spin off manufacturing operations in the form of GlobalFoundries Inc., a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The partnership and spin-off gave AMD an infusion of cash and allowed it to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, AMD's CEO Hector Ruiz stepped down in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009. President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009.
In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer. In November 2011, AMD announced plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue. The inclusion of AMD chips into the PlayStation 4 and Xbox One were later seen as saving AMD from bankruptcy.
AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an Arm64 server chip.
On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been chief operating officer since June.
On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded, and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products (including solutions for gaming consoles), engineering services, and royalties. As part of this restructuring, AMD announced that 7% of its global workforce would be laid off by the end of 2014.
After the GlobalFoundries spin-off and subsequent layoffs, AMD was left with significant vacant space at 1 AMD Place, its aging Sunnyvale headquarters office complex. In August 2016, AMD's 47 years in Sunnyvale came to a close when it signed a lease with the Irvine Company for a new 220,000 sq. ft. headquarters building in Santa Clara. AMD's new location at Santa Clara Square faces the headquarters of archrival Intel across the Bayshore Freeway and San Tomas Aquino Creek. Around the same time, AMD also agreed to sell 1 AMD Place to the Irvine Company. In April 2019, the Irvine Company secured approval from the Sunnyvale City Council of its plans to demolish 1 AMD Place and redevelop the entire 32-acre site into townhomes and apartments.
In October 2020, AMD announced that it was acquiring Xilinx, one of the market leaders in field programmable gate arrays and complex programmable logic devices (FPGAs and CPLDs) in an all-stock transaction. The acquisition was completed in February 2022, with an estimated acquisition price of $50 billion.
In October 2023, AMD acquired an open-source AI software provider, Nod.ai, to bolster its AI software ecosystem.
In January 2024, AMD announced it was discontinuing the production of all complex programmable logic devices (CPLDs) acquired through Xilinx.
In March 2024, a rally in semiconductor stocks pushed AMD's valuation above $300B for the first time.
In July 2024 AMD announced that it would acquire the Finnish-based artificial intelligence startup company Silo AI in a $665 million all-cash deal in an attempt to better compete with AI chip market leader Nvidia.
List of CEOs
Products
CPUs and APUs
IBM PC and the x86 architecture
In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but its policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement. In 1984, Intel internally decided to no longer cooperate with AMD in supplying product information to shore up its advantage in the marketplace, and delayed and eventually refused to convey the technical details of the Intel 80386. In 1987, AMD invoked arbitration over the issue, and Intel reacted by canceling the 1982 technological-exchange agreement altogether. After three years of testimony, AMD eventually won in arbitration in 1992, but Intel disputed this decision. Another long legal dispute followed, ending in 1994 when the Supreme Court of California sided with the arbitrator and AMD.
In 1990, Intel countersued AMD, renegotiating AMD's right to use derivatives of Intel's microcode for its cloned processors. In the face of uncertainty during the legal dispute, AMD was forced to develop clean room designed versions of Intel code for its x386 and x486 processors, the former long after Intel had released its own x386 in 1985. In March 1991, AMD released the Am386, its clone of the Intel 386 processor. By October of the same year it had sold one million units.
In 1993, AMD introduced the first of the Am486 family of processors, which proved popular with a large number of original equipment manufacturers, including Compaq, which signed an exclusive agreement using the Am486. The Am5x86, another Am486-based processor, was released in November 1995, and continued AMD's success as a fast, cost-effective processor.
Finally, in an agreement effective 1996, AMD received the rights to the microcode in Intel's x386 and x486 processor families, but not the rights to the microcode in the following generations of processors.
K5, K6, Athlon, Duron, and Sempron
AMD's first in-house x86 processor was the K5, launched in 1996. The "K" in its name was a reference to Kryptonite, the only substance known to harm comic book character Superman. This itself was a reference to Intel's hegemony over the market, i.e., an anthropomorphization of them as Superman. The number "5" was a reference to the fifth generation of x86 processors; rival Intel had previously introduced its line of fifth-generation x86 processors as Pentium because the U.S. Trademark and Patent Office had ruled that mere numbers could not be trademarked.
In 1996, AMD purchased NexGen, specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997. Although it was based on Socket 7, variants such as K6-III/450 were faster than Intel's Pentium II (sixth-generation processor).
The K7 was AMD's seventh-generation x86 processor, making its debut under the brand name Athlon on June 23, 1999. Unlike previous AMD processors, it could not be used on the same motherboards as Intel's, due to licensing issues surrounding Intel's Slot 1 connector, and instead used a Slot A connector, referenced to the Alpha processor bus. The Duron was a lower-cost and limited version of the Athlon (64 KB instead of 256 KB L2 cache) in a 462-pin socketed PGA (socket A) or soldered directly onto the motherboard. Sempron was released as a lower-cost Athlon XP, replacing Duron in the socket A PGA era. It has since been migrated upward to all new sockets, up to AM3.
On October 9, 2001, the Athlon XP was released. On February 10, 2003, the Athlon XP with 512 KB L2 Cache was released.
Athlon 64, Opteron, and Phenom
The K8 was a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (called x86-64, AMD64, or x64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor on April 22, 2003. Shortly thereafter, it was incorporated into a product for desktop PCs, branded Athlon 64.
On April 21, 2005, AMD released the first dual-core Opteron, an x86-based server CPU. A month later, it released the Athlon 64 X2, the first desktop-based dual-core processor family. In May 2007, AMD abandoned the string "64" in its dual-core desktop product branding, becoming Athlon X2, downplaying the significance of 64-bit computing in its processors. Further updates involved improvements to the microarchitecture, and a shift of the target market from mainstream desktop systems to value dual-core desktop systems. In 2008, AMD started to release dual-core Sempron processors exclusively in China, branded as the Sempron 2000 series, with lower HyperTransport speed and smaller L2 cache. AMD completed its dual-core product portfolio for each market segment.
In September 2007, AMD released the first server Opteron K10 processors, followed in November by the Phenom processor for desktop. K10 processors came in dual-core, triple-core, and quad-core versions, with all cores on a single die. AMD released a new platform codenamed "Spider", which used the new Phenom processor, and an R770 GPU and a 790 GX/FX chipset from the AMD 700 chipset series. However, AMD built the Spider at 65 nm, which was uncompetitive with Intel's smaller and more power-efficient 45 nm process.
In January 2009, AMD released a new processor line dubbed Phenom II, a refresh of the original Phenom built using the 45 nm process. AMD's new platform, codenamed "Dragon", used the new Phenom II processor, and an ATI R770 GPU from the R700 GPU family, and a 790 GX/FX chipset from the AMD 700 chipset series. The Phenom II came in dual-core, triple-core and quad-core variants, all using the same die, with cores disabled for the triple-core and dual-core versions. The Phenom II resolved issues that the original Phenom had, including a low clock speed, a small L3 cache, and a Cool'n'Quiet bug that decreased performance. The Phenom II cost less but was not performance-competitive with Intel's mid-to-high-range Core 2 Quads. The Phenom II also enhanced its predecessor's memory controller, allowing it to use DDR3 in a new native socket AM3, while maintaining backward compatibility with AM2+, the socket used for the Phenom, and allowing the use of the DDR2 memory that was used with the platform.
In April 2010, AMD released a new Phenom II Hexa-core (6-core) processor codenamed "Thuban". This was a totally new die based on the hexa-core "Istanbul" Opteron processor. It included AMD's "turbo core" technology, which allows the processor to automatically switch from 6 cores to 3 faster cores when more pure speed is needed.
The Magny Cours and Lisbon server parts were released in 2010. Magny Cours came in 8- to 12-core versions and Lisbon in 4- and 6-core versions. Magny Cours was focused on raw performance, while Lisbon was focused on high performance per watt. Magny Cours is an MCM (multi-chip module) with two hexa-core "Istanbul" Opteron dies. It used the new Socket G34 for dual- and quad-socket systems and was marketed as the Opteron 61xx series. Lisbon used Socket C32, certified for single- or dual-socket use only, and was marketed as the Opteron 41xx series. Both were built on a 45 nm SOI process.
Fusion becomes the AMD APU
Following AMD's 2006 acquisition of Canadian graphics company ATI Technologies, an initiative codenamed Fusion was announced to integrate a CPU and GPU together on some of AMD's microprocessors, including a built in PCI Express link to accommodate separate PCI Express peripherals, eliminating the northbridge chip from the motherboard. The initiative intended to move some of the processing originally done on the CPU (e.g. floating-point unit operations) to the GPU, which is better optimized for some calculations. The Fusion was later renamed the AMD APU (Accelerated Processing Unit).
Llano, AMD's first APU built for laptops, was the second APU released and was targeted at the mainstream market. It incorporated a CPU and GPU on the same die, along with northbridge functions, and used "Socket FM1" with DDR3 memory. The CPU part of the processor was based on the Phenom II "Deneb" processor. AMD suffered an unexpected decrease in revenue due to production problems with Llano. AMD APUs subsequently became common in laptops running the Windows 7 and Windows 8 operating systems. These include AMD's budget APUs, the E1 and E2, and the mainstream Vision A-series (the A standing for accelerated), which competed with Intel's Core i series. The A-series ranges from the lower-performance A4 to the A6, A8, and A10. These all incorporate next-generation Radeon graphics, with the A4 utilizing the base Radeon HD chip and the rest using a Radeon R4 graphics core, with the exception of the highest-model A10 (A10-7300), which uses an R6 graphics core.
New microarchitectures
High-power, high-performance Bulldozer cores
Bulldozer was AMD's microarchitecture codename for server and desktop AMD FX processors, first released on October 12, 2011. This family 15h microarchitecture is the successor to the family 10h (K10) microarchitecture design. Bulldozer was a clean-sheet design, not a development of earlier processors. The core was specifically aimed at 10–125 W TDP computing products. AMD claimed dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores. While hopes were high that Bulldozer would bring AMD to be performance-competitive with Intel once more, most benchmarks were disappointing. In some cases the new Bulldozer products were slower than the K10 models they were built to replace.
The Piledriver microarchitecture was the 2012 successor to Bulldozer, increasing clock speeds and performance relative to its predecessor. Piledriver would be released in AMD FX, APU, and Opteron product lines. Piledriver was subsequently followed by the Steamroller microarchitecture in 2013. Used exclusively in AMD's APUs, Steamroller focused on greater parallelism.
In 2015, the Excavator microarchitecture replaced Piledriver. Expected to be the last microarchitecture of the Bulldozer series, Excavator focused on improved power efficiency.
Low-power Cat cores
The Bobcat microarchitecture was revealed during a speech by AMD executive vice-president Henri Richard at Computex 2007 and was put into production during the first quarter of 2011. Given the difficulty of competing in the x86 market with a single core optimized for the 10–100 W range, AMD developed a simpler core with a target range of 1–10 watts. In addition, it was believed that the core could migrate into the hand-held space if its power consumption could be reduced to less than 1 W.
Jaguar is a microarchitecture codename for Bobcat's successor, released in 2013, that is used in various APUs from AMD aimed at the low-power/low-cost market. Jaguar and its derivates would go on to be used in the custom APUs of the PlayStation 4, Xbox One, PlayStation 4 Pro, Xbox One S, and Xbox One X. Jaguar would be later followed by the Puma microarchitecture in 2014.
ARM architecture-based designs
In 2012, AMD announced it was working on ARM products, both as a semi-custom product and server product. The initial server product was announced as the Opteron A1100 in 2014, an 8-core Cortex-A57-based ARMv8-A SoC, and was expected to be followed by an APU incorporating a Graphics Core Next GPU. However, the Opteron A1100 was not released until 2016, with the delay attributed to adding software support. The A1100 was also criticized for not having support from major vendors upon its release.
In 2014, AMD also announced the K12 custom core for release in 2016. While being ARMv8-A instruction set architecture compliant, the K12 was expected to be entirely custom-designed, targeting the server, embedded, and semi-custom markets. While ARM architecture development continued, products based on K12 were subsequently delayed with no release planned. Development of AMD's x86-based Zen microarchitecture was preferred.
Zen-based CPUs and APUs
Zen is an architecture for x86-64 based Ryzen series of CPUs and APUs, introduced in 2017 by AMD and built from the ground up by a team led by Jim Keller, beginning with his arrival in 2012, and taping out before his departure in September 2015.
One of AMD's primary goals with Zen was an IPC increase of at least 40%; however, in February 2017 AMD announced that it had actually achieved a 52% increase. Processors made on the Zen architecture are built on the 14 nm FinFET node and have a renewed focus on single-core performance and HSA compatibility. Previous processors from AMD were either built on the 32 nm process ("Bulldozer" and "Piledriver" CPUs) or the 28 nm process ("Steamroller" and "Excavator" APUs). Because of this, Zen is much more energy efficient.
The Zen architecture is the first to encompass CPUs and APUs from AMD built for a single socket (Socket AM4). Also new for this architecture is the implementation of simultaneous multithreading (SMT) technology, something Intel has had for years on some of their processors with their proprietary hyper-threading implementation of SMT. This is a departure from the "Clustered MultiThreading" design introduced with the Bulldozer architecture. Zen also has support for DDR4 memory.
AMD released the Zen-based high-end Ryzen 7 "Summit Ridge" series CPUs on March 2, 2017, mid-range Ryzen 5 series CPUs on April 11, 2017, and entry-level Ryzen 3 series CPUs on July 27, 2017. AMD later released the Epyc line of Zen-derived server processors for 1P and 2P systems. In October 2017, AMD released Zen-based APUs as Ryzen Mobile, incorporating Vega graphics cores. In January 2018, AMD announced its next lineup, Ryzen 2. AMD launched CPUs with the 12 nm Zen+ microarchitecture in April 2018 and followed up with the 7 nm Zen 2 microarchitecture in June 2019; the Epyc line received new processors using the Zen 2 microarchitecture in August 2019, with Zen 3 slated for release in Q3 2020.
As of 2019, AMD's Ryzen processors were reported to outsell Intel's consumer desktop processors. At CES 2020, AMD announced its Ryzen Mobile 4000 series as the first 7 nm x86 mobile processors, the first 7 nm 8-core (and 16-thread) high-performance mobile processors, and the first 8-core (and 16-thread) processors for ultrathin laptops. This generation is still based on the Zen 2 architecture. In October 2020, AMD announced new processors based on the Zen 3 architecture. On PassMark's single-thread performance test, the Ryzen 5 5600X bested all other CPUs besides the Ryzen 9 5950X.
In April 2020, AMD launched three new SKUs targeting commercial HPC workloads and hyperconverged infrastructure applications. The launch was based on Epyc's 7 nm second-generation Rome platform and supported by Dell EMC, Hewlett Packard Enterprise, Lenovo, Supermicro, and Nutanix. IBM Cloud was its first public cloud partner. In August 2022, AMD announced its initial lineup of CPUs based on the new Zen 4 architecture.
The Steam Deck, PlayStation 5, Xbox Series X and Series S all use chips based on the Zen 2 microarchitecture, with proprietary tweaks and different configurations in each system's implementation than AMD sells in its own commercially available APUs.
Graphics products and GPUs
ATI prior to AMD acquisition
Radeon within AMD
In 2008, the ATI division of AMD released the TeraScale microarchitecture implementing a unified shader model. This design replaced the fixed-function hardware of earlier graphics cards with multipurpose, programmable shaders. Initially released as part of the GPU for the Xbox 360, this technology would go on to be used in Radeon-branded HD 2000 parts. Three generations of TeraScale would be designed and used in parts from 2008 to 2014.
Combined GPU and CPU divisions
In a 2009 restructuring, AMD merged the CPU and GPU divisions to support the company's APUs, which fused both graphics and general purpose processing. In 2011, AMD released the successor to TeraScale, Graphics Core Next (GCN). This new microarchitecture emphasized GPGPU compute capability in addition to graphics processing, with a particular aim of supporting heterogeneous computing on AMD's APUs. GCN's reduced instruction set ISA allowed for significantly increased compute capability over TeraScale's very long instruction word ISA. Since GCN's introduction with the HD 7970, five generations of the GCN architecture have been produced, from 2011 through at least 2017.
Radeon Technologies Group
In September 2015, AMD separated the graphics technology division of the company into an independent internal unit called the Radeon Technologies Group (RTG), headed by Raja Koduri. This gave the graphics division of AMD autonomy in product design and marketing. The RTG then went on to create the Polaris and Vega microarchitectures, released in 2016 and 2017, respectively. In particular, the Vega, or fifth-generation GCN, microarchitecture includes a number of major revisions to improve performance and compute capabilities.
In November 2017, Raja Koduri left RTG and CEO and President Lisa Su took his position. In January 2018, it was reported that two industry veterans joined RTG, namely Mike Rayfield as senior vice president and general manager of RTG, and David Wang as senior vice president of engineering for RTG. In January 2020, AMD announced that its second-generation RDNA graphics architecture was in development, with the aim of competing with the Nvidia RTX graphics products for performance leadership. In October 2020, AMD announced their new RX 6000 series GPUs, their first high-end product based on RDNA 2 and capable of handling ray tracing natively, aiming to challenge Nvidia's RTX 3000 GPUs.
Semi-custom and game console products
In 2012, AMD's then CEO Rory Read began a program to offer semi-custom designs. Rather than AMD simply designing and offering a single product, potential customers could work with AMD to design a custom chip based on AMD's intellectual property. Customers pay a non-recurring engineering fee for design and development, and a purchase price for the resulting semi-custom products. In particular, AMD noted their unique position of offering both x86 and graphics intellectual property. These semi-custom designs would have design wins as the APUs in the PlayStation 4 and Xbox One and the subsequent PlayStation 4 Pro, Xbox One S, Xbox One X, Xbox Series X/S, and PlayStation 5. Financially, these semi-custom products would represent a majority of the company's revenue in 2016. In November 2017, AMD and Intel announced that Intel would market a product combining in a single package an Intel Core CPU, a semi-custom AMD Radeon GPU, and HBM2 memory.
Other hardware
AMD motherboard chipsets
Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for their processors spanning the K6 and K7 processor generations. The chipsets include the AMD-640, AMD-751, and the AMD-761 chipsets. The situation changed in 2003 with the release of Athlon 64 processors, and AMD chose not to further design its own chipsets for its desktop processors while opening the desktop platform to allow other firms to design chipsets. This was the "Open Platform Management Architecture" with ATI, VIA and SiS developing their own chipset for Athlon 64 processors and later Athlon 64 X2 and Athlon 64 FX processors, including the Quad FX platform chipset from Nvidia.
The initiative went further with the release of Opteron server processors, as AMD stopped designing server chipsets in 2004 after releasing the AMD-8111 chipset, and again opened the server platform for other firms to develop chipsets for Opteron processors. Nvidia and Broadcom have since been the sole firms designing server chipsets for Opteron processors.
As the company completed the acquisition of ATI Technologies in 2006, it gained the ATI design team for chipsets, which had previously designed the Radeon Xpress 200 and the Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed the AMD 580X CrossFire chipset). In February 2007, AMD announced the first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (previously under the development codename RS690), targeted at mainstream IGP computing. It was the industry's first chipset to implement an HDMI 1.2 port on motherboards, shipping more than a million units. While ATI had aimed at releasing an Intel IGP chipset, the plan was scrapped and the inventories of the Radeon Xpress 1250 (codenamed RS600, sold under the ATI brand) were sold to two OEMs, Abit and ASRock. Although AMD stated the firm would still produce Intel chipsets, Intel had not granted ATI a license for its front-side bus (FSB).
On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering from the enthusiast multi-graphics segment to the value IGP segment, to replace the AMD 480/570/580 chipsets and AMD 690 series chipsets, marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007, as part of the codenamed Spider desktop platform, and IGP chipsets were launched at a later time in spring 2008 as part of the codenamed Cartwheel platform.
AMD returned to the server chipset market with the AMD 800S series server chipsets. The family includes support for up to six SATA 6.0 Gbit/s ports, the C6 power state (also featured in Fusion processors), and AHCI 1.2 with SATA FIS-based switching support. The chipset family supports Phenom processors and the Quad FX enthusiast platform (890FX), as well as IGP configurations (890GX).
With the advent of AMD's APUs in 2011, traditional northbridge features such as the connection to graphics and the PCI Express controller were incorporated into the APU die. Accordingly, APUs were connected to a single chip chipset, renamed the Fusion Controller Hub (FCH), which primarily provided southbridge functionality.
AMD released new chipsets in 2017 to support the release of their new Ryzen products. As the Zen microarchitecture already includes much of the northbridge connectivity, the AM4-based chipsets primarily varied in the number of additional PCI Express lanes, USB connections, and SATA connections available. These AM4 chipsets were designed in conjunction with ASMedia.
Embedded products
Embedded CPUs
In the early 1990s, AMD began marketing a series of embedded system-on-a-chips (SoCs) called AMD Élan, starting with the SC300 and SC310. Both combine a 32-bit, low-voltage Am386SX CPU at 25 MHz or 33 MHz with a memory controller, PC/AT peripheral controllers, a real-time clock, PLL clock generators and an ISA bus interface. The SC300 additionally integrates two PC card slots and a CGA-compatible LCD controller. They were followed in 1996 by the SC4xx types, which supported the VESA Local Bus and used the Am486 at clock speeds up to 100 MHz. An SC450 at 33 MHz, for example, was used in the Nokia 9000 Communicator. In 1999 the SC520, the last member of the series, was announced; it used an Am586 at 100 MHz or 133 MHz and supported SDRAM and PCI.
In February 2002, AMD acquired Alchemy Semiconductor for its Alchemy line of MIPS processors for the hand-held and portable media player markets. On June 13, 2006, AMD officially announced that the line was to be transferred to Raza Microelectronics, Inc., a designer of MIPS processors for embedded applications.
In August 2003, AMD also purchased the Geode business, which was originally the Cyrix MediaGX, from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture, offered in fanless versions as well as a fan-cooled version with a TDP of 25 W. This technology is used in a variety of embedded systems (casino slot machines and customer kiosks for instance), several UMPC designs in Asian markets, and the OLPC XO-1 computer, an inexpensive laptop computer intended to be distributed to children in developing countries around the world. The Geode LX processor was announced in 2005 and was stated to remain available through 2015.
AMD has also introduced 64-bit processors into its embedded product line, starting with the AMD Opteron processor. Leveraging the high throughput enabled by HyperTransport and the Direct Connect Architecture, these server-class processors have been targeted at high-end telecom and storage applications. In 2007, AMD added the AMD Athlon, AMD Turion, and Mobile AMD Sempron processors to its embedded product line. Leveraging the same 64-bit instruction set and Direct Connect Architecture as the AMD Opteron, but at lower power levels, these processors were well suited to a variety of traditional embedded applications. Throughout 2007 and into 2008, AMD continued to add both single-core Mobile AMD Sempron and AMD Athlon processors and dual-core AMD Athlon X2 and AMD Turion processors to its embedded product line. It now offers embedded 64-bit solutions ranging from 8 W TDP Mobile AMD Sempron and AMD Athlon processors for fanless designs up to multi-processor systems using multi-core AMD Opteron processors, all supporting longer-than-standard availability.
The ATI acquisition in 2006 included the Imageon and Xilleon product lines. In late 2008, the entire handheld division was sold off to Qualcomm, who have since produced the Adreno series. Also in 2008, the Xilleon division was sold to Broadcom.
In April 2007, AMD announced the release of the M690T integrated graphics chipset for embedded designs. This enabled AMD to offer complete processor and chipset solutions targeted at embedded applications requiring high-performance 3D and video such as emerging digital signage, kiosk, and Point of Sale applications. The M690T was followed by the M690E specifically for embedded applications which removed the TV output, which required Macrovision licensing for OEMs, and enabled native support for dual TMDS outputs, enabling dual independent DVI interfaces.
In January 2011, AMD announced the AMD Embedded G-Series Accelerated Processing Unit. This was the first APU for embedded applications. These were followed by updates in 2013 and 2016.
In May 2012, AMD announced the AMD Embedded R-Series Accelerated Processing Unit. This family of products incorporates the Bulldozer CPU architecture and discrete-class Radeon HD 7000G Series graphics. This was followed by a system-on-a-chip (SoC) version in 2015 which offered a faster CPU and faster graphics, with support for DDR4 SDRAM memory.
Embedded graphics
AMD builds graphic processors for use in embedded systems. They can be found in anything from casinos to healthcare, with a large portion of products being used in industrial machines. These products include a complete graphics processing device in a compact multi-chip module including RAM and the GPU. ATI began offering embedded GPUs with the E2400 in 2008. Since that time AMD has released regular updates to their embedded GPU lineup in 2009, 2011, 2015, and 2016; reflecting improvements in their GPU technology.
Current product lines
CPU and APU products
AMD's portfolio of CPUs and APUs
Athlon – brand of entry level CPUs (Excavator) and APUs (Ryzen)
A-series – Excavator-class consumer desktop and laptop APUs
G-series – Excavator- and Jaguar-class low-power embedded APUs
Ryzen – brand of consumer CPUs and APUs
Ryzen Threadripper – brand of prosumer/professional CPUs
R-series – Excavator class high-performance embedded APUs
Epyc – brand of server CPUs
Opteron – brand of microserver APUs
Graphics products
AMD's portfolio of dedicated graphics processors
Radeon – brand for consumer line of graphics cards; the brand name originated with ATI.
Mobility Radeon offers power-optimized versions of Radeon graphics chips for use in laptops.
Radeon Pro – Workstation graphics card brand. Successor to the FirePro brand.
Radeon Instinct – brand of server and workstation targeted machine learning and GPGPU products
Radeon-branded products
RAM
In 2011, AMD began selling Radeon branded DDR3 SDRAM to support the higher bandwidth needs of AMD's APUs. While the RAM is sold by AMD, it was manufactured by Patriot Memory and VisionTek. This was later followed by higher speeds of gaming oriented DDR3 memory in 2013. Radeon branded DDR4 SDRAM memory was released in 2015, despite no AMD CPUs or APUs supporting DDR4 at the time. AMD noted in 2017 that these products are "mostly distributed in Eastern Europe" and that it continues to be active in the business.
Solid-state drives
AMD announced in 2014 it would sell Radeon branded solid-state drives manufactured by OCZ with capacities up to 480 GB and using the SATA interface.
Technologies
CPU hardware
Technologies found in AMD CPU/APU and other products include:
HyperTransport – a high-bandwidth, low-latency system bus used in AMD's CPU and APU products
Infinity Fabric – a derivative of HyperTransport used as the communication bus in AMD's Zen microarchitecture
Graphics hardware
Technologies found in AMD GPU products include:
AMD Eyefinity – facilitates multi-monitor setup of up to 6 monitors per graphics card
AMD FreeSync – display synchronization based on the VESA Adaptive Sync standard
AMD TrueAudio – acceleration of audio calculations
AMD XConnect – allows the use of External GPU enclosures through Thunderbolt 3
AMD CrossFire – multi-GPU technology allowing the simultaneous use of multiple GPUs
Unified Video Decoder (UVD) – acceleration of video decompression (decoding)
Video Coding Engine (VCE) – acceleration of video compression (encoding)
Software
AMD has made considerable efforts towards opening its software tools above the firmware level in the past decade.
For the following mentions, software not expressly stated as being free can be assumed to be proprietary.
Distribution
AMD Radeon Software is the default channel for official software distribution from AMD. It includes both free and proprietary software components, and supports both Microsoft Windows and Linux.
Software by type
CPU
AOCC is AMD's optimizing proprietary C/C++ compiler based on LLVM and available for Linux.
AMDuProf is AMD's CPU performance and power profiling tool suite, available for Linux and Windows.
AMD has also taken an active part in developing coreboot, an open-source project aimed at replacing the proprietary BIOS firmware. This cooperation ceased in 2013, but AMD has indicated recently that it is considering releasing source code so that Ryzen can be compatible with coreboot in the future.
GPU
AMD's most notable public software is on the GPU side.
AMD has opened both its graphic and compute stacks:
GPUOpen is AMD's graphics stack, which includes for example FidelityFX Super Resolution.
ROCm (Radeon Open Compute platform) is AMD's compute stack for machine learning and high-performance computing, based on the LLVM compiler technologies. Under the ROCm project, AMDgpu is AMD's open-source device driver supporting the GCN and following architectures, available for Linux. This latter driver component is used both by the graphics and compute stacks.
Other
AMD conducts open research on heterogeneous computing.
Other AMD software includes the AMD Core Math Library, and open-source software including the AMD Performance Library.
AMD contributes to open-source projects, including working with Sun Microsystems to enhance OpenSolaris and Sun xVM on the AMD platform. AMD also maintains its own Open64 compiler distribution and contributes its changes back to the community.
In 2008, AMD released the low-level programming specifications for its GPUs, and works with the X.Org Foundation to develop drivers for AMD graphics cards.
Extensions for software parallelism (xSP), aimed at speeding up programs to enable multi-threaded and multi-core processing, were announced at Technology Analyst Day 2007. One of the initiatives discussed since August 2007 is Light Weight Profiling (LWP), which provides an internal hardware monitor with runtimes to observe information about executing processes and to help redesign software for optimization on multi-core and multi-threaded systems. Another is SSE5, an extension of the Streaming SIMD Extensions (SSE) instruction set.
Another tool, codenamed SIMFIRE, is an interoperability testing tool for the Desktop and mobile Architecture for System Hardware (DASH) open architecture.
Production and fabrication
Previously, AMD produced its chips at company-owned semiconductor foundries. AMD pursued a strategy of collaboration with other semiconductor manufacturers IBM and Motorola to co-develop production technologies. AMD's founder Jerry Sanders termed this the "Virtual Gorilla" strategy to compete with Intel's significantly greater investments in fabrication.
In 2008, AMD spun off its chip foundries into an independent company named GlobalFoundries. This breakup of the company was attributed to the increasing costs of each process node. The Emirate of Abu Dhabi purchased the newly created company through its subsidiary Advanced Technology Investment Company (ATIC), purchasing the final stake from AMD in 2009.
With the spin-off of its foundries, AMD became a fabless semiconductor manufacturer, designing products to be produced at for-hire foundries. Part of the GlobalFoundries spin-off included an agreement with AMD to produce some number of products at GlobalFoundries. Both prior to and after the spin-off, AMD has pursued production with other foundries, including TSMC and Samsung. It has been argued that this would reduce risk for AMD by decreasing dependence on any one foundry, which has caused issues in the past.
In 2018, AMD started shifting the production of their CPUs and GPUs to TSMC, following GlobalFoundries' announcement that they were halting development of their 7 nm process. AMD revised their wafer purchase requirement with GlobalFoundries in 2019, allowing AMD to freely choose foundries for 7 nm nodes and below, while maintaining purchase agreements for 12 nm and above through 2021.
Corporate affairs
Business trends
The key trends for AMD are (as of the financial year ending in late December):
Partnerships
AMD uses strategic industry partnerships to further its business interests and to rival Intel's dominance and resources:
A partnership between AMD and Alpha Processor Inc. developed HyperTransport, a point-to-point interconnect standard which was turned over to an industry standards body for finalization. It is now used in modern motherboards that are compatible with AMD processors.
AMD also formed a strategic partnership with IBM, under which AMD gained silicon on insulator (SOI) manufacturing technology, and detailed advice on 90 nm implementation. AMD announced that the partnership would extend to 2011 for 32 nm and 22 nm fabrication-related technologies.
To facilitate processor distribution and sales, AMD is loosely partnered with end-user companies, such as HP, Dell, Asus, Acer, and Microsoft.
In 1993, AMD established a 50–50 partnership with Fujitsu called FASL, which was merged into a new company called FASL LLC in 2003. The joint venture went public under the name Spansion and ticker symbol SPSN in December 2005, with AMD shares dropping 37%. AMD no longer directly participates in the Flash memory devices market, as it entered into a non-competition agreement on December 21, 2005, with Fujitsu and Spansion, pursuant to which it agreed not to directly or indirectly engage in a business that manufactures or supplies standalone semiconductor devices (including single-chip, multiple-chip or system devices) containing only Flash memory.
On May 18, 2006, Dell announced that it would roll out new servers based on AMD's Opteron chips by year's end, thus ending an exclusive relationship with Intel. In September 2006, Dell began offering AMD Athlon X2 chips in their desktop lineup.
In June 2011, HP announced new business and consumer notebooks equipped with the latest versions of AMD accelerated processing units (APUs). AMD would also power HP business notebooks alongside Intel-based models.
In the spring of 2013, AMD announced that it would be powering all three major next-generation consoles. The Xbox One and Sony PlayStation 4 are both powered by a custom-built AMD APU, and the Nintendo Wii U is powered by an AMD GPU. According to AMD, having their processors in all three of these consoles will greatly assist developers with cross-platform development to competing consoles and PCs and increased support for their products across the board.
AMD has entered into an agreement with Hindustan Semiconductor Manufacturing Corporation (HSMC) for the production of AMD products in India.
AMD is a founding member of the HSA Foundation which aims to ease the use of a Heterogeneous System Architecture. A Heterogeneous System Architecture is intended to use both central processing units and graphics processors to complete computational tasks.
AMD announced in 2016 that it was creating a joint venture to produce x86 server chips for the Chinese market.
On May 7, 2019, it was reported that the U.S. Department of Energy, Oak Ridge National Laboratory, and Cray Inc., are working in collaboration with AMD to develop the Frontier exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 1.5 exaflops (peak double-precision) in computing performance. It is expected to debut sometime in 2021.
On March 5, 2020, it was announced that the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE are working in collaboration with AMD to develop the El Capitan exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 2 exaflops (peak double-precision) in computing performance. It is expected to debut in 2023.
In the summer of 2020, it was reported that AMD would be powering the next-generation console offerings from Microsoft and Sony.
On November 8, 2021, AMD announced a partnership with Meta to make the chips used in the Metaverse.
In January 2022, AMD partnered with Samsung to develop a mobile processor to be used in future products. The processor, the Exynos 2200, incorporates a GPU based on the AMD RDNA 2 architecture.
Litigation with Intel
AMD has a long history of litigation with former (and current) partner and x86 creator Intel.
In 1986, Intel broke an agreement it had with AMD to allow them to produce Intel's microchips for IBM; AMD filed for arbitration in 1987 and the arbitrator decided in AMD's favor in 1992. Intel disputed this, and the case ended up in the Supreme Court of California. In 1994, that court upheld the arbitrator's decision and awarded damages for breach of contract.
In 1990, Intel brought a copyright infringement action alleging illegal use of its 287 microcode. The case ended in 1994 with a jury finding for AMD and its right to use Intel's microcode in its microprocessors through the 486 generation.
In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of the term MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to market the AMD K6 MMX processor.
In 2005, following an investigation, the Japan Fair Trade Commission found Intel guilty of a number of violations. On June 27, 2005, AMD won an antitrust suit against Intel in Japan, and on the same day, AMD filed a broad antitrust complaint against Intel in the U.S. Federal District Court in Delaware. The complaint alleges systematic use of secret rebates, special discounts, threats, and other means used by Intel to lock AMD processors out of the global market. Since the start of this action, the court has issued subpoenas to major computer manufacturers including Acer, Dell, Lenovo, HP and Toshiba.
In November 2009, Intel agreed to pay AMD $1.25 billion and renew a five-year patent cross-licensing agreement as part of a deal to settle all outstanding legal disputes between them.
Guinness World Record achievement
On August 31, 2011, in Austin, Texas, AMD achieved a Guinness World Record for the "Highest frequency of a computer processor": 8.429 GHz. The company ran an 8-core FX-8150 processor with only one active module (two cores), cooled with liquid helium. The previous record was 8.308 GHz, set with an Intel Celeron 352 (one core).
On November 1, 2011, geek.com reported that Andre Yang, an overclocker from Taiwan, used an FX-8150 to set another record: 8.461 GHz.
On November 19, 2012, Andre Yang used an FX-8350 to set another record: 8.794 GHz.
Acquisitions, mergers, and investments
Corporate responsibility
In its 2022 report, AMD stated that it aimed to embed environmental sustainability across its business, promote safe and responsible workplaces in its global supply chain and advance stronger communities.
In 2022, AMD achieved a 19 percent reduction in its Scope 1 and 2 GHG emissions compared to 2020, based on AMD calculations that were third-party verified (limited-level assurance).
Other initiatives
The Green Grid was founded by AMD together with other companies, including IBM, Sun and Microsoft, to seek lower power consumption for grids.
Sponsorships
AMD's sponsorship of Formula 1 racing began in 2002; since 2020 the company has sponsored the Mercedes-AMG Petronas team. AMD was also a sponsor of the BMW Sauber and Scuderia Ferrari Formula 1 teams together with Intel, Vodafone, AT&T, Pernod Ricard and Diageo. On 18 April 2018, AMD began a multi-year sponsorship with Scuderia Ferrari. In February 2020, just prior to the start of the 2020 race season, the Mercedes Formula 1 team announced it was adding AMD to its sponsorship portfolio.
AMD began a sponsorship deal with Victory Five (V5) for the League of Legends Pro League (LPL) in 2022. AMD was a sponsor of the Chinese Dota Pro Circuit together with Perfect World.
In February 2024, AMD was a Diamond sponsor for the World Artificial Intelligence Cannes Festival (WAICF).
AMD was a Platinum sponsor for the HPE Discover 2024, an event hosted by Hewlett Packard Enterprise to showcase technology for government and business customers. The event was held from 17 to 20 June 2024 in Las Vegas.
See also
3DNow!
Bill Gaede
Cool'n'Quiet
PowerNow!
List of AMD accelerated processing units
List of AMD chipsets
List of AMD graphics processing units
List of AMD processors
List of ATI chipsets
References
Sources
Rodengen, Jeffrey L. The Spirit of AMD: Advanced Micro Devices. Write Stuff, 1998.
Ruiz, Hector. Slingshot: AMD's Fight to Free an Industry from the Ruthless Grip of Intel. Greenleaf Book Group, 2013.
External links
1969 establishments in California
1970s initial public offerings
American companies established in 1969
Fabless semiconductor companies
Companies based in Santa Clara, California
Companies formerly listed on the New York Stock Exchange
Companies listed on the Nasdaq
Companies in the Nasdaq-100
Computer companies of the United States
Computer companies established in 1969
Computer hardware companies
Electronics companies established in 1969
Graphics hardware companies
HSA Foundation founding members
Manufacturing companies based in the San Francisco Bay Area
Motherboard companies
Semiconductor companies of the United States
Superfund sites in California
Technology companies based in the San Francisco Bay Area
Technology companies established in 1969 | AMD | [
"Technology"
] | 13,052 | [
"Computer hardware companies",
"Computers"
] |
2,408 | https://en.wikipedia.org/wiki/Analytical%20chemistry | Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration.
Analytical chemistry consists of classical, wet chemical methods and modern analytical techniques. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte.
Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering.
History
Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups.
The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860.
Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century.
The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples.
Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology.
Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical.
Classical methods
Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs.
Qualitative analysis
Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity.
Chemical tests
There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood.
Flame test
Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient.
Quantitative analysis
Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis).
Gravimetric analysis
The gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water.
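As a worked illustration of this kind of calculation (a minimal sketch; the sample masses below are hypothetical, not measured data):

```python
# Illustrative gravimetric calculation: water content of a hydrate.
# The masses below are hypothetical example values, not measured data.

mass_before = 2.500   # g, hydrated sample before heating
mass_after = 1.594    # g, anhydrous residue after heating

water_lost = mass_before - mass_after            # g of water driven off
percent_water = 100 * water_lost / mass_before   # mass percent of water

print(f"Water lost: {water_lost:.3f} g ({percent_water:.1f}% of the sample)")
```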
Volumetric analysis
Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titration is a family of techniques used to determine the concentration of an analyte. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the number of moles used, which can then be used to determine a concentration or composition of the titrant. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves by systematically testing the pH after every drop in order to understand different properties of the titrant.
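For illustration, a minimal sketch of the arithmetic behind a simple acid-base titration, assuming a 1:1 stoichiometry and using made-up volumes and molarities:

```python
# Illustrative acid-base titration calculation (hypothetical values).
# Assumes a 1:1 stoichiometry between titrant and analyte at the endpoint.

titrant_molarity = 0.1000      # mol/L NaOH
titrant_volume_mL = 25.40      # mL delivered to reach the endpoint
analyte_volume_mL = 20.00      # mL of the acid solution being analyzed

moles_titrant = titrant_molarity * titrant_volume_mL / 1000.0
moles_analyte = moles_titrant                      # 1:1 reaction assumed
analyte_molarity = moles_analyte / (analyte_volume_mL / 1000.0)

print(f"Analyte concentration: {analyte_molarity:.4f} mol/L")
```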
Instrumental methods
Spectroscopy
Spectroscopy measures the interaction of the molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on.
Mass spectrometry
Mass spectrometry measures mass-to-charge ratio of molecules using electric and magnetic fields. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, where they are separated and analyzed according to their mass-to-charge ratios. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Also, mass spectrometry is categorized by approaches of mass analyzers: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on.
Electrochemical analysis
Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential).
Potentiometry measures the cell's potential, coulometry measures the charge transferred by the cell over time, and voltammetry measures the cell's current as its potential is varied.
Thermal analysis
Calorimetry and thermogravimetric analysis measure the interaction of a material and heat.
Separation
Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field.
Chromatographic assays
Chromatography can be used to determine the presence of substances in a sample as different components in a mixture have different tendencies to adsorb onto the stationary phase or dissolve in the mobile phase. Thus, different components of the mixture move at different speeds. Different components of a mixture can therefore be identified by their respective Rƒ values, the ratio between the migration distance of the substance and the migration distance of the solvent front during chromatography.
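A small worked example of the Rƒ calculation (the distances are hypothetical example values):

```python
# Illustrative Rf calculation for a thin-layer chromatography spot
# (distances are hypothetical example values, measured in cm).

spot_distance = 3.1            # cm travelled by the substance
solvent_front_distance = 6.2   # cm travelled by the solvent front

rf = spot_distance / solvent_front_distance
print(f"Rf = {rf:.2f}")        # dimensionless; compared against reference values
```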
In combination with the instrumental methods, chromatography can be used in quantitative determination of the substances. Chromatography separates the analyte from the rest of the sample so that it may be measured without interference from other compounds. There are different types of chromatography that differ in the media they use to separate the analyte and the sample. In thin-layer chromatography, the analyte mixture moves up and separates along the coated sheet under the volatile mobile phase. In gas chromatography, gas separates the volatile analytes. A common method for chromatography using a liquid mobile phase is high-performance liquid chromatography.
Hybrid techniques
Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. For example, gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry.
Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of hyphen, especially if the name of one of the methods contains a hyphen itself.
Microscopy
The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Hybridization with other traditional analytical tools is also revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. This field has been progressing rapidly because of the rapid development of the computer and camera industries.
Lab-on-a-chip
Lab-on-a-chip devices integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and are capable of handling extremely small fluid volumes, down to less than picoliters.
Errors
Error can be defined as numerical difference between observed value and true value. The experimental error can be divided into two types, systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment while random error results from uncontrolled or uncontrollable variables in the experiment.
The true value and the observed value in chemical analysis can be related to each other by the equation

$\varepsilon = x_O - x_T$

where

$\varepsilon$ is the absolute error,

$x_T$ is the true value, and

$x_O$ is the observed value.
The error of a measurement is an inverse measure of the accuracy of the measurement, i.e. the smaller the error, the greater the accuracy of the measurement.
Errors can also be expressed relatively. The relative error ($\varepsilon_r$) is given by

$\varepsilon_r = \dfrac{x_O - x_T}{x_T}$

The percent error can also be calculated:

$\%\,\varepsilon = \varepsilon_r \times 100\%$

If we want to use these values in a function, we may also want to calculate the error of the function. Let $f(x_1, \ldots, x_n)$ be a function with $n$ variables. The propagation of uncertainty must be calculated in order to know the error in $f$; for uncorrelated variables,

$\sigma_f^2 = \sum_{i=1}^{n} \left(\dfrac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2$
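As a worked illustration of the propagation formula above, the following sketch (hypothetical values, assuming uncorrelated variables) propagates uncertainties through a simple product:

```python
# Illustrative propagation of uncertainty for f(x, y) = x * y,
# assuming uncorrelated variables (hypothetical example values).
import math

x, sigma_x = 2.00, 0.02
y, sigma_y = 3.50, 0.05

f = x * y
# Partial derivatives: df/dx = y, df/dy = x
sigma_f = math.sqrt((y * sigma_x) ** 2 + (x * sigma_y) ** 2)

print(f"f = {f:.3f} +/- {sigma_f:.3f}")
```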
Standards
Standard curve
A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample.
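As a rough illustration (the standards, signals, and units below are made-up values, not from the source), a minimal sketch of fitting a linear calibration curve and inverting it for an unknown sample:

```python
# Minimal calibration-curve sketch (hypothetical standards and signals).
# A straight line is fitted to known standards and inverted for the unknown.
import numpy as np

conc_standards = np.array([0.0, 1.0, 2.0, 5.0, 10.0])        # e.g. mg/L
signal_standards = np.array([0.02, 0.21, 0.39, 0.98, 1.95])  # instrument response

slope, intercept = np.polyfit(conc_standards, signal_standards, 1)

signal_unknown = 0.61
conc_unknown = (signal_unknown - intercept) / slope
print(f"Estimated concentration: {conc_unknown:.2f} mg/L")
```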
Internal standards
Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched analyte which gives rise to the method of isotope dilution.
Standard addition
The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem.
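A minimal sketch of the standard-addition calculation, assuming a linear response and using made-up spike concentrations and signals:

```python
# Minimal standard-addition sketch (hypothetical values).
# The fitted line is extrapolated to zero signal; the magnitude of the
# x-intercept gives the analyte concentration in the (diluted) sample.
import numpy as np

added_conc = np.array([0.0, 1.0, 2.0, 3.0])   # concentration added, mg/L
signal = np.array([0.30, 0.50, 0.71, 0.89])   # measured response

slope, intercept = np.polyfit(added_conc, signal, 1)
sample_conc = intercept / slope               # magnitude of the x-intercept
print(f"Sample concentration: {sample_conc:.2f} mg/L")
```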
Signals and noise
One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR).
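One common way of estimating S/N from repeated measurements is to divide the mean signal by its standard deviation; a minimal sketch with made-up readings:

```python
# Illustrative signal-to-noise estimate from repeated measurements
# (hypothetical readings of a steady signal).
import statistics

readings = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7, 10.0]
snr = statistics.mean(readings) / statistics.stdev(readings)
print(f"S/N is approximately {snr:.1f}")
```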
Noise can arise from environmental factors as well as from fundamental physical processes.
Thermal noise
Thermal noise results from the thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that the power spectral density is constant throughout the frequency spectrum.
The root mean square value of the thermal noise voltage in a resistor is given by

$v_{\mathrm{rms}} = \sqrt{4 k_B T R \,\Delta f}$

where $k_B$ is the Boltzmann constant, $T$ is the temperature, $R$ is the resistance, and $\Delta f$ is the bandwidth over which the noise is measured.
Shot noise
Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal.
Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by

$i_{\mathrm{rms}} = \sqrt{2 e I \,\Delta f}$

where $e$ is the elementary charge, $I$ is the average current, and $\Delta f$ is the measurement bandwidth. Shot noise is white noise.
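As a rough numerical illustration of the thermal-noise and shot-noise expressions above (the resistance, temperature, current, and bandwidth are arbitrary example values):

```python
# Illustrative evaluation of the thermal- and shot-noise formulas above
# (hypothetical resistance, temperature, bandwidth and current).
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
e = 1.602176634e-19  # C, elementary charge

T = 298.0            # K
R = 1.0e6            # ohm (1 Mohm resistor)
I = 1.0e-9           # A (1 nA average current)
bandwidth = 1.0e3    # Hz

v_thermal = math.sqrt(4 * k_B * T * R * bandwidth)  # V rms
i_shot = math.sqrt(2 * e * I * bandwidth)           # A rms

print(f"Thermal noise: {v_thermal * 1e6:.2f} uV rms")
print(f"Shot noise:    {i_shot * 1e12:.2f} pA rms")
```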
Flicker noise
Flicker noise is electronic noise with a 1/ƒ frequency spectrum; as f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel and generation and recombination noise in a transistor due to base current. This noise can be avoided by modulating the signal at a higher frequency, for example, through the use of a lock-in amplifier.
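A minimal sketch of the lock-in idea, using made-up signal parameters: the quantity of interest is modulated at a reference frequency, multiplied by the reference, and low-pass filtered (here simply averaged), which recovers its amplitude away from low-frequency noise:

```python
# Minimal lock-in detection sketch (hypothetical values): the signal is
# modulated at a reference frequency, multiplied by the reference, and
# averaged, recovering its amplitude despite slow drift and random noise.
import numpy as np

fs, f_ref, amplitude = 10_000.0, 1_000.0, 0.5
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

signal = amplitude * np.sin(2 * np.pi * f_ref * t)
drift = 0.3 * np.sin(2 * np.pi * 2.0 * t)              # slow, low-frequency disturbance
measured = signal + drift + 0.2 * rng.standard_normal(t.size)

demodulated = measured * np.sin(2 * np.pi * f_ref * t)
recovered_amplitude = 2 * demodulated.mean()           # close to 0.5
print(f"Recovered amplitude: {recovered_amplitude:.2f}")
```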
Environmental noise
Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments.
Noise reduction
Noise reduction can be accomplished either in computer hardware or software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble average, boxcar average, and correlation methods.
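As a rough illustration of ensemble averaging (all values hypothetical), averaging repeated noisy scans of the same signal reduces random noise roughly as the square root of the number of scans:

```python
# Minimal ensemble-averaging sketch: averaging N repeated scans of the same
# signal reduces random noise roughly by a factor of sqrt(N).
# All values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
true_signal = np.exp(-((t - 0.5) ** 2) / 0.005)   # a Gaussian "peak"

n_scans = 64
scans = true_signal + 0.5 * rng.standard_normal((n_scans, t.size))
averaged = scans.mean(axis=0)

noise_single = np.std(scans[0] - true_signal)
noise_avg = np.std(averaged - true_signal)
print(f"Noise reduced by a factor of {noise_single / noise_avg:.1f} "
      f"(sqrt(64) = 8 expected)")
```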
Applications
Analytical chemistry has applications including in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed), and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques where uses of optical cavities for increased effective absorption pathlength are expected to expand. The use of plasma- and laser-based methods is increasing. An interest towards absolute (standardless) analysis has revived, particularly in emission spectrometry.
Great effort is being put into shrinking the analysis techniques to chip size. Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. (micro total analysis system (μTAS) or lab-on-a-chip). Microscale chemistry reduces the amounts of chemicals used.
Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarray; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body; metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics - lipids and their associated fields; peptidomics - peptides and their associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules.
Analytical chemistry has played a critical role in the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on.
The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there is effort to automate larger sections of lab testing, such as in companies like Emerald Cloud Lab and Transcriptic.
Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations.
See also
Calorimeter
Clinical chemistry
Ion beam analysis
List of chemical analysis methods
Important publications in analytical chemistry
List of materials analysis methods
Measurement uncertainty
Metrology
Microanalysis
Nuclear reaction analysis
Quality of analytical results
Radioanalytical chemistry
Rutherford backscattering spectroscopy
Sensory analysis - in the field of Food science
Virtual instrumentation
Working range
References
Further reading
Gurdeep, Chatwal Anand (2008). Instrumental Methods of Chemical Analysis Himalaya Publishing House (India)
Ralph L. Shriner, Reynold C. Fuson, David Y. Curtin, Terence C. Morill: The systematic identification of organic compounds - a laboratory manual, Verlag Wiley, New York 1980, 6. edition, .
Bettencourt da Silva, R; Bulska, E; Godlewska-Zylkiewicz, B; Hedrich, M; Majcen, N; Magnusson, B; Marincic, S; Papadakis, I; Patriarca, M; Vassileva, E; Taylor, P; Analytical measurement: measurement uncertainty and statistics, 2012, .
External links
Infographic and animation showing the progress of analytical chemistry
aas Atomic Absorption Spectrophotometer
Materials science | Analytical chemistry | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,040 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
2,428 | https://en.wikipedia.org/wiki/Analog%20computer | An analog computer or analogue computer is a type of computation machine (computer) that uses physical phenomena such as electrical, mechanical, or hydraulic quantities behaving according to the mathematical principles in question (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals).
Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Complex mechanisms for process control and protective relays used analog computation to perform control and protective functions.
Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of an analog computer is a mechanical watch, where the continuous and periodic rotation of interlinked gears drives the second, minute and hour hands of the clock. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task.
Timeline of analog computers
Precursors
This is a list of examples of early computation devices considered precursors of the modern computers. Some of them may even have been dubbed 'computers' by the press, though they may fail to fit modern definitions.
The Antikythera mechanism, a type of device used to determine the positions of heavenly bodies known as an orrery, was described as an early mechanical analog computer by British physicist, information scientist, and historian of science Derek J. de Solla Price. It was discovered in 1901, in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to the Hellenistic period. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.
Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use.
The planisphere was first described by Ptolemy in the 2nd century AD. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.
The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.
The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft.
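A small illustration of the slide-rule principle (not a description of any particular instrument): multiplication is performed by adding lengths proportional to logarithms:

```python
# Illustrative sketch of the slide-rule principle: multiplication is carried
# out by adding lengths proportional to logarithms (example values).
import math

a, b = 2.0, 3.0
log_sum = math.log10(a) + math.log10(b)   # adding the two scale lengths
product = 10 ** log_sum                   # reading the result off the scale
print(f"{a} x {b} = {product:.1f}")       # 6.0
```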
In 1831–1835, mathematician and engineer Giovanni Plana devised a perpetual-calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length.
The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. Several systems followed, notably those of Spanish engineer Leonardo Torres Quevedo, who built various analog machines for solving real and complex roots of polynomials; and Michelson and Stratton, whose Harmonic Analyser performed Fourier analysis, but using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
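The following sketch is a digital analogy, not a description of any historical machine; it mimics the integrator-chaining idea by feeding the output of one integrator into the next to solve the simple equation x'' = -x:

```python
# Minimal sketch of the integrator-chaining idea behind a differential
# analyser, here solving x'' = -x digitally with two integrators in series.
# Step size and initial conditions are arbitrary example values.

dt = 0.001
x, v = 1.0, 0.0          # initial position and velocity

for _ in range(int(6.283 / dt)):   # roughly one period (2*pi seconds)
    a = -x                # the quantity fed to the first integrator
    v += a * dt           # first integrator: acceleration -> velocity
    x += v * dt           # second integrator: velocity -> position

print(f"x after one period is about {x:.3f} (exact solution: 1.0)")
```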
Modern era
The Dumaresq was a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy. It was an analog computer that related vital variables of the fire control problem to the movement of one's own ship and that of a target ship. It was often used with other devices, such as a Vickers range clock to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded.
By 1912, Arthur Pollen had developed an electrically driven mechanical analog computer for fire-control systems, based on the differential analyser. It was used by the Imperial Russian Navy in World War I.
Starting in 1929, AC network analyzers were constructed to solve calculation problems related to electrical power systems that were too large to solve with numerical methods at the time. These were essentially scale models of the electrical properties of the full-size system. Since network analyzers could handle problems too large for analytic methods or hand computation, they were also used to solve problems in nuclear physics and in the design of structures. More than 50 large network analyzers were built by the end of the 1950s.
World War II era gun directors, gun data computers, and bomb sights used mechanical analog computers. In 1942 Helmut Hölzer built a fully electronic analog computer at Peenemünde Army Research Center as an embedded control system (mixing device) to calculate V-2 rocket trajectories from the accelerations and orientations (measured by gyroscopes) and to stabilize and guide the missile. Mechanical analog computers were very important in gun fire control in World War II, the Korean War and well past the Vietnam War; they were made in significant numbers.
In the period 1930–1945 in the Netherlands, Johan van Veen developed an analogue computer to calculate and predict tidal currents when the geometry of the channels is changed. Around 1950, this idea was developed into the Deltar, a hydraulic analogy computer supporting the closure of estuaries in the southwest of the Netherlands (the Delta Works).
The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport. Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems. Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4,000 electron tubes and used 100 dials and 6,000 plug-in connectors to program. The MONIAC Computer was a hydraulic analogy of a national economy first unveiled in 1949.
Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the "Direct Analogy Electric Analog Computer" ("the largest and most impressive general-purpose analyzer facility for the solution of field problems") developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi.
Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company in the United States. It was programmed using patch cords that connected nine operational amplifiers and other components. General Electric also marketed an "educational" analog computer kit of a simple design in the early 1960s, consisting of two transistor tone generators and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed, depending on which dials were inputs and which was the output. Accuracy and resolution were limited, and a simple slide rule was more accurate. However, the unit did demonstrate the basic principle.
Analog computer designs were published in electronics magazines. One example is the PEAC (Practical Electronics analogue computer), published in Practical Electronics in the January 1968 edition. Another more modern hybrid computer design was published in Everyday Practical Electronics in 2002. An example described in the EPE hybrid computer was the flight of a VTOL aircraft such as the Harrier jump jet. The altitude and speed of the aircraft were calculated by the analog part of the computer and sent to a PC via a digital microprocessor and displayed on the PC screen.
In industrial process control, analog loop controllers were used to automatically regulate temperature, flow, pressure, or other process conditions. The technology of these controllers ranged from purely mechanical integrators, through vacuum-tube and solid-state devices, to emulation of analog controllers by microprocessors.
Electronic analog computers
The similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors is striking in terms of mathematics. They can be modeled using equations of the same form.
However, the difference between these systems is what makes analog computing useful. Complex systems often are not amenable to pen-and-paper analysis, and require some form of testing or simulation. Complex mechanical systems, such as suspensions for racing cars, are expensive to fabricate and hard to modify. And taking precise mechanical measurements during high-speed tests adds further difficulty.
By contrast, it is very inexpensive to build an electrical equivalent of a complex mechanical system, to simulate its behavior. Engineers arrange a few operational amplifiers (op amps) and some passive linear components to form a circuit that follows the same equations as the mechanical system being simulated. All measurements can be taken directly with an oscilloscope. In the circuit, the (simulated) stiffness of the spring, for instance, can be changed by adjusting the parameters of an integrator. The electrical system is an analogy to the physical system, hence the name, but it is much less expensive than a mechanical prototype, much easier to modify, and generally safer.
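The analogy can be made concrete by comparing the governing equations. A damped spring–mass system obeys m·x″ + d·x′ + k·x = F(t), while a series RLC circuit obeys L·q″ + R·q′ + (1/C)·q = V(t); under the correspondence m↔L, d↔R, k↔1/C and F↔V, one system can stand in for the other. The sketch below, using invented parameter values, integrates both equations with the same routine to show that they trace identical curves under that mapping.

    def simulate(m, d, k, force, dt=1e-3, steps=5000):
        # Generic second-order system: m*x'' + d*x' + k*x = force
        x, v = 0.0, 0.0
        samples = []
        for i in range(steps):
            a = (force - d * v - k * x) / m
            v += a * dt
            x += v * dt
            if i % 1000 == 0:
                samples.append(round(x, 6))
        return samples

    # Mechanical system: m = 2 kg, d = 0.5 N*s/m, k = 8 N/m, constant force 1 N (made-up values)
    mech = simulate(m=2.0, d=0.5, k=8.0, force=1.0)
    # Electrical analog: L = 2 H, R = 0.5 ohm, C = 0.125 F, source 1 V, mapped via m<->L, d<->R, k<->1/C
    L_, R_, C_ = 2.0, 0.5, 0.125
    elec = simulate(m=L_, d=R_, k=1.0 / C_, force=1.0)
    print(mech == elec)   # True: the two systems follow the same trajectory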
The electronic circuit can also be made to run faster or slower than the physical system being simulated. Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations.
Electronic analog computers are especially well-suited to representing situations described by differential equations. Historically, they were often used when a system of differential equations proved very difficult to solve by traditional means. As a simple example, the dynamics of a spring-mass system can be described by the equation m·y″ + d·y′ + k·y = m·g, with y as the vertical position of a mass m, d the damping coefficient, k the spring constant and g the gravity of Earth. For analog computing, the equation is programmed as y″ = −(d/m)·y′ − (k/m)·y + g. The equivalent analog circuit consists of two integrators for the state variables −y′ (speed) and y (position), one inverter, and three potentiometers.
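To see what the two cascaded integrators compute, the following sketch (with illustrative constants, not a real machine setup) steps the same equation numerically the way the analog circuit would: a summing junction forms y″, the first integrator turns y″ into y′, and the second turns y′ into y.

    def spring_mass(m=1.0, d=0.4, k=9.0, g=9.81, dt=1e-3, t_end=30.0):
        # State variables held on the two integrators: velocity y' and position y.
        y, y_dot = 0.0, 0.0
        t = 0.0
        while t < t_end:
            y_ddot = -(d / m) * y_dot - (k / m) * y + g   # the "summing junction"
            y_dot += y_ddot * dt                          # first integrator
            y += y_dot * dt                               # second integrator
            t += dt
        return y

    print(spring_mass())   # settles toward the static deflection m*g/k, about 1.09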
Electronic analog computers have drawbacks: the value of the circuit's supply voltage limits the range over which the variables may vary (since the value of a variable is represented by a voltage on a particular wire). Therefore, each problem must be scaled so its parameters and dimensions can be represented using voltages that the circuit can supply —e.g., the expected magnitudes of the velocity and the position of a spring pendulum. Improperly scaled variables can have their values "clamped" by the limits of the supply voltage. Or if scaled too small, they can suffer from higher noise levels. Either problem can cause the circuit to produce an incorrect simulation of the physical system. (Modern digital simulations are much more robust to widely varying values of their variables, but are still not entirely immune to these concerns: floating-point digital calculations support a huge dynamic range, but can suffer from imprecision if tiny differences of huge values lead to numerical instability.)
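A minimal illustration of amplitude scaling, with invented numbers: if the supply limits the machine to ±10 V and the position variable is expected to reach about 2 m, a scale factor of 5 V per metre keeps the voltage within range, while a poorly chosen factor clips the variable at the rails.

    SUPPLY_LIMIT_V = 10.0      # assumed reference/supply limit of the machine

    def to_machine_volts(position_m, volts_per_metre):
        v = position_m * volts_per_metre
        # An overdriven amplifier "clamps" at the supply rails,
        # silently corrupting the simulation.
        return max(-SUPPLY_LIMIT_V, min(SUPPLY_LIMIT_V, v))

    print(to_machine_volts(1.8, volts_per_metre=5.0))    # 9.0 V, within range
    print(to_machine_volts(1.8, volts_per_metre=50.0))   # clamped at 10.0 V: badly scaled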
The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures. (Modern digital simulations are much better in this area. Digital arbitrary-precision arithmetic can provide any desired degree of precision.) However, in most cases the precision of an analog computer is absolutely sufficient given the uncertainty of the model characteristics and its technical parameters.
Many small computers dedicated to specific computations are still part of industrial regulation equipment, but from the 1950s to the 1970s, general-purpose analog computers were the only systems fast enough for real time simulation of dynamic systems, especially in the aircraft, military and aerospace field.
In the 1960s, the major manufacturer was Electronic Associates of Princeton, New Jersey, with its 231R Analog Computer (vacuum tubes, 20 integrators) and subsequently its EAI 8800 Analog Computer (solid state operational amplifiers, 64 integrators). Its challenger was Applied Dynamics of Ann Arbor, Michigan.
Although the basic technology for analog computers is usually operational amplifiers (also called "continuous current amplifiers" because they have no low frequency limitation), in the 1960s an attempt was made in the French ANALAC computer to use an alternative technology: medium frequency carrier and non dissipative reversible circuits.
In the 1970s, every large company and administration concerned with problems in dynamics had an analog computing center, such as:
In the US: NASA (Huntsville, Houston), Martin Marietta (Orlando), Lockheed, Westinghouse, Hughes Aircraft
In Europe: CEA (French Atomic Energy Commission), MATRA, Aérospatiale, BAC (British Aircraft Corporation).
Construction
An analog computing machine consists of several main components:
Signal sources: These are blocks that generate analog signals, such as voltage or current, to represent input data and operations.
Amplifiers: Amplifiers are used to boost analog signals and maintain their amplitudes throughout the system. They amplify weak input signals and compensate for signal losses during transmission.
Filters: Filters are used to modify the spectrum of signals by suppressing or amplifying specific frequencies. They allow the isolation or suppression of certain signal components depending on the computational requirements.
Modulators and demodulators: Modulators convert information into analog signals that can be transmitted through a communication channel, and demodulators perform the reverse transformation, recovering the original data from modulated signals.
Adders, multipliers, log converters, and other calculation stages: These perform arithmetic operations on analog signals. They can be used for mathematical operations such as addition, multiplication, exponentiation, integration, and differentiation.
Storage and memory: Analog computing machines can use various forms of information storage, such as capacitors or inductors, to store intermediate results and memory.
Feedback and control: Feedback and control blocks are used to maintain the stability and accuracy of the analog computing machine. They may include regulation systems and error correction.
Patch panel: Analog computing machines also feature a patch panel or patch field. A patch panel is a physical structure on which connectors or contacts are placed to interconnect various components and modules within the system.
On the patch panel, various connections and routes can be set and switched to configure the machine and determine signal flows. This allows users to flexibly configure and reconfigure the analog computing system to perform specific tasks.
Patch panels are used to control data flows, connect and disconnect connections between various blocks of the system, including signal sources, amplifiers, filters, and other components. They provide convenience and flexibility in configuring and experimenting with analog computations.
Patch panels can be presented as a physical panel with connectors or, in more modern systems, as a software interface that allows virtual management of signal connections and routes.
Hardware interfaces: Interfaces provide means of interaction with the machine, for example, for parameter control or data transmission.
Output device: this device is designed to present the results of analog computations in a convenient form for the user or to transmit the obtained data to other systems.
Output devices in analog machines can vary depending on the specific goals of the system. For example, they could be graphical indicators, oscilloscopes, graphic recording devices, TV connection module, voltmeter, etc. These devices allow for the visualization of analog signals and the representation of the results of measurements or mathematical operations.
Power source and stabilizers.
These are just general blocks that can be found in a typical analog computing machine. The actual configuration and components may vary depending on the specific implementation and the intended use of the machine.
Analog–digital hybrids
Analog computing devices are fast; digital computing devices are more versatile and accurate. The idea behind an analog-digital hybrid is to combine the two processes for the best efficiency. An example of such hybrid elementary device is the hybrid multiplier, where one input is an analog signal, the other input is a digital signal and the output is analog. It acts as an analog potentiometer, upgradable digitally. This kind of hybrid technique is mainly used for fast dedicated real time computation when computing time is very critical, as signal processing for radars and generally for controllers in embedded systems.
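A rough numerical model of such a hybrid multiplier (assuming a hypothetical 8-bit resolution): the digital word sets an attenuation factor, and the analog input is scaled by it, much like a digitally programmed potentiometer.

    def hybrid_multiply(analog_in, digital_code, bits=8):
        # digital_code is an integer in [0, 2**bits - 1]; it plays the role of
        # the digitally set potentiometer position.
        if not 0 <= digital_code < 2 ** bits:
            raise ValueError("digital code out of range")
        return analog_in * (digital_code / (2 ** bits - 1))

    print(hybrid_multiply(4.0, 128))   # roughly half of 4.0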
In the early 1970s, analog computer manufacturers tried to tie their analog computers together with digital computers to get the advantages of the two techniques. In such systems, the digital computer controlled the analog computer, providing initial set-up, initiating multiple analog runs, and automatically feeding and collecting data. The digital computer may also participate in the calculation itself using analog-to-digital and digital-to-analog converters.
The largest manufacturer of hybrid computers was Electronic Associates. Their hybrid computer model 8900 was made of a digital computer and one or more analog consoles. These systems were mainly dedicated to large projects such as the Apollo program and Space Shuttle at NASA, or Ariane in Europe, especially during the integration step where at the beginning everything is simulated, and progressively real components replace their simulated parts.
Only one company was known as offering general commercial computing services on its hybrid computers, CISI of France, in the 1970s.
A notable benchmark in this field is the 100,000 simulation runs performed for each certification of the automatic landing systems of Airbus and Concorde aircraft.
After 1980, purely digital computers progressed more and more rapidly and were fast enough to compete with analog computers.
One key to the speed of analog computers was their fully parallel computation, but this was also a limitation. The more equations required for a problem, the more analog components were needed, even when the problem wasn't time critical. "Programming" a problem meant interconnecting the analog operators; even with a removable wiring panel this was not very versatile.
Implementations
Mechanical analog computers
While a wide variety of mechanisms have been developed throughout history, some stand out because of their theoretical importance, or because they were manufactured in significant quantities.
Most practical mechanical analog computers of any significant complexity used rotating shafts to carry variables from one mechanism to another. Cables and pulleys were used in a Fourier synthesizer, a tide-predicting machine, which summed the individual harmonic components. Another category, not nearly as well known, used rotating shafts only for input and output, with precision racks and pinions. The racks were connected to linkages that performed the computation. At least one U.S. Naval sonar fire control computer of the later 1950s, made by Librascope, was of this type, as was the principal computer in the Mk. 56 Gun Fire Control System.
Online, there is a remarkably clear illustrated reference (OP 1140) that describes the fire control computer mechanisms.
For adding and subtracting, precision miter-gear differentials were in common use in some computers; the Ford Instrument Mark I Fire Control Computer contained about 160 of them.
Integration with respect to another variable was done by a rotating disc driven by one variable. Output came from a pick-off device (such as a wheel) positioned at a radius on the disc proportional to the second variable. (A carrier with a pair of steel balls supported by small rollers worked especially well. A roller, its axis parallel to the disc's surface, provided the output. It was held against the pair of balls by a spring.)
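Numerically, the disc-and-wheel integrator behaves as follows (toy values, not a real mechanism's dimensions): each small rotation of the disc advances the pick-off wheel by an amount proportional to the wheel's current radial position, so the wheel's total rotation accumulates the integral of one variable with respect to the other.

    import math

    def disc_integrator(radius_fn, d_theta=1e-4, turns=1.0, wheel_radius=1.0):
        # radius_fn(theta) gives the pick-off position on the disc (the integrand)
        # as a function of disc rotation theta (the variable of integration).
        output = 0.0
        theta = 0.0
        while theta < 2 * math.pi * turns:
            output += radius_fn(theta) * d_theta / wheel_radius
            theta += d_theta
        return output

    # Integrate sin(theta) over one full disc turn: the result should be ~0.
    print(round(disc_integrator(math.sin), 6))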
Arbitrary functions of one variable were provided by cams, with gearing to convert follower movement to shaft rotation.
Functions of two variables were provided by three-dimensional cams. In one good design, one of the variables rotated the cam. A hemispherical follower moved its carrier on a pivot axis parallel to that of the cam's rotating axis. Pivoting motion was the output. The second variable moved the follower along the axis of the cam. One practical application was ballistics in gunnery.
Coordinate conversion from polar to rectangular was done by a mechanical resolver (called a "component solver" in US Navy fire control computers). Two discs on a common axis positioned a sliding block with pin (stubby shaft) on it. One disc was a face cam, and a follower on the block in the face cam's groove set the radius. The other disc, closer to the pin, contained a straight slot in which the block moved. The input angle rotated the latter disc (the face cam disc, for an unchanging radius, rotated with the other (angle) disc; a differential and a few gears did this correction).
Referring to the mechanism's frame, the location of the pin corresponded to the tip of the vector represented by the angle and magnitude inputs. Mounted on that pin was a square block.
Rectilinear-coordinate outputs (both sine and cosine, typically) came from two slotted plates, each slot fitting on the block just mentioned. The plates moved in straight lines, the movement of one plate at right angles to that of the other. The slots were at right angles to the direction of movement. Each plate, by itself, was like a Scotch yoke, known to steam engine enthusiasts.
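The computation performed by the component solver is simply the polar-to-rectangular conversion; a few lines of Python (with example inputs) make the relationship explicit.

    import math

    def component_solver(radius, angle_deg):
        # The pin sits at the tip of the input vector; the two slotted plates
        # pick off its rectangular projections.
        angle = math.radians(angle_deg)
        return radius * math.cos(angle), radius * math.sin(angle)

    print(component_solver(100.0, 30.0))   # (86.6..., 50.0...): the cosine and sine outputs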
During World War II, a similar mechanism converted rectilinear to polar coordinates, but it was not particularly successful and was eliminated in a significant redesign (USN, Mk. 1 to Mk. 1A).
Multiplication was done by mechanisms based on the geometry of similar right triangles. Using the trigonometric terms for a right triangle, specifically opposite, adjacent, and hypotenuse, the adjacent side was fixed by construction. One variable changed the magnitude of the opposite side. In many cases, this variable changed sign; the hypotenuse could coincide with the adjacent side (a zero input), or move beyond the adjacent side, representing a sign change.
Typically, a pinion-operated rack moving parallel to the (trig.-defined) opposite side would position a slide with a slot coincident with the hypotenuse. A pivot on the rack let the slide's angle change freely. At the other end of the slide (the angle, in trig. terms), a block on a pin fixed to the frame defined the vertex between the hypotenuse and the adjacent side.
At any distance along the adjacent side, a line perpendicular to it intersects the hypotenuse at a particular point. The distance between that point and the adjacent side is some fraction that is the product of (1) the distance from the vertex and (2) the magnitude of the opposite side.
The second input variable in this type of multiplier positions a slotted plate perpendicular to the adjacent side. That slot contains a block, and that block's position in its slot is determined by another block right next to it. The latter slides along the hypotenuse, so the two blocks are positioned at a distance from the (trig.) adjacent side by an amount proportional to the product.
To provide the product as an output, a third element, another slotted plate, also moves parallel to the (trig.) opposite side of the theoretical triangle. As usual, the slot is perpendicular to the direction of movement. A block in its slot, pivoted to the hypotenuse block, positions it.
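Stripped of the linkages, the geometry being exploited is just that of similar triangles: if the adjacent side has fixed construction length A and one input sets the opposite side to b, the hypotenuse has slope b/A, so a perpendicular erected at distance a along the adjacent side meets it at height a·b/A, which is proportional to the product. A short sketch with made-up lengths:

    def similar_triangle_multiply(a, b, adjacent=10.0):
        # adjacent is the fixed construction length A; the mechanism's output
        # is the height of the hypotenuse at distance a from the vertex.
        slope = b / adjacent          # set by the first input (opposite side)
        height = a * slope            # picked off by the second input's slotted plate
        return height                 # equals a*b/adjacent, i.e. the scaled product

    print(similar_triangle_multiply(4.0, 6.0))   # 2.4, i.e. (4*6)/10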
A special type of integrator, used at a point where only moderate accuracy was needed, was based on a steel ball, instead of a disc. It had two inputs, one to rotate the ball, and the other to define the angle of the ball's rotating axis. That axis was always in a plane that contained the axes of two movement pick-off rollers, quite similar to the mechanism of a rolling-ball computer mouse (in that mechanism, the pick-off rollers were roughly the same diameter as the ball). The pick-off roller axes were at right angles.
A pair of rollers "above" and "below" the pick-off plane were mounted in rotating holders that were geared together. That gearing was driven by the angle input, and established the rotating axis of the ball. The other input rotated the "bottom" roller to make the ball rotate.
Essentially, the whole mechanism, called a component integrator, was a variable-speed drive with one motion input and two outputs, as well as an angle input. The angle input varied the ratio (and direction) of coupling between the "motion" input and the outputs according to the sine and cosine of the input angle.
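In effect, the ball mechanism resolves an incremental motion input into two outputs weighted by the sine and cosine of the angle input. A numerical caricature of that behaviour, with arbitrary example inputs:

    import math

    def component_integrator(motion_increments, angles_deg):
        # Each motion increment is split between the two pick-off rollers
        # according to the instantaneous angle of the ball's spin axis.
        out_x = out_y = 0.0
        for dm, ang in zip(motion_increments, angles_deg):
            out_x += dm * math.cos(math.radians(ang))
            out_y += dm * math.sin(math.radians(ang))
        return out_x, out_y

    # Constant angle of 45 degrees: both outputs accumulate equally.
    print(component_integrator([0.1] * 10, [45.0] * 10))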
Although they did not accomplish any computation, electromechanical position servos (aka. torque amplifiers) were essential in mechanical analog computers of the "rotating-shaft" type for providing operating torque to the inputs of subsequent computing mechanisms, as well as driving output data-transmission devices such as large torque-transmitter synchros in naval computers.
Other readout mechanisms, not directly part of the computation, included internal odometer-like counters with interpolating drum dials for indicating internal variables, and mechanical multi-turn limit stops.
Considering that accurately controlled rotational speed in analog fire-control computers was a basic element of their accuracy, there was a motor with its average speed controlled by a balance wheel, hairspring, jeweled-bearing differential, a twin-lobe cam, and spring-loaded contacts (ship's AC power frequency was not necessarily accurate, nor dependable enough, when these computers were designed).
Electronic analog computers
Electronic analog computers typically have front panels with numerous jacks (single-contact sockets) that permit patch cords (flexible wires with plugs at both ends) to create the interconnections that define the problem setup. In addition, there are precision high-resolution potentiometers (variable resistors) for setting up (and, when needed, varying) scale factors. There is also usually a zero-center analog pointer-type meter for modest-accuracy voltage measurement. Stable, accurate voltage sources provide known magnitudes.
Typical electronic analog computers contain anywhere from a few to a hundred or more operational amplifiers ("op amps"), named because they perform mathematical operations. Op amps are a particular type of feedback amplifier with very high gain and stable input (low and stable offset). They are always used with precision feedback components that, in operation, all but cancel out the currents arriving from input components. The majority of op amps in a representative setup are summing amplifiers, which add and subtract analog voltages, providing the result at their output jacks. As well, op amps with capacitor feedback are usually included in a setup; they integrate the sum of their inputs with respect to time.
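Idealised relations for the two workhorse configurations (component values below are placeholders): a summing amplifier produces the negated, weighted sum of its inputs, Vout = −Σ(Rf/Ri)·Vi, and an integrator with capacitor feedback produces Vout = −(1/RC)·∫Vin dt. The snippet evaluates both ideal relations.

    def summing_amplifier(inputs, input_resistors, feedback_resistor):
        # Ideal op amp: virtual ground at the summing junction, so input currents add.
        return -sum((feedback_resistor / r) * v for v, r in zip(inputs, input_resistors))

    def integrator_output(v_in_samples, dt, r, c):
        # Ideal op-amp integrator: Vout = -(1/RC) * integral of Vin dt.
        return -(1.0 / (r * c)) * sum(v * dt for v in v_in_samples)

    print(summing_amplifier([1.0, -2.0], [10e3, 20e3], 10e3))           # -(1.0 - 1.0) = 0.0
    print(integrator_output([1.0] * 1000, dt=1e-3, r=100e3, c=10e-6))   # -1.0 V after 1 s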
Integrating with respect to another variable is the nearly exclusive province of mechanical analog integrators; it is almost never done in electronic analog computers. However, given that a problem solution does not change with time, time can serve as one of the variables.
Other computing elements include analog multipliers, nonlinear function generators, and analog comparators.
Electrical elements such as inductors and capacitors used in electrical analog computers had to be carefully manufactured to reduce non-ideal effects. For example, in the construction of AC power network analyzers, one motive for using higher frequencies for the calculator (instead of the actual power frequency) was that higher-quality inductors could be more easily made. Many general-purpose analog computers avoided the use of inductors entirely, re-casting the problem in a form that could be solved using only resistive and capacitive elements, since high-quality capacitors are relatively easy to make.
The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a speed determined mostly by the frequency response of the operational amplifiers and other computing elements. In the history of electronic analog computers, there were some special high-speed types.
Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators—special circuits of various combinations of resistors and diodes to provide the nonlinearity. Typically, as the input voltage increases, progressively more diodes conduct.
When compensated for temperature, the forward voltage drop of a transistor's base-emitter junction can provide a usably accurate logarithmic or exponential function. Op amps scale the output voltage so that it is usable with the rest of the computer.
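A software caricature of a diode function generator, with breakpoints and slopes chosen arbitrarily: as the input rises, successive segments "conduct", and the piecewise-linear sum approximates a nonlinear curve such as a square law.

    def function_generator(x, breakpoints=(0.0, 1.0, 2.0, 3.0), slopes=(1.0, 2.0, 2.0, 2.0)):
        # Each (breakpoint, slope) pair stands in for one diode/resistor segment;
        # a segment contributes only once the input exceeds its breakpoint.
        y = 0.0
        for bp, s in zip(breakpoints, slopes):
            if x > bp:
                y += s * (x - bp)
        return y

    # Crude approximation of x**2 on [0, 4]: exact at the breakpoints, slightly high between them.
    for x in (1.0, 2.0, 3.5):
        print(x, round(function_generator(x), 2), round(x ** 2, 2))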
Any physical process that models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in Dewdney (1984).
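The spaghetti "computer" sorts in the following sense: cut one rod per number, stand the rods on a table, and repeatedly remove the tallest rod a flat hand touches first. A direct simulation of that procedure, with arbitrary example data:

    def spaghetti_sort(values):
        # Each value becomes a rod length; lowering a flat hand onto the bundle
        # always meets the tallest remaining rod first.
        rods = list(values)
        result = []
        while rods:
            tallest = max(rods)      # the rod the hand touches first
            result.append(tallest)
            rods.remove(tallest)
        return result                # descending order

    print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))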
Components
Analog computers often have a complicated framework, but they have, at their core, a set of key components that perform the calculations. The operator manipulates these through the computer's framework.
Key hydraulic components might include pipes, valves and containers.
Key mechanical components might include rotating shafts for carrying data within the computer, miter gear differentials, disc/ball/roller integrators, cams (2-D and 3-D), mechanical resolvers and multipliers, and torque servos.
Key electrical/electronic components might include:
precision resistors and capacitors
operational amplifiers
multipliers
potentiometers
fixed-function generators
The core mathematical operations used in an electric analog computer are:
addition
integration with respect to time
inversion
multiplication
exponentiation
logarithm
division
In some analog computer designs, multiplication is much preferred to division. Division is carried out with a multiplier in the feedback path of an operational amplifier.
Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability.
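A quick numerical illustration of why differentiation is avoided (the signal and noise levels are synthetic and chosen only for illustration): a small amount of high-frequency noise barely perturbs an integral but dominates a finite-difference derivative.

    import math
    import random

    random.seed(0)
    dt = 1e-3
    t = [i * dt for i in range(1000)]
    clean = [math.sin(2 * math.pi * x) for x in t]
    noisy = [c + random.gauss(0.0, 0.01) for c in clean]   # 1% additive noise

    integral_err = abs(sum(n - c for n, c in zip(noisy, clean)) * dt)
    deriv_err = max(abs((noisy[i + 1] - noisy[i]) / dt - (clean[i + 1] - clean[i]) / dt)
                    for i in range(len(t) - 1))

    print(integral_err)   # tiny: integration averages the noise away
    print(deriv_err)      # tens of units: differentiation amplifies the noise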
Limitations
In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include the operational amplifier offset, finite gain, and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, ranges of these aspects of input and output signals are always figures of merit.
Decline
From the 1950s to the 1970s, digital computers based first on vacuum tubes and transistors, and then on integrated circuits and microprocessors, became more economical and precise. This led digital computers to largely replace analog computers. Even so, some research in analog computation is still being done. A few universities still use analog computers to teach control system theory. The American company Comdyna manufactured small analog computers. At Indiana University Bloomington, Jonathan Mills has developed the Extended Analog Computer based on sampling voltages in a foam sheet. At the Harvard Robotics Laboratory, analog computation is a research topic. Lyric Semiconductor's error correction circuits use analog probabilistic signals. Slide rules are still used as flight computers in flight training.
Resurgence
With the development of very-large-scale integration (VLSI) technology, Yannis Tsividis' group at Columbia University has been revisiting analog/hybrid computer design in a standard CMOS process. Two VLSI chips have been developed: an 80th-order analog computer (250 nm) by Glenn Cowan in 2005 and a 4th-order hybrid computer (65 nm) developed by Ning Guo in 2015, both targeting energy-efficient ODE/PDE applications. Cowan's chip contains 16 macros, in which there are 25 analog computing blocks, namely integrators, multipliers, fanouts, and a few nonlinear blocks. Guo's chip contains one macro block, in which there are 26 computing blocks including integrators, multipliers, fanouts, ADCs, SRAMs and DACs. Arbitrary nonlinear function generation is made possible by the ADC+SRAM+DAC chain, where the SRAM block stores the nonlinear function data. The experiments from the related publications revealed that VLSI analog/hybrid computers demonstrated about one to two orders of magnitude of advantage in both solution time and energy while achieving accuracy within 5%, which points to the promise of using analog/hybrid computing techniques in the area of energy-efficient approximate computing. In 2016, a team of researchers developed a compiler to solve differential equations using analog circuits.
Analog computers are also used in neuromorphic computing, and in 2021 a group of researchers showed that a specific type of artificial neural network called a spiking neural network was able to work with analog neuromorphic computers.
In 2021, the German company anabrid GmbH began to produce THE ANALOG THING (abbreviated THAT), a small low-cost analog computer mainly for educational and scientific use. The company is also constructing analog mainframes and hybrid computers.
Practical examples
These are examples of analog computers that have been constructed or practically used:
Analog Paradim, a modular analog computer produced by anabrid
Boeing B-29 Superfortress Central Fire Control System
Deltar
E6B flight computer
Ishiguro Storm Surge Computer
Kerrison Predictor
Leonardo Torres y Quevedo's Analogue Calculating Machines based on "fusee sans fin"
Librascope, aircraft weight and balance computer
Mechanical computer
Mechanical watch
Mechanical integrators, for example, the planimeter
Mischgerät (V-2 guidance computer)
MONIAC, economic modelling
Nomogram
Norden bombsight
Rangekeeper, and related fire control computers
Scanimate
SR-71 inlet control system (fast adjustment of inlet geometry to prevent supersonic shock waves from causing engine flame-out at high Mach numbers)
THE ANALOG THING, a small analog computer by anabrid
Torpedo Data Computer
Torquetum
Water integrator
Analog (audio) synthesizers can also be viewed as a form of analog computer, and their technology was originally based in part on electronic analog computer technology. The ARP 2600's Ring Modulator was actually a moderate-accuracy analog multiplier.
The Simulation Council (or Simulations Council) was an association of analog computer users in US. It is now known as The Society for Modeling and Simulation International. The Simulation Council newsletters from 1952 to 1963 are available online and show the concerns and technologies at the time, and the common use of analog computers for missilry.
See also
Analog neural network
Analogical models
Chaos theory
Differential equation
Dynamical system
Field-programmable analog array
General purpose analog computer
Lotfernrohr 7 series of WW II German bombsights
Signal (electrical engineering)
Voskhod Spacecraft "Globus" IMP navigation instrument
XY-writer
Notes
References
A.K. Dewdney. "On the Spaghetti Computer and Other Analog Gadgets for Problem Solving", Scientific American, 250(6):19–26, June 1984. Reprinted in The Armchair Universe, by A.K. Dewdney, published by W.H. Freeman & Company (1988), .
Universiteit van Amsterdam Computer Museum. (2007). Analog Computers.
Jackson, Albert S., "Analog Computation". London & New York: McGraw-Hill, 1960.
External links
Biruni's eight-geared lunisolar calendar in "Archaeology: High tech from Ancient Greece", François Charette, Nature 444, 551–552(30 November 2006),
The first computers
Large collection of electronic analog computers with lots of pictures, documentation and samples of implementations (some in German)
Large collection of old analog and digital computers at Old Computer Museum
A great disappearing act: the electronic analogue computer Chris Bissell, The Open University, Milton Keynes, UK Accessed February 2007
German computer museum with still runnable analog computers
Analog computer basics
Harvard Robotics Laboratory Analog Computation
The Enns Power Network Computer – an analog computer for the analysis of electric power systems (advertisement from 1955)
Librascope Development Company – Type LC-1 WWII Navy PV-1 "Balance Computor"
History of computing hardware
Greek inventions | Analog computer | [
"Technology"
] | 7,751 | [
"History of computing hardware",
"History of computing"
] |
2,443 | https://en.wikipedia.org/wiki/Acceleration | In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes:
the net balance of all external forces acting onto that object — magnitude is directly proportional to this net resulting force;
that object's mass, depending on the materials out of which it is made — magnitude is inversely proportional to the object's mass.
The SI unit for acceleration is the metre per second squared (m/s², or m·s⁻²).
For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the acceleration produced is called radial (or centripetal during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative, if the movement is unidimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their relative (differential) velocity is neutralised in reference to the acceleration due to change in speed.
Definition and properties
Average acceleration
An object's average acceleration over a period of time is its change in velocity, Δv, divided by the duration of the period, Δt. Mathematically, a_avg = Δv / Δt.
Instantaneous acceleration
Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: a = dv/dt.
As acceleration is defined as the derivative of velocity, v, with respect to time and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t: a = dv/dt = d²x/dt².
(Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.)
By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function a(t) is the velocity function v(t); that is, the area under the curve of an acceleration vs. time (a vs. t) graph corresponds to the change of velocity: Δv = ∫ a dt.
Likewise, the integral of the jerk function j(t), the derivative of the acceleration function, can be used to find the change of acceleration at a certain time: Δa = ∫ j dt.
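For instance, given velocity samples, acceleration can be estimated as the finite-difference analogue of dv/dt; the short sketch below uses made-up velocity data.

    def average_acceleration(v_start, v_end, duration):
        return (v_end - v_start) / duration

    def instantaneous_acceleration(velocity_samples, dt):
        # Finite-difference approximation of a = dv/dt between adjacent samples.
        return [(v2 - v1) / dt for v1, v2 in zip(velocity_samples, velocity_samples[1:])]

    print(average_acceleration(0.0, 27.8, 10.0))                        # ~2.8 m/s^2 (0 to ~100 km/h in 10 s)
    print(instantaneous_acceleration([0.0, 0.5, 2.0, 4.5], dt=1.0))     # [0.5, 1.5, 2.5]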
Units
Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L·T⁻². The SI unit of acceleration is the metre per second squared (m·s⁻²); or "metre per second per second", as the velocity in metres per second changes by the acceleration value, every second.
Other forms
An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration.
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer.
In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law): F = m·a,
where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large.
Tangential and centripetal acceleration
The velocity vector of a particle moving on a curved path as a function of time can be written as v(t) = v(t)·u_t(t),
with the scalar v(t) equal to the speed of travel along the path, and
u_t a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed v(t) and the changing direction of u_t, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as a = (dv/dt)·u_t + (v²/r)·u_n,
where u_n is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and r is its instantaneous radius of curvature based upon the osculating circle at time t. The components
a_t = (dv/dt)·u_t and a_c = (v²/r)·u_n are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively.
Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas.
Special cases
Uniform acceleration
Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period.
A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g (also called acceleration due to gravity). By Newton's Second Law the force F acting on a body is given by: F = m·g.
Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed:
s(t) = s0 + v0·t + (1/2)·a·t²
v(t) = v0 + a·t
v(t)² = v0² + 2·a·(s(t) − s0)
where
t is the elapsed time,
s0 is the initial displacement from the origin,
s(t) is the displacement from the origin at time t,
v0 is the initial velocity,
v(t) is the velocity at time t, and
a is the uniform rate of acceleration.
In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth.
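As a worked check of these relations (free-fall values assumed, air resistance ignored), the snippet below evaluates the constant-acceleration formulas and verifies that v² = v0² + 2·a·(s − s0) holds.

    def displacement(s0, v0, a, t):
        return s0 + v0 * t + 0.5 * a * t * t

    def velocity(v0, a, t):
        return v0 + a * t

    g = 9.81           # magnitude of free-fall acceleration, m/s^2
    t = 2.0            # seconds of fall from rest
    s = displacement(0.0, 0.0, g, t)      # ~19.62 m fallen
    v = velocity(0.0, g, t)               # ~19.62 m/s
    print(s, v)
    print(abs(v ** 2 - (0.0 + 2 * g * s)) < 1e-9)   # consistency of the third formula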
Circular motion
In uniform circular motion, that is moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, respectively orthogonal to the radius in this point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent in the neighbouring point, thereby rotating the velocity vector along the circle.
For a given speed v, the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius r of the circle, and increases as the square of this speed: a_c = v²/r.
For a given angular velocity ω, the centripetal acceleration is directly proportional to the radius r. This is due to the dependence of the velocity v on the radius r: v = ω·r.
Expressing the centripetal acceleration vector in polar components, where r is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields a_c = −(v²/|r|²)·r.
As usual in rotations, the speed v of a particle may be expressed as an angular speed ω with respect to a point at the distance |r| as v = ω·|r|.
Thus a_c = −ω²·r.
This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a so-called pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion.
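A numerical check of a_c = v²/r = ω²·r, with the radius and angular speed chosen arbitrarily: differentiating the position of a point moving on a circle twice recovers an acceleration of exactly that magnitude, directed toward the centre.

    import math

    r, omega, dt = 2.0, 3.0, 1e-5          # radius (m), angular speed (rad/s), time step (s)

    def pos(t):
        return (r * math.cos(omega * t), r * math.sin(omega * t))

    t0 = 0.7
    # Second-order central difference for the acceleration vector.
    ax = (pos(t0 + dt)[0] - 2 * pos(t0)[0] + pos(t0 - dt)[0]) / dt ** 2
    ay = (pos(t0 + dt)[1] - 2 * pos(t0)[1] + pos(t0 - dt)[1]) / dt ** 2

    print(math.hypot(ax, ay))               # ~18.0, i.e. omega**2 * r
    print(omega ** 2 * r, (omega * r) ** 2 / r)   # both also 18.0: omega**2*r equals v**2/r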
In a nonuniform circular motion, i.e., the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which directs to the center of the osculating circle, that determines the radius r for the centripetal acceleration. The tangential component is given by the angular acceleration α, i.e., the rate of change of the angular speed ω times the radius r. That is, a_t = r·α.
The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration (), and the tangent is always directed at right angles to the radius vector.
Coordinate systems
In multi-dimensional Cartesian coordinate systems, acceleration is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, the corresponding acceleration components are defined as a_x = dv_x/dt and a_y = dv_y/dt. The two-dimensional acceleration vector is then defined as a = (a_x, a_y). The magnitude of this vector is found by the distance formula as |a| = √(a_x² + a_y²). In three-dimensional systems where there is an additional z-axis, the corresponding acceleration component is defined as a_z = dv_z/dt. The three-dimensional acceleration vector is defined as a = (a_x, a_y, a_z), with its magnitude determined by |a| = √(a_x² + a_y² + a_z²).
Relation to relativity
Special relativity
The special theory of relativity describes the behaviour of objects travelling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations.
As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it.
General relativity
Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating.
Conversions
See also
Acceleration (differential geometry)
Four-vector: making the connection between space and time explicit
Gravitational acceleration
Inertia
Orders of magnitude (acceleration)
Shock (mechanics)
Shock and vibration data logger measuring 3-axis acceleration
Space travel using constant acceleration
Specific force
References
External links
Acceleration Calculator Simple acceleration unit converter
Dynamics (mechanics)
Kinematic properties
Vector physical quantities | Acceleration | [
"Physics",
"Mathematics"
] | 2,292 | [
"Physical phenomena",
"Mechanical quantities",
"Physical quantities",
"Acceleration",
"Quantity",
"Classical mechanics",
"Motion (physics)",
"Kinematic properties",
"Dynamics (mechanics)",
"Vector physical quantities",
"Wikipedia categories named after physical quantities"
] |
2,457 | https://en.wikipedia.org/wiki/Apoptosis | Apoptosis (from ) is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses 50 to 70 billion cells each day due to apoptosis. For the average human child between 8 and 14 years old, each day the approximate loss is 20 to 30 billion cells.
In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them.
Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately.
In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis.
Discovery and etymology
German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at the University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz.
For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death.
The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis.
In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent and one with the second p pronounced. In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc.
In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation:
We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" () is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid.
Activation mechanisms
The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC).
A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell death. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single cell fluctuations have been observed in experimental studies of stress induced apoptosis.
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which also can cause apoptosis via a calcium binding protease calpain.
Intrinsic pathway
The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis.
During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers of Bax/Bak inserted into the outer membrane. Once cytochrome c is released it binds with Apoptotic protease activating factor – 1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase into the effector caspase-3.
Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to proteins that inhibit apoptosis (IAPs) thereby deactivating them, and preventing the IAPs from arresting the process and therefore allowing apoptosis to proceed. IAP also normally suppresses the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability.
Extrinsic pathway
Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals.
TNF pathway
TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis.
Fas pathway
The Fas receptor (first apoptosis signal), also known as Apo-1 or CD95, is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8.
Common components
Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-xL and Bcl-2) members of the Bcl-2 family is established. This balance determines the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. How proapoptotic proteins are controlled under normal conditions in nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family.
Caspases
Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent, aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11, and 12) and effector caspases (caspases 3, 6, and 7). The activation of initiator caspases requires binding to specific oligomeric activator proteins. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program.
Caspase-independent apoptotic pathway
There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor).
Apoptosis model in amphibians
The frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis. Iodine and thyroxine stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins during amphibian metamorphosis, and stimulate the remodeling of the nervous system, transforming the aquatic, herbivorous tadpole into a terrestrial, carnivorous frog.
Negative regulators of apoptosis
Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors to evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators, categorized either as antiapoptotic factors, such as IAPs and Bcl-2 proteins, or as prosurvival factors, such as cFLIP, BNIP3, FADD, Akt, and NF-κB.
Proteolytic caspase cascade: Killing the cell
Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives a stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized; mRNA decay is triggered very early in apoptosis.
A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include:
Cell shrinkage and rounding occur because of the retraction of lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases.
The cytoplasm appears dense, and the organelles appear tightly packed.
Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis.
The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA.
Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments of regularly spaced sizes. These give a characteristic "laddered" appearance on an agarose gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death.
Apoptotic cell disassembly
Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly:
Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs. Later these can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho associated coiled-coil-containing protein kinase 1).
Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia.
Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes.
Removal of dead cells
The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis.
Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation.
Pathway knock-outs
Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the resulting phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase-9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene trap strategy was used to generate an APAF-1 -/- mouse; this approach disrupts gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brains of the embryos showed several structural changes. APAF-1-deficient cells are protected from apoptotic stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons.
The caspase proteins are integral parts of the apoptosis pathway, so it follows that knock-outs have varying damaging results. A caspase 9 knock-out leads to a severe brain malformation. A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, and related receptors, but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. A remarkable feature of these knock-out mice is that they have a very restricted phenotype: Casp3, Casp9, and APAF-1 knock-out mice have deformations of neural tissue, and FADD and Casp8 knock-outs showed defective heart development; in both types of knock-out, however, other organs developed normally and some cell types were still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist.
Methods for distinguishing apoptotic from necrotic cells
Label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can distinguish primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include the absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptotic from necroptotic cells can be found in these references.
Implication in disease
Defective pathways
The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept overlying each one is the same: The normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased.
A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAP binds to the processed form of caspase-9 and suppresses the activity of the apoptotic activator cytochrome c; its overexpression therefore leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favor of the former, and the damaged cells continue to replicate despite being directed to die. Defects in the regulation of apoptosis in cancer cells often occur at the level of control of transcription factors. As a particular example, defects in molecules that control the transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, curtailing dependence on the tissue to which the cell belongs. This degree of independence from external survival signals can enable cancer metastasis.
Dysregulation of p53
The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in the increase of p53 protein level and enhancement of cancer cell-apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors.
Inhibition
Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis").
HeLa cell
Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring.
Treatments
The main method of treatment for diseases arising from defective apoptotic signaling involves either increasing or decreasing the susceptibility of diseased cells to apoptosis, depending on whether the disease is caused by inhibited or excessive apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death and to increase the apoptotic threshold to treat diseases involving excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitors (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, disrupting p53-MDM2 complexes releases p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway.
Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis.
Hyperactive apoptosis
On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated.
At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM.
Treatments
Treatments aiming to inhibit apoptosis work by blocking specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold and dissociate from Bcl-2, thus promoting cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type.
HIV progression
The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways:
HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis.
HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis.
HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane.
Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells.
HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue.
The infected CD4+ cell may also receive the death signal from a cytotoxic T cell.
Cells may also die as direct consequences of viral infection. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; HIV infection is classified as AIDS once a given patient's CD4+ cell count falls below 200 cells per cubic millimeter.
Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound heptanoylphosphatidyl L-inositol pentakisphosphate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap the HIV virus in the cell and allow the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team must conduct further research on combining currently existing drug therapy with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV.
Viral infection
Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and for cell cycle maturation. It is also important in maintaining the regular functions and activities of cells.
Viruses can trigger apoptosis of infected cells via a range of mechanisms including:
Receptor binding
Activation of protein kinase R (PKR)
Interaction with p53
Expression of viral proteins coupled to MHC proteins on the surface of the infected cell, allowing recognition by cells of the immune system (such as natural killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis.
Canine distemper virus (CDV) is known to cause apoptosis in central nervous system and lymphoid tissue of infected dogs in vivo and in vitro.
Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein, followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. This change in the caspase cascade suggests that CDV induces apoptosis in HeLa cells via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by internal stimuli caused by the viral infection, not by a caspase cascade.
The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that the La Crosse virus induced apoptosis in the kidney cells of baby hamsters and in the brains of baby mice.
OROV is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes a febrile illness, characterized by the sudden onset of fever, known as Oropouche fever.
The Oropouche virus also causes disruption in cultured cells – cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected.
With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. This can be assessed by counting, measuring, and analyzing the cells of the sub-G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the membrane of the mitochondria into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway.
In order for OROV to cause apoptosis, viral uncoating, viral internalization, and replication within the cell are necessary. Apoptosis in some viruses is activated by extracellular stimuli. However, studies have demonstrated that OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria.
Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity; an example is the CrmA protein of cowpox viruses. A number of viruses can also block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function.
Viruses can remain intact during apoptosis, particularly in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favors the spread of the virus. Prions can cause apoptosis in neurons.
Plants
Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Additionally, plants do not contain phagocytic cells, which are essential in the process of breaking down and removing apoptotic bodies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear.
Caspase-independent apoptosis
The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. In order to be released, the protein is cleaved by a calcium-dependent calpain protease.
See also
Anoikis
Apaf-1
Apo2.7
Apoptotic DNA fragmentation
Atromentin induces apoptosis in human leukemia U937 cells.
Autolysis
Autophagy
Cisplatin
Cytotoxicity
Entosis
Ferroptosis
Homeostasis
Immunology
Necrobiosis
Necrosis
Necrotaxis
Nemosis
Mitotic catastrophe
p53
Paraptosis
Pseudoapoptosis
PI3K/AKT/mTOR pathway
Explanatory footnotes
Citations
General bibliography
External links
Apoptosis & Caspase 3, The Proteolysis Map – animation
Apoptosis & Caspase 8, The Proteolysis Map – animation
Apoptosis & Caspase 7, The Proteolysis Map – animation
Apoptosis MiniCOPE Dictionary – list of apoptosis terms and acronyms
Apoptosis (Programmed Cell Death) – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Apoptosis Research Portal
Apoptosis Info Apoptosis protocols, articles, news, and recent publications.
Database of proteins involved in apoptosis
Apoptosis Video
Apoptosis Video (WEHI on YouTube)
The Mechanisms of Apoptosis Kimball's Biology Pages. Simple explanation of the mechanisms of apoptosis triggered by internal signals (bcl-2), along the caspase-9, caspase-3 and caspase-7 pathway; and by external signals (FAS and TNF), along the caspase 8 pathway. Accessed 25 March 2007.
WikiPathways – Apoptosis pathway
"Finding Cancer's Self-Destruct Button". CR magazine (Spring 2007). Article on apoptosis and cancer.
Xiaodong Wang's lecture: Introduction to Apoptosis
Robert Horvitz's Short Clip: Discovering Programmed Cell Death
The Bcl-2 Database
DeathBase: a database of proteins involved in cell death, curated by experts
European Cell Death Organization
Apoptosis signaling pathway created by Cusabio
Cell signaling
Cellular senescence
Immunology
Medical aspects of death
Programmed cell death | Apoptosis | [
"Chemistry",
"Biology"
] | 9,058 | [
"Signal transduction",
"Senescence",
"Cellular senescence",
"Immunology",
"Cellular processes",
"Apoptosis",
"Programmed cell death"
] |
2,460 | https://en.wikipedia.org/wiki/Anal%20sex | Anal sex or anal intercourse is generally the insertion and thrusting of the erect penis into a person's anus, or anus and rectum, for sexual pleasure. Other forms of anal sex include anal fingering, the use of sex toys, anilingus, pegging, as well as electrostimulation and erotic torture such as figging. Although anal sex most commonly means penile–anal penetration, sources sometimes use anal intercourse to exclusively denote penile–anal penetration, and anal sex to denote any form of anal sexual activity, especially between pairings as opposed to anal masturbation.
Most homosexual men report engaging in anal sex, though other types of sexual behaviour are more frequently practised in this group. Among heterosexual couples, anal sex is probably not uncommon and may be becoming more prevalent. Types of anal sex can also be a part of lesbian sexual practices. People may experience pleasure from anal sex by stimulation of the anal nerve endings, and orgasm may be achieved through anal penetration – by indirect stimulation of the prostate in men, indirect stimulation of the clitoris or an area of the vagina (sometimes called the G-spot) in women, and other sensory nerves (especially the pudendal nerve). However, people may also find anal sex painful, sometimes extremely so, which may be due to psychological factors in some cases.
As with most forms of sexual activity, anal sex participants risk contracting sexually transmitted infections (STIs). Anal sex is considered a high-risk sexual practice because of the vulnerability of the anus and rectum. The anal and rectal tissues are delicate and, unlike the vagina, do not provide lubrication. They can easily tear and permit disease transmission, especially if a personal lubricant is not used. Anal sex without the protection of a condom is considered the riskiest form of sexual activity, and therefore health authorities such as the World Health Organization (WHO) recommend safe sex practices for anal sex.
Strong views are often expressed about anal sex. It is controversial in various cultures, often because of religious prohibitions against anal sex among males or teachings about the procreative purpose of sexual activity. It may be considered taboo or unnatural, and is a criminal offense in some countries, punishable by corporal or capital punishment. By contrast, anal sex may also be considered a natural and valid form of sexual activity as fulfilling as other desired sexual expressions, and can be an enhancing or primary element of a person's sex life.
Anatomy and stimulation
The abundance of nerve endings in the anal region and rectum can make anal sex pleasurable for men and women. The internal and external sphincter muscles control the opening and closing of the anus; these muscles, which are sensitive membranes made up of many nerve endings, facilitate pleasure or pain during anal sex. Human Sexuality: An Encyclopedia states that "the inner third of the anal canal is less sensitive to touch than the outer two-thirds, but is more sensitive to pressure" and that "the rectum is a curved tube about long and has the capacity, like the anus, to expand".
Research indicates that anal sex occurs significantly less frequently than other sexual behaviors, but its association with dominance and submission, as well as taboo, makes it an appealing stimulus to people of all sexual orientations. In addition to sexual penetration by the penis, people may use sex toys such as a dildo, a butt plug or anal beads, engage in anal fingering, anilingus, pegging, anal masturbation, figging or fisting for anal sexual activity, and different sex positions may also be included. Fisting is the least practiced of the activities, partly because it is uncommon that people can relax enough to accommodate an object as big as a fist being inserted into the anus.
In a male receptive partner, being anally penetrated can produce a pleasurable sensation due to the object of insertion rubbing or brushing against the prostate through the anal wall. This can result in pleasurable sensations and can lead to an orgasm in some cases. Prostate stimulation can produce a deeper orgasm, sometimes described by men as more widespread and intense, longer-lasting, and allowing for greater feelings of ecstasy than orgasm elicited by penile stimulation only. The prostate is located next to the rectum and is the larger, more developed male homologue (variation) to the female Skene's glands. It is also typical for a man to not reach orgasm as a receptive partner solely from anal sex.
General statistics indicate that 70–80% of women require direct clitoral stimulation to achieve orgasm. The vaginal walls contain significantly fewer nerve endings than the clitoris (which has many nerve endings specifically intended for orgasm), and therefore intense sexual pleasure, including orgasm, from vaginal sexual stimulation is less likely to occur than from direct clitoral stimulation in the majority of women. The clitoris is composed of more than the externally visible glans (head). The vagina, for example, is flanked on each side by the clitoral crura, the internal legs of the clitoris, which are highly sensitive and become engorged with blood when sexually aroused. Indirect stimulation of the clitoris through anal penetration may be caused by the shared sensory nerves, especially the pudendal nerve, which gives off the inferior anal nerves and divides into the perineal nerve and the dorsal nerve of the clitoris. Although the anus has many nerve endings, their purpose is not specifically for inducing orgasm, and so a woman achieving orgasm solely by anal stimulation is rare.
Stimulation from anal sex can additionally be affected by popular perception or portrayals of the activity, such as erotica or pornography. In pornography, anal sex is commonly portrayed as a desirable, painless routine that does not require personal lubricant; this can result in couples performing anal sex without care, and men and women believing that it is unusual for women, as receptive partners, to find discomfort or pain instead of pleasure from the activity. By contrast, each person's sphincter muscles react to penetration differently, the anal sphincters have tissues that are more prone to tearing, and the anus and rectum, unlike the vagina, do not provide lubrication for sexual penetration. Researchers say adequate application of a personal lubricant, relaxation, and communication between sexual partners are crucial to avoid pain or damage to the anus or rectum. Additionally, ensuring that the anal area is clean and the bowel is empty, for both aesthetics and practicality, may be desired by participants.
Male to female
Behaviors and views
The anal sphincters are usually tighter than the pelvic muscles of the vagina, which can enhance the sexual pleasure for the inserting male during male-to-female anal intercourse because of the pressure applied to the penis. Men may also enjoy the penetrative role during anal sex because of its association with dominance, because it is made more alluring by a female partner or society in general insisting that it is forbidden, or because it presents an additional option for penetration.
While some women find being a receptive partner during anal intercourse painful or uncomfortable, or only engage in the act to please a male sexual partner, other women find the activity pleasurable or prefer it to vaginal intercourse.
In a 2010 clinical review article of heterosexual anal sex, anal intercourse is used to specifically denote penile-anal penetration, and anal sex is used to denote any form of anal sexual activity. The review suggests that anal sex is exotic among the sexual practices of some heterosexuals and that "for a certain number of heterosexuals, anal intercourse is pleasurable, exciting, and perhaps considered more intimate than vaginal sex".
Anal intercourse is sometimes used as a substitute for vaginal intercourse during menstruation. The likelihood of pregnancy occurring during anal sex is greatly reduced, as anal sex alone cannot lead to pregnancy unless sperm is somehow transported to the vaginal opening. Because of this, some couples practice anal intercourse as a form of contraception, often in the absence of a condom.
Some couples may practice anal sex as a way of preserving female virginity because it is non-procreative and does not tear the hymen; this has been reported in Christian communities in the United States. A person, especially a teenage girl or woman, who engages in anal sex or other sexual activity with no history of having engaged in vaginal intercourse may be regarded as not having yet experienced virginity loss. This is sometimes called technical virginity.
Heterosexuals may view anal sex as "fooling around" or as foreplay; scholar Laura M. Carpenter stated that this view "dates to the late 1600s, with explicit 'rules' appearing around the turn of the twentieth century, as in marriage manuals defining petting as 'literally every caress known to married couples but does not include complete sexual intercourse.'" One study found US teens who pledged to not have sex until marriage were more likely to engage in anal sex without vaginal sex than teens who had not made a sexual abstinence pledge, and found pledge-takers were just as likely to test positive for an STI five years after taking the pledge as those who had not pledged to abstinence.
Prevalence
Because most research on anal intercourse addresses men who have sex with men, little data exists on the prevalence of anal intercourse among heterosexual couples. In Kimberly R. McBride's 2010 clinical review on heterosexual anal intercourse and other forms of anal sexual activity, it is suggested that changing norms may affect the frequency of heterosexual anal sex. McBride and her colleagues investigated the prevalence of non-intercourse anal sex behaviors among a sample of men (n=1,299) and women (n=1,919) compared to anal intercourse experience and found that 51% of men and 43% of women had participated in at least one act of oral–anal sex, manual–anal sex, or anal sex toy use. The report states the majority of men (n=631) and women (n=856) who reported heterosexual anal intercourse in the past 12 months were in exclusive, monogamous relationships: 69% and 73%, respectively. The review added that because "relatively little attention [is] given to anal intercourse and other anal sexual behaviors between heterosexual partners", this means that it is "quite rare" to have research "that specifically differentiates the anus as a sexual organ or addresses anal sexual function or dysfunction as legitimate topics. As a result, we do not know the extent to which anal intercourse differs qualitatively from coitus."
According to a 2010 study from the National Survey of Sexual Health and Behavior (NSSHB) that was authored by Debby Herbenick et al., although anal intercourse is reported by fewer women than other partnered sex behaviors, partnered women in the age groups between 18 and 49 are significantly more likely to report having anal sex in the past 90 days. Women engaged in anal intercourse less commonly than men. Vaginal intercourse was practiced more than insertive anal intercourse among men, but 13% to 15% of men aged 25 to 49 practiced insertive anal intercourse.
With regard to adolescents, limited data also exists. This may be because of the taboo nature of anal sex and that teenagers and caregivers subsequently avoid talking to one another about the topic. It is also common for subject review panels and schools to avoid the subject. A 2000 study found that 22.9% of college students who self-identified as non-virgins had anal sex. They used condoms during anal sex 20.9% of the time as compared with 42.9% of the time with vaginal intercourse.
The increased prevalence of anal sex among heterosexuals today, compared with previously, has been linked to the increase in consumption of anal pornography among men, especially among those who view it on a regular basis. Seidman et al. argued that "cheap, accessible and, especially, interactive media have enabled many more people to produce as well as consume pornography", and that this modern way of producing pornography, in addition to the buttocks and anus having become more eroticized, has led to a significant interest in or obsession with anal sex among men.
Male to male
Behaviors and views
Most homosexual men are reported to engage in anal sex. Among men who have anal sex with other men, the insertive partner may be referred to as the top and the one being penetrated may be referred to as the bottom. Those who enjoy either role may be referred to as versatile. Some men who have sex with men, however, find that being a receptive partner during anal sex makes them question their masculinity.
Prevalence
Reports regarding the prevalence of anal sex among gay men and other men who have sex with men vary. A survey in The Advocate in 1994 indicated that 46% of gay men preferred to penetrate their partners, while 43% preferred to be the receptive partner. Other sources suggest that roughly three-fourths of gay men have had anal sex at one time or another, with an equal percentage participating as tops and bottoms. A 2012 NSSHB sex survey in the U.S. suggests high lifetime participation in anal sex among gay men: 83.3% report ever taking part in anal sex in the insertive position and 90% in the receptive position, even if only between a third and a quarter self-report very recent engagement in the practice, defined as 30 days or less.
Oral sex and mutual masturbation are more common than anal stimulation among men in sexual relationships with other men. According to Weiten et al., anal intercourse is generally more popular among gay male couples than among heterosexual couples, but "it ranks behind oral sex and mutual masturbation" among both sexual orientations in prevalence. Wellings et al. reported that "the equation of 'homosexual' with 'anal' sex among men is common among lay and health professionals alike" and that "yet an Internet survey of 180,000 MSM across Europe (EMIS, 2011) showed that oral sex was most commonly practised, followed by mutual masturbation, with anal intercourse in third place".
Female to male
Women may sexually stimulate a man's anus by fingering the exterior or interior areas of the anus; they may also stimulate the perineum (which, for males, is between the base of the scrotum and the anus), massage the prostate or engage in anilingus. Sex toys, such as a dildo, may also be used. The practice of a woman penetrating a man's anus with a strap-on dildo for sexual activity is called pegging.
Reece et al. reported in 2010 that receptive anal intercourse is infrequent among men overall, stating that "an estimated 7% of men 14 to 94 years old reported being a receptive partner during anal intercourse".
The BMJ stated in 1999:
Female to female
With regard to lesbian sexual practices, anal sex includes anal fingering, use of a dildo or other sex toys, or anilingus.
There is less research on anal sexual activity among women who have sex with women compared to couples of other sexual orientations. In 1987, a non-scientific study (Munson) was conducted of more than 100 members of a lesbian social organization in Colorado. When asked what techniques they used in their last ten sexual encounters, lesbians in their 30s were twice as likely as other age groups to engage in anal stimulation (with a finger or dildo). A 2014 study of partnered lesbian women in Canada and the U.S. found that 7% engaged in anal stimulation or penetration at least once a week; about 10% did so monthly and 70% did not at all. Anilingus is also less often practiced among female same-sex couples.
Health risks
General risks
Anal sex can expose its participants to two principal dangers: infections due to the high number of infectious microorganisms not found elsewhere on the body, and physical damage to the anus and rectum due to their fragility. Unprotected penile-anal penetration, colloquially known as barebacking, carries a higher risk of passing on sexually transmitted infections (STIs) because the anal sphincter is a delicate, easily torn tissue that can provide an entry for pathogens. Use of condoms, ample lubrication to reduce the risk of tearing, and safer sex practices in general, reduce the risk of STIs. However, a condom can break or otherwise come off during anal sex, and this is more likely to happen with anal sex than with other sex acts because of the tightness of the anal sphincters during friction.
Unprotected receptive anal sex (with an HIV positive partner) is the sex act most likely to result in HIV transmission.
As with other sexual practices, people without sound knowledge about the sexual risks involved are susceptible to STIs. Because of the view that anal sex is not "real sex" and therefore does not result in virginity loss or pregnancy, teenagers and other young people who are unaware of the risks of anal sex may consider vaginal intercourse riskier than anal intercourse and may also believe that an STI can only result from vaginal intercourse. It may be because of these views that condom use with anal sex is often reported to be low and inconsistent across all groups in various countries.
Although anal sex alone does not lead to pregnancy, pregnancy can still occur with anal sex or other forms of sexual activity if the penis is near the vagina (such as during intercrural sex or other genital-genital rubbing) and its sperm is deposited near the vagina's entrance and travels along the vagina's lubricating fluids; the risk of pregnancy can also occur without the penis being near the vagina because sperm may be transported to the vaginal opening by the vagina coming in contact with fingers or other non-genital body parts that have come in contact with semen.
There are a variety of factors that make male-to-female anal intercourse riskier than vaginal intercourse for women, including the risk of HIV transmission being higher for anal intercourse than for vaginal intercourse. The risk of injury to the woman during anal intercourse is also significantly higher than the risk of injury to her during vaginal intercourse because of the durability of the vaginal tissues compared to the anal tissues. Additionally, if a man moves from anal intercourse immediately to vaginal intercourse without a condom or without changing it, infections can arise in the vagina (or urinary tract) due to bacteria present within the anus; these infections can also result from switching between vaginal sex and anal sex by the use of fingers or sex toys.
Pain during receptive anal sex among gay men (or men who have sex with men) is formally known as anodyspareunia. In one study, 61% of gay or bisexual men said they experienced painful receptive anal sex and that it was the most frequent sexual difficulty they had experienced. By contrast, 24% of gay or bisexual men stated that they always experienced some degree of pain during anal sex, and about 12% of gay men find it too painful to pursue receptive anal sex; it was concluded that the perception of anal sex as painful is as likely to be psychologically or emotionally based as it is to be physically based. Factors predictive of pain during anal sex include inadequate lubrication, feeling tense or anxious, lack of stimulation, as well as lack of social ease with being gay and being closeted. Research has found that psychological factors can in fact be the primary contributors to the experience of pain during anal intercourse and that adequate communication between sexual partners can prevent it, countering the notion that pain is always inevitable during anal sex.
Damage
Anal sex can exacerbate hemorrhoids and therefore result in bleeding; in other cases, the formation of a hemorrhoid is attributed to anal sex. If bleeding occurs as a result of anal sex, it may also be because of a tear in the anal or rectal tissues (an anal fissure) or perforation (a hole) in the colon, the latter of which being a serious medical issue that should be remedied by immediate medical attention. Because of the rectum's lack of elasticity, the anal mucous membrane being thin, and small blood vessels being present directly beneath the mucous membrane, tiny tears and bleeding in the rectum usually result from penetrative anal sex, though the bleeding is usually minor and therefore usually not visible.
By contrast to other anal sexual behaviors, anal fisting poses a more serious danger of damage due to the deliberate stretching of the anal and rectal tissues; anal fisting injuries include anal sphincter lacerations and rectal and sigmoid colon (rectosigmoid) perforation, which might result in death.
Repetitive penetrative anal sex may result in the anal sphincters becoming weakened, which may cause rectal prolapse or affect the ability to hold in feces (a condition known as fecal incontinence). Rectal prolapse is relatively uncommon, however, especially in men, and its causes are not well understood. Kegel exercises have been used to strengthen the anal sphincters and overall pelvic floor, and may help prevent or remedy fecal incontinence.
Cancer
People who have anal intercourse may have an increased risk of anal cancer.
Most cases of anal cancer are related to infection with the human papilloma virus (HPV). The risk of anal cancer through anal sex is attributed to HPV infection, which is often contracted through unprotected anal sex. Anal cancer is significantly less common than cancer of the colon or rectum (colorectal cancer); the American Cancer Society estimates that in 2023 there were approximately 9,760 new cases (6,580 in women and 3,180 in men) and approximately 1,870 deaths (860 women and 1,010 men) in the United States, and that, though anal cancer has been on the rise for many years, it is mainly diagnosed in adults, "with an average age being in the early 60s" and it "affects women somewhat more often than men."
Cultural views
General
Different cultures have had different views on anal sex throughout human history, with some cultures more positive about the activity than others. Historically, anal sex has been restricted or condemned, especially with regard to religious beliefs; it has also commonly been used as a form of domination, usually with the active partner (the one who is penetrating) representing masculinity and the passive partner (the one who is being penetrated) representing femininity. A number of cultures have especially recorded the practice of anal sex between males, and anal sex between males has been especially stigmatized or punished. In some societies, if discovered to have engaged in the practice, the individuals involved were put to death, such as by decapitation, burning, or even mutilation.
Anal sex has been more accepted in modern times; it is often considered a natural, pleasurable form of sexual expression. The buttocks and anus have become more eroticized in modern culture, including via pornography. Engaging in anal sex is still, however, punished in some societies. For example, regarding LGBT rights in Iran, Iran's Penal Code states in Article 109 that "both men involved in same-sex penetrative (anal) or non-penetrative sex will be punished" and "Article 110 states that those convicted of engaging in anal sex will be executed and that the manner of execution is at the discretion of the judge".
Ancient and non-Western cultures
From the earliest records, the ancient Sumerians had very relaxed attitudes toward sex and did not regard anal sex as taboo. priestesses were forbidden from producing offspring and frequently engaged in anal sex as a method of birth control. Anal sex is also obliquely alluded to by a description of an omen in which a man "keeps saying to his wife: 'Bring your backside. Other Sumerian texts refer to homosexual anal intercourse. The , a set of priests who worked in the temples of the goddess Inanna, where they performed elegies and lamentations, were especially known for their homosexual proclivities. The Sumerian sign for was a ligature of the signs for 'penis' and 'anus'. One Sumerian proverb reads: "When the wiped off his ass [he said], 'I must not arouse that which belongs to my mistress [i.e., Inanna].'"
The term Greek love has long been used to refer to anal intercourse, and in modern times, "doing it the Greek way" is sometimes used as slang for anal sex. Male-male anal sex was not a universally accepted practice in Ancient Greece; it was the target of jokes in some Athenian comedies. Aristophanes, for instance, mockingly alludes to the practice, claiming, "Most citizens are ('wide-arsed') now." The terms , , and were used by Greek residents to categorize men who chronically practiced passive anal intercourse. Pederastic practices in ancient Greece (sexual activity between men and adolescent boys), at least in Athens and Sparta, were expected to avoid penetrative sex of any kind. Greek artwork of sexual interaction between men and boys usually depicted fondling or intercrural sex, which was not condemned for violating or feminizing boys, while male-male anal intercourse was usually depicted between males of the same age-group. Intercrural sex was not considered penetrative and two males engaging in it was considered a "clean" act. Some sources explicitly state that anal sex between men and boys was criticized as shameful and seen as a form of hubris. Evidence suggests, however, that the younger partner in pederastic relationships (i.e., the ) did engage in receptive anal intercourse so long as no one accused him of being 'feminine'.
In later Roman-era Greek poetry, anal sex became a common literary convention, represented as taking place with "eligible" youths: those who had attained the proper age but had not yet become adults. Seducing those not of proper age (for example, non-adolescent children) into the practice was considered very shameful for the adult, and having such relations with a male who was no longer adolescent was considered more shameful for the young male than for the one mounting him. Greek courtesans, or hetaerae, are said to have frequently practiced male-female anal intercourse as a means of preventing pregnancy.
A male citizen taking the passive (or receptive) role in anal intercourse ( in Latin) was condemned in Rome as an act of ('immodesty' or 'unchastity'); free men, however, could take the active role with a young male slave, known as a or . The latter was allowed because anal intercourse was considered equivalent to vaginal intercourse in this way; men were said to "take it like a woman" ( 'to undergo womanly things') when they were anally penetrated, but when a man performed anal sex on a woman, she was thought of as playing the boy's role. Likewise, women were believed to only be capable of anal sex or other sex acts with women if they possessed an exceptionally large clitoris or a dildo. The passive partner in any of these cases was always considered a woman or a boy because being the one who penetrates was characterized as the only appropriate way for an adult male citizen to engage in sexual activity, and he was therefore considered unmanly if he was the one who was penetrated; slaves could be considered "non-citizen". Although Roman men often availed themselves of their own slaves or others for anal intercourse, Roman comedies and plays presented Greek settings and characters for explicit acts of anal intercourse, and this may be indicative that the Romans thought of anal sex as something specifically "Greek".
In Japan, records (including detailed shunga) show that some males engaged in penetrative anal intercourse with males. Evidence suggestive of widespread male-female anal intercourse in a pre-modern culture can be found in the erotic vases, or stirrup-spout pots, made by the Moche people of Peru; in a survey of a collection of these pots, it was found that 31 percent of them depicted male-female anal intercourse, significantly more than any other sex act. Moche pottery of this type belonged to the world of the dead, which was believed to be a reversal of life. Therefore, the reverse of common practices was often portrayed. The Larco Museum houses an erotic gallery in which this pottery is showcased.
Religion
Judaism
The Mishneh Torah, a text considered authoritative by Orthodox Jewish sects, states "since a man's wife is permitted to him, he may act with her in any manner whatsoever. He may have intercourse with her whenever he so desires and kiss any organ of her body he wishes, and he may have intercourse with her naturally or unnaturally [traditionally, unnaturally refers to anal and oral sex], provided that he does not expend semen to no purpose. Nevertheless, it is an attribute of piety that a man should not act in this matter with levity and that he should sanctify himself at the time of intercourse."
Christianity
Christian texts may sometimes euphemistically refer to anal sex as the ('the sin against nature', after Thomas Aquinas) or ('sodomitical lusts', in one of Charlemagne's ordinances), or ('that horrible sin that among Christians is not to be named').
Islam
, or the sin of Lot's people, which has come to be interpreted as referring generally to same-sex sexual activity, is commonly prohibited by Islamic sects; parts of the Quran describe the smiting of Sodom and Gomorrah, which is thought to be a reference to "unnatural" sex, and hadith and Islamic laws accordingly prohibit it. Male practitioners of same-sex anal intercourse are called luti (plural: lutiyin) and are seen as criminals in the same way that a thief is a criminal.
Other animals
As a form of non-reproductive sexual behavior in animals, anal sex has been observed in a few other primates, both in captivity and in the wild.
See also
Anal eroticism
Ass to mouth
Autosodomy
Coprophilia
Creampie (sexual act)
Felching
Gay bowel syndrome
Klismaphilia
Sodomy law
References
Further reading
Brent, Bill. Ultimate Guide to Anal Sex for Men. Cleis Press, 2002.
DeCitore, David. Arouse Her Anal Ecstasy. 2008.
Houser, Ward. "Anal Sex". In Dynes, Wayne R. (ed.), Encyclopedia of Homosexuality. Garland Publishing, 1990. pp. 48–50.
Morin, Jack. Anal Pleasure & Health: A Guide for Men and Women. Down There Press, 1998.
Sanderson, Terry. The Gay Man's Kama Sutra. Thomas Dunne Books, 2004.
Taormino, Tristan. The Ultimate Guide to Anal Sex for Women. Cleis Press, 1997, 2006.
Underwood, Steven G. Gay Men and Anal Eroticism: Tops, Bottoms, and Versatiles. Harrington Park Press, 2003.
External links
Anal eroticism
Sexology
Sexual acts | Anal sex | [
"Biology"
] | 6,430 | [
"Behavior",
"Sexual acts",
"Sexology",
"Behavioural sciences",
"Sexuality",
"Mating"
] |
2,473 | https://en.wikipedia.org/wiki/Abac%C3%A1 | Abacá ( ; ), also known as Manila hemp, is a species of banana, Musa textilis, endemic to the Philippines. The plant grows to , and averages about . The plant has great economic importance, being harvested for its fiber extracted from the leaf-stems.
The lustrous fiber is traditionally hand-loomed into various indigenous textiles (abaca cloth or medriñaque) in the Philippines. They are still featured prominently as the traditional material of the barong tagalog, the national male attire of the Philippines, as well as in sheer lace-like fabrics called nipis used in various clothing components. Native abaca textiles also survive into the modern era among various ethnic groups, like the t'nalak of the T'boli people and the dagmay of the Bagobo people. Abaca is also used in traditional Philippine millinery, as well as for bags, shawls, and other decorative items. The hatmaking straw made from Manila hemp is called tagal or tagal straw.
The fiber is also exceptionally strong, stronger than hemp and naturally salt-resistant, making it ideal for making twines and ropes (especially for maritime shipping). It became a major trade commodity in the colonial era for this reason. The abaca industry declined sharply in the mid-20th century when abaca plantations were decimated by World War II and plant diseases, as well as the invention of nylon in the 1930s. Today, abaca is mostly used in a variety of specialized paper products including tea bags, filter paper and banknotes. Manila envelopes and Manila paper derive their name from this fiber.
Abaca is classified as a hard fiber, along with coir, henequen and sisal. It is grown as a commercial crop in the Philippines, Ecuador, and Costa Rica.
Description
The abacá plant is stoloniferous, meaning that the plant produces runners or shoots along the ground that then root at each segment. Cutting and transplanting rooted runners is the primary technique for creating new plants, since seed growth is substantially slower. Abacá has a "false trunk" or pseudostem about in diameter. The leaf stalks (petioles) are expanded at the base to form sheaths that are tightly wrapped together to form the pseudostem. There are from 12 to 25 leaves, dark green on the top and pale green on the underside, sometimes with large brown patches. They are oblong in shape with a deltoid base. They grow in succession. The petioles grow to at least in length.
When the plant is mature, the flower stalk grows up inside the pseudostem. The male flower has five petals, each about long. The leaf sheaths contain the valuable fiber. After harvesting, the coarse fibers range in length from long. They are composed primarily of cellulose, lignin, and pectin.
The fruit, which is inedible and is rarely seen as harvesting occurs before the plant fruits, grows to about in length and in diameter. It has black turbinate seeds that are in diameter.
Systematics
The abacá plant belongs to the banana family, Musaceae; it resembles the closely related wild seeded bananas, Musa acuminata and Musa balbisiana. Its scientific name is Musa textilis. Within the genus Musa, it is placed in section Callimusa (now including the former section Australimusa), members of which have a diploid chromosome number of 2n = 20.
Genetic diversity
The Philippines, especially the Bicol region in Luzon, has the most abaca genotypes and cultivars. Genetic analysis using simple sequence repeat (SSR) markers revealed that the Philippines' abaca germplasm is genetically diverse. Abaca genotypes in Luzon had higher genetic diversity than those in the Visayas and Mindanao. Ninety-five percent of the molecular variance was attributed to variation within populations, and only 5% to variation among populations. Genetic analysis by the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) revealed several clusters irrespective of geographical origin.
History
Before synthetic textiles came into use, M. textilis was a major source of high quality fiber: soft, silky and fine. Ancestors of the modern abacá are thought to have originated from the eastern Philippines, where there is significant rainfall throughout the year. Wild varieties of abacá can still be found in the interior forests of the island province of Catanduanes, away from cultivated areas.
Today, Catanduanes has many other modern kinds of abacá which are more competitive. For many years, breeders from various research institutions have made the cultivated varieties of Catanduanes even more competitive in local and international markets. As a result, the island consistently records the highest abacá production in the archipelago.
16th century
Europeans first came into contact with abacá fiber when Ferdinand Magellan landed in the Philippines in 1521, as the natives were already cultivating it and utilizing it in bulk for textiles. Throughout the Spanish colonial era, it was referred to as "medriñaque" cloth.
19th century
By 1897, the Philippines were exporting almost 100,000 tons of abacá, and it was one of the three biggest cash crops, along with tobacco and sugar. In fact, from 1850 through the end of the 19th century, sugar or abacá alternated with each other as the biggest export crop of the Philippines. This 19th-century trade was predominantly with the United States and the making of ropes was done mainly in New England, although in time rope-making shifted back to the Philippines.
From 1898 to 1946, the United States colonized the Philippines following the Spanish-American War. The Guggenheim claims the "colonial government found ways to prevent Filipinos from profiting off of the abaca crops, instead favoring the businesses of American expats and Japanese immigrants, as well as ensuring that the bulk of the abaca harvests were exported to the United States" for use in military initiatives.
20th century
In the early 1900s, a train running from Danao to Argao would transport Philippine abacá from the plantations to Cebu City for export. The railway system was destroyed during World War II; the abaca continues to be transported to Cebu by road.
Outside the Philippines, abacá was first cultivated on a large scale in Sumatra in 1925 under the Dutch, who had observed its cultivation in the Philippines for cordage since the nineteenth century, followed up by plantings in Central America in 1929 sponsored by the U.S. Department of Agriculture. It also was transplanted into India and Guam. Commercial planting began in 1930 in British North Borneo; at the onset of World War II, the supply from the Philippines was eliminated by the Empire of Japan.
After the war, the U.S. Department of Agriculture started production in Panama, Costa Rica, Honduras, and Guatemala.
21st century
Today, abacá is produced primarily in the Philippines and Ecuador. The Philippines produces between 85% and 95% of the world's abacá, and the production employs 1.5 million people. Production has declined because of virus diseases.
Cultivation
The plant is normally grown in well-drained loamy soil, using rhizomes planted at the start of the rainy season; new plants can also be started from seed. Growers harvest abacá fields every three to eight months after an initial growth period of 12–25 months. Harvesting is done by removing the leaf-stems after flowering but before the fruit appears. The plant loses productivity after 15 to 40 years. The slopes of volcanoes provide a preferred growing environment. Harvesting generally includes several operations involving the leaf sheaths:
tuxying (separation of primary and secondary sheath)
stripping (getting the fibers)
drying (usually following the tradition of sun-drying).
When the processing is complete, the bundles of fiber are pale and lustrous with a length of .
In Costa Rica, more modern harvest and drying techniques are being developed to accommodate the very high yields obtained there.
According to the Philippine Fiber Industry Development Authority, the Philippines provided 87.4% of the world's abacá in 2014, earning the Philippines US$111.33 million. The demand is still greater than the supply. The remainder came from Ecuador (12.5%) and Costa Rica (0.1%). The Bicol region in the Philippines produced 27,885 metric tons of abacá in 2014, the largest of any Philippine region.
The Philippine Rural Development Program (PRDP) and the Department of Agriculture reported that in 2009–2013, Bicol Region had a 39% share of Philippine abacá production of which an overwhelming 92% came from Catanduanes Island. Eastern Visayas, the second largest producer had 24% and the Davao Region, the third largest producer had 11% of the total production. Around 42 percent of the total abacá fiber shipments from the Philippines went to the United Kingdom in 2014, making it the top importer. Germany imported 37.1 percent abacá pulp from the Philippines, importing around 7,755 metric tons (MT). Sales of abacá cordage surged 20 percent in 2014 to a total of 5,093 MT from 4,240 MT, with the United States holding around 68 percent of the market.
Pathogens
Abacá is vulnerable to a number of pathogens, notably abaca bunchy top virus, abaca bract mosaic virus, and abaca mosaic virus.
Uses
Due to its strength, it is a sought-after product and is the strongest of the natural fibers. It is used by the paper industry for specialty applications such as tea bags, banknotes and decorative papers. It can be used to make handcrafts such as hats, bags, carpets, clothing and furniture.
Lupis is the finest quality of abacá. Sinamay is woven chiefly from abacá.
Textiles
Abacá fibers were traditionally woven into sturdy textiles and clothing in the Philippines since pre-colonial times. Along with cotton, they were the main source of textile fibers used for clothing in the pre-colonial Philippines. Abacá cloth was often compared to calico in terms of texture and was a major trade commodity in the pre-colonial maritime trade and the Spanish colonial era. There are multiple traditional types and names of abaca cloth among the different ethnic groups of the Philippines. Undyed plain abacá cloth, woven from fine fibers of abaca, is generally known as sinamáy in most of the islands. Abacá cloth with a more delicate texture is called tinampipi. While especially fine lace-like abacá cloth is called nipis or lupis. Fine abacá fibers may also be woven with piña, silk, or fine cotton to create a fabric called jusi.
Traditional abacá textiles were often dyed in various colors from various natural dyes. These include blue from indigo (tarum, dagum, tayum, etc.); black from ebony (knalum or batulinao) leaves; red from noni roots and sapang; yellow from turmeric (kalawag, kuning, etc.); and so on. They were often woven into specific patterns, and further ornamented with embroidery, beadwork, and other decorations. Most clothing made from abacá took the form of the baro (also barú or bayú, literally "shirt" or "clothing"), a simple collar-less shirt or jacket with close-fitting long sleeves worn by both men and women in most ethnic groups in the pre-colonial Philippines. These were paired with wraparound sarong-like skirts (for both men and women), close-fitting pants, or loincloths (bahag).
During the Spanish colonial era, abacá cloth became known as medriñaque in Spanish (apparently derived from a native Cebuano name). They were exported to other Spanish colonies since the 16th century. A waistcoat of a native Quechua man in Peru was recorded as being made of medriñaque as early as 1584. Abacá cloth also appear in English records, spelled variously as medrinacks, medrianacks, medrianackes, and medrinacles, among other names. They were used as canvas for sails and for stiffening clothing like skirts, collars, and doublets.
Philippine indigenous tribes still weave abacá-based textiles like t'nalak, made by the Tiboli tribe of South Cotabato, and dagmay, made by the Bagobo people. Abacá cloth is found in museum collections around the world, like the Boston Museum of Fine Arts and the Textile Museum of Canada.
The inner fibers are also used in the making of hats, including the "Manila hats", hammocks, matting, cordage, ropes, coarse twines, and types of canvas.
Industrial textile production
Processing
Dyeing and weaving
Manila rope
Manila rope is a type of rope made from manila hemp. It is very durable, flexible, and resistant to salt water damage, allowing its use in hawsers, ships' lines, and fishing nets. A rope can require to break. Manila rope is still the only material specified for lifeboat falls (the ropes with which a ship's lifeboat is lowered) in the United Kingdom.
Manila ropes shrink when they become wet. This effect can be advantageous under certain circumstances, but if it is not a wanted feature, it should be taken into account. Since shrinkage is more pronounced the first time the rope becomes wet, new rope is usually immersed in water and dried before use so that the shrinkage in service is less than it would be if the rope had never been wet. A major disadvantage of this shrinkage is that many knots made with manila rope become harder and more difficult to untie when wet, and are thus subject to increased stress. Manila rope will rot after a period of time when exposed to saltwater.
Manila hemp rope was previously the favoured variety of rope used for executions by hanging, both in the U.K. and the USA. It was usually 3/4 to 1 inch in diameter and was boiled prior to use to remove any excess elasticity. It was also used in the 19th century as whaling line. Abacá fiber was once used primarily for rope, but this application is now of minor significance.
See also
Musa basjoo (Japanese banana), banana species also used as a traditional source of fiber in Okinawa, Japan
Kijōka-bashōfu, similar traditional fiber from Okinawa, Japan
Piña
T'nalak
Malong
Tapis
Inabel
Batik
Yakan people
Fiber crop
International Year of Natural Fibres
Natural fiber
Manila folder
Domesticated plants and animals of Austronesia
Notes
Footnotes
References
Yllano, O. B., Diaz, M. G. Q., Lalusin, A. G., Laurena, A. C., & Tecson-Mendoza, E. M. (2020). Genetic Analyses of Abaca (Musa textilis Née) Germplasm from its Primary Center of Origin, the Philippines, Using Simple Sequence Repeat (SSR) Markers. Philippine Agricultural Scientist, 103(4).
External links
The World Book encyclopedia set, 1988.
See International Year of Natural Fibres 2009
abacá – A comprehensive pamphlet about Philippine abacá presented at the 1915 Panama Pacific International Exposition held in San Francisco. Online publication uploaded at Filipiniana.net
Musa (genus)
Flora of the Philippines
Fiber plants
Biodegradable materials
Philippine clothing
History of Asian clothing
Philippine handicrafts
Austronesian agriculture | Abacá | [
"Physics",
"Chemistry"
] | 3,179 | [
"Biodegradation",
"Biodegradable materials",
"Materials",
"Matter"
] |
2,487 | https://en.wikipedia.org/wiki/Amazonite | Amazonite, also known as amazonstone, is a green tectosilicate mineral, a variety of the potassium feldspar called microcline. Its chemical formula is KAlSi3O8, which is polymorphic to orthoclase.
Its name is taken from that of the Amazon River, from which green stones were formerly obtained, though it is unknown whether those stones were amazonite. Although it has been used for jewellery for well over three thousand years, as attested by archaeological finds in Middle and New Kingdom Egypt and Mesopotamia, no ancient or medieval authority mentions it. It was first described as a distinct mineral only in the 18th century.
Green and greenish-blue varieties of potassium feldspars that are predominantly triclinic are designated as amazonite. It has been described as a "beautiful crystallized variety of a bright verdigris-green" and as possessing a "lively green colour". It is occasionally cut and used as a gemstone.
Occurrence
Amazonite is a mineral of limited occurrence. In Bronze Age Egypt, it was mined in the southern Eastern Desert at Gebel Migif. In early modern times, it was obtained almost exclusively from the area of Miass in the Ilmensky Mountains, southwest of Chelyabinsk, Russia, where it occurs in granitic rocks.
Amazonite is now known to occur in various places around the world. Those places are, among others, as follows:
Australia:
Eyre Peninsula, Koppio, Baila Hill Mine (Koppio Amazonite Mine)
China:
Baishitouquan granite intrusion, Hami Prefecture, Xinjiang: found in granite
Libya:
Jabal Eghei, Tibesti Mountains: found in granitic rocks
Mongolia:
Avdar Massif, Töv Province: found in alkali granite
Ethiopia:
Konso Zone
South Africa:
Mogalakwena, Limpopo Province
Khâi-Ma, Northern Cape
Kakamas, Northern Cape
Ceres Valley, Western Cape
Sweden:
Skuleboda mine, Västra Götaland County: found in pegmatite
United States:
Colorado:
Deer Trail, Arapahoe County
Custer County
Devils Head, Douglas County
Pine Creek, Douglas County
Crystal Park, El Paso County
Pikes Peak, El Paso County: found in coarse granites or pegmatite
St. Peter's Dome, El Paso County
Tarryall Mountains, Park County
Crystal Peak, Teller County
Wyoming
Virginia:
Morefield Mine, Amelia County: found in pegmatite
Rutherford Mine, Amelia County
Pennsylvania:
Media, Delaware County
Middletown, Delaware County
Color
For many years, the source of amazonite's color was a mystery. Some people assumed the color was due to copper because copper compounds often have blue and green colors. A 1985 study suggests that the blue-green color results from quantities of lead and water in the feldspar. Subsequent 1998 theoretical studies by A. Julg expand on the potential role of aliovalent lead in the color of microcline.
Other studies suggest the colors are associated with increasing contents of lead, rubidium, and thallium, in amounts ranging between 0.00X and 0.0X in the feldspars, with even extremely high contents of PbO (lead monoxide) of 1% or more known from the literature. A 2010 study also implicated the role of divalent iron in the green coloration. These studies and associated hypotheses indicate the complex nature of the color in amazonite; in other words, the color may be the aggregate effect of several mutually inclusive and necessary factors.
Health
A 2021 study by the German Institut für Edelsteinprüfung (EPI) found that the amount of lead that leached from a sample of amazonite into an acidic solution simulating saliva exceeded the limit recommended by European Union standard DIN EN 71-3:2013 by five times. The experiment simulated a child swallowing amazonite, and its findings could also apply to alternative medicine practices such as steeping the mineral in oils or in drinking water for days.
Gallery
References
Further reading
External links
Feldspar
Gemstones | Amazonite | [
"Physics"
] | 853 | [
"Materials",
"Gemstones",
"Matter"
] |
2,499 | https://en.wikipedia.org/wiki/Asynchronous%20Transfer%20Mode | Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by the American National Standards Institute and International Telecommunication Union Telecommunication Standardization Sector (ITU-T, formerly CCITT) for digital transmission of multiple types of traffic. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as telephony (voice) and video. ATM provides functionality that uses features of circuit switching and packet switching networks by using asynchronous time-division multiplexing. ATM was seen in the 1990s as a competitor to Ethernet and networks carrying IP traffic as, unlike Ethernet, it was faster and designed with quality-of-service in mind, but it fell out of favor once Ethernet reached speeds of 1 gigabits per second.
In the Open Systems Interconnection (OSI) reference model data link layer (layer 2), the basic transfer units are called frames. In ATM these frames are of a fixed length (53 octets) called cells. This differs from approaches such as Internet Protocol (IP) (OSI layer 3) or Ethernet (also layer 2) that use variable-sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent (dedicated connections that are usually preconfigured by the service provider), or switched (set up on a per-call basis using signaling and disconnected when the call is terminated).
The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the synchronous optical networking and synchronous digital hierarchy (SONET/SDH) backbone of the public switched telephone network and in the Integrated Services Digital Network (ISDN) but has largely been superseded in favor of next-generation networks based on IP technology. Wireless and mobile ATM never established a significant foothold.
Protocol architecture
To minimize queuing delay and packet delay variation (PDV), all ATM cells are the same small size. Reduction of PDV is particularly important when carrying voice traffic, because the conversion of digitized voice into an analog audio signal is an inherently real-time process. The decoder needs an evenly spaced stream of data items.
At the time of the design of ATM, synchronous digital hierarchy with payload was considered a fast optical network link, and many plesiochronous digital hierarchy links in the digital network were considerably slower, ranging from 1.544 to in the US, and 2 to in Europe.
At , a typical full-length 1,500 byte Ethernet frame would take 77.42 μs to transmit. On a lower-speed T1 line, the same packet would take up to 7.8 milliseconds. A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over. This was considered unacceptable for speech traffic.
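The serialization-delay figures above follow directly from frame size and line rate. The sketch below (Python, illustrative only) reproduces the arithmetic; the T1 rate is taken from the text, while the 155.52 Mbit/s STM-1 rate is an assumption standing in for the high-speed link whose exact figure is not stated here.

```python
# Illustrative sketch: serialization (transmission) delay of a full-length
# Ethernet frame versus a single 53-byte ATM cell.
# The 1.544 Mbit/s (T1) rate comes from the text; 155.52 Mbit/s (SDH STM-1)
# is assumed here as the high-speed link for comparison.

def serialization_delay(size_bytes: int, rate_bps: float) -> float:
    """Time, in seconds, to clock size_bytes onto a link of the given bit rate."""
    return size_bytes * 8 / rate_bps

T1 = 1.544e6      # bits per second
STM1 = 155.52e6   # bits per second (assumed comparison rate)

frame = 1500      # full-length Ethernet frame, bytes
cell = 53         # ATM cell, bytes

print(f"1500-byte frame on T1:    {serialization_delay(frame, T1) * 1e3:.2f} ms")   # ~7.77 ms
print(f"1500-byte frame on STM-1: {serialization_delay(frame, STM1) * 1e6:.1f} us") # ~77 us
print(f"53-byte cell on T1:       {serialization_delay(cell, T1) * 1e3:.3f} ms")    # ~0.275 ms
```

A voice cell queued behind one full frame on a slow link therefore waits several milliseconds, whereas queuing behind a single cell costs well under a millisecond, which is the jitter argument made above.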
The design of ATM aimed for a low-jitter network interface. Cells were introduced to provide short queuing delays while continuing to support datagram traffic. ATM broke up all data packets and voice streams into 48-byte pieces, adding a 5-byte routing header to each one so that they could be reassembled later. Being 1/30th the size reduced cell contention jitter by the same factor of 30.
The choice of 48 bytes was political rather than technical. When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice. Parties from Europe wanted 32-byte payloads because the small size (4 ms of voice data) would avoid the need for echo cancellation on domestic voice calls. The United States, due to its larger size, already had echo cancellers widely deployed. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length.
48 bytes was chosen as a compromise, despite having all the disadvantages of both proposals and the additional inconvenience of not being a power of two in size. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information.
Cell structure
An ATM cell consists of a 5-byte header and a 48-byte payload. ATM defines two different cell formats: user–network interface (UNI) and network–network interface (NNI). Most ATM links use UNI cell format.
GFC
The generic flow control (GFC) field is a 4-bit field that was originally added to support the connection of ATM networks to shared access networks such as a distributed queue dual bus (DQDB) ring. The GFC field was designed to give the User-Network Interface (UNI) 4 bits in which to negotiate multiplexing and flow control among the cells of various ATM connections. However, the use and exact values of the GFC field have not been standardized, and the field is always set to 0000.
VPI
Virtual path identifier (8 bits UNI, or 12 bits NNI)
VCI
Virtual channel identifier (16 bits)
PT
Payload type (3 bits)
Bit 3 (msbit): Network management cell if 1. If 0, the cell is a user data cell and the following apply:
Bit 2: Explicit forward congestion indication (EFCI); 1 = network congestion experienced
Bit 1 (lsbit): ATM user-to-user (AAU) bit. Used by AAL5 to indicate packet boundaries.
CLP
Cell loss priority (1-bit)
HEC
Header error control (8-bit CRC, polynomial = X⁸ + X² + X + 1)
ATM uses the PT field to designate various special kinds of cells for operations, administration and management (OAM) purposes, and to delineate packet boundaries in some ATM adaptation layers (AAL). If the most significant bit (MSB) of the PT field is 0, this is a user data cell, and the other two bits are used to indicate network congestion and as a general-purpose header bit available for ATM adaptation layers. If the MSB is 1, this is a management cell, and the other two bits indicate the type: network management segment, network management end-to-end, resource management, and reserved for future use.
Several ATM link protocols use the HEC field to drive a CRC-based framing algorithm, which allows locating the ATM cells with no overhead beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found.
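As a concrete illustration of the HEC described above, the following Python sketch computes the CRC-8 with generator X⁸ + X² + X + 1 over the four leading header octets. The final XOR with 0x55 is the coset addition specified in ITU-T I.432 rather than something stated in this article, so it is marked as an assumption; the function names are invented for the example.

```python
# Minimal sketch of the ATM HEC: CRC-8 over the first four header octets
# with generator polynomial X^8 + X^2 + X + 1 (0x07), MSB first.
# The trailing XOR with 0x55 follows ITU-T I.432 (assumed here); drop it
# if only the bare CRC remainder is wanted.

def crc8_atm(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with polynomial X^8 + X^2 + X + 1, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def hec(header4: bytes) -> int:
    """HEC octet for the four leading header bytes (coset XOR 0x55 assumed)."""
    assert len(header4) == 4
    return crc8_atm(header4) ^ 0x55

# Four all-zero header octets give a CRC of 0x00, hence an HEC of 0x55.
print(hex(hec(bytes(4))))
```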
A UNI cell reserves the GFC field for a local flow control and sub-multiplexing system between users. This was intended to allow several terminals to share a single network connection in the same way that two ISDN phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default.
The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2¹² VPs of up to almost 2¹⁶ VCs each.
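To make the field layout concrete, here is a short Python sketch that unpacks the first four octets of a UNI header into the fields listed above. The bit widths (4/8/16/3/1) come from the text; the function name and the sample header are invented for the example.

```python
# Illustrative sketch: decode the addressing/control portion of a UNI cell
# header (the 32 bits preceding the HEC octet) into GFC, VPI, VCI, PT and CLP.

def parse_uni_header(header: bytes) -> dict:
    """Decode the first 4 octets of a UNI ATM cell header (HEC excluded)."""
    assert len(header) >= 4
    word = int.from_bytes(header[:4], "big")  # GFC | VPI | VCI | PT | CLP
    return {
        "gfc": (word >> 28) & 0xF,     # 4-bit generic flow control
        "vpi": (word >> 20) & 0xFF,    # 8-bit virtual path identifier
        "vci": (word >> 4) & 0xFFFF,   # 16-bit virtual channel identifier
        "pt": (word >> 1) & 0x7,       # 3-bit payload type
        "clp": word & 0x1,             # 1-bit cell loss priority
    }

# Sample header with GFC=0, VPI=1, VCI=32, PT=0, CLP=0.
print(parse_uni_header(bytes([0x00, 0x10, 0x02, 0x00])))
```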
Service types
ATM supports different types of services via AALs. Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation. Synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bitrate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis.
Following the initial design of ATM, networks have become much faster. A 1500 byte (12000-bit) full-size Ethernet frame takes only 1.2 μs to transmit on a network, reducing the motivation for small cells to reduce jitter due to contention. The increased link speeds by themselves do not eliminate jitter due to queuing.
ATM provides a useful ability to carry multiple logical circuits on a single physical or virtual medium, although other techniques exist, such as Multi-link PPP, Ethernet VLANs, VxLAN, MPLS, and multi-protocol support over SONET.
Virtual circuits
An ATM network must establish a connection before two parties can send cells to each other. This is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the end points, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. Call admission is then performed by the network to confirm that the requested resources are available and that a route exists for the connection.
Motivation
ATM operates as a channel-based transport layer, using VCs. This is encompassed in the concept of the virtual paths (VP) and virtual channels. Every ATM cell has an 8- or 12-bit virtual path identifier (VPI) and 16-bit virtual channel identifier (VCI) pair defined in its header. The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. The length of the VPI varies according to whether the cell is sent on a user-network interface (at the edge of the network), or if it is sent on a network-network interface (inside the network).
As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others). ATM switches use the VPI/VCI fields to identify the virtual channel link (VCL) of the next network that a cell needs to transit on its way to its final destination. The function of the VCI is similar to that of the data link connection identifier (DLCI) in Frame Relay and the logical channel number and logical channel group number in X.25.
Another advantage of the use of virtual circuits comes with the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, and IP) to be carried over the same physical medium. The VPI is useful for reducing the size of the switching table for virtual circuits that share common paths.
Types
ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the circuit is composed of a series of segments, one for each pair of interfaces through which it passes.
PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service contract) and the two endpoints.
ATM networks create and remove switched virtual circuits (SVCs) on demand when requested by an end station. One application for SVCs is to carry individual telephone calls when a network of telephone switches are interconnected using ATM. SVCs were also used in attempts to replace local area networks with ATM.
Routing
Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network-to-Network Interface (PNNI) protocol to share topology information between switches and select a route through a network. PNNI is a link-state routing protocol like OSPF and IS-IS. PNNI also includes a very powerful route summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm which determines the availability of sufficient bandwidth on a proposed route through a network in order to satisfy the service requirements of a VC or VP.
Traffic engineering
Another key ATM concept involves the traffic contract. When an ATM circuit is set up each switch on the circuit is informed of the traffic class of the connection. ATM traffic contracts form part of the mechanism by which quality of service (QoS) is ensured. There are four basic types (and several variants) which each have a set of parameters describing the connection.
CBR Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant.
VBR Variable bit rate: an average or Sustainable Cell Rate (SCR) is specified, which can peak at a certain level, a PCR, for a maximum interval before being problematic.
ABR Available bit rate: a minimum guaranteed rate is specified.
UBR Unspecified bit rate: traffic is allocated to all remaining transmission capacity.
VBR has real-time and non-real-time variants, and serves for bursty traffic. Non-real-time is sometimes abbreviated to vbr-nrt. Most traffic classes also introduce the concept of cell-delay variation tolerance (CDVT), which defines the clumping of cells in time.
Traffic policing
To maintain network performance, networks may apply traffic policing to virtual circuits to limit them to their traffic contracts at the entry points to the network, i.e. the user–network interfaces (UNIs) and network-to-network interfaces (NNIs) using usage/network parameter control (UPC and NPC). The reference model given by the ITU-T and ATM Forum for UPC and NPC is the generic cell rate algorithm (GCRA), which is a version of the leaky bucket algorithm. CBR traffic will normally be policed to a PCR and CDVT alone, whereas VBR traffic will normally be policed using a dual leaky bucket controller to a PCR and CDVT and an SCR and maximum burst size (MBS). The MBS will normally be the packet (SAR-SDU) size for the VBR VC in cells.
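The GCRA named above is compact enough to sketch directly. The following Python sketch shows its virtual-scheduling form for policing a PCR against a CDVT; it is a minimal illustration with invented names, not any particular switch's implementation. The dual leaky bucket used for VBR simply runs two such instances, one for PCR/CDVT and one for SCR/MBS.

```python
# Minimal sketch of the generic cell rate algorithm (GCRA), virtual
# scheduling form. increment = 1/PCR (seconds per cell); limit = CDVT.
# Non-conforming cells would be dropped or tagged with CLP = 1.

class GCRA:
    def __init__(self, increment: float, limit: float):
        self.T = increment    # expected inter-cell spacing (1 / PCR)
        self.tau = limit      # tolerance (CDVT)
        self.tat = 0.0        # theoretical arrival time of the next cell

    def conforming(self, arrival: float) -> bool:
        """Return True if a cell arriving at time `arrival` conforms."""
        if arrival < self.tat - self.tau:
            return False      # cell arrived too early: non-conforming
        self.tat = max(arrival, self.tat) + self.T
        return True

# Police a stream at PCR = 1000 cells/s with a 0.5 ms tolerance.
policer = GCRA(increment=1e-3, limit=0.5e-3)
for t in (0.0, 0.0004, 0.0011, 0.0012):
    print(f"{t:.4f}s -> {'conforming' if policer.conforming(t) else 'non-conforming'}")
```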
If the traffic on a virtual circuit exceeds its traffic contract, as determined by the GCRA, the network can either drop the cells or set the Cell Loss Priority (CLP) bit, allowing the cells to be dropped at a congestion point. Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic as discarding a single cell will invalidate a packet's worth of cells. As a result, schemes such as partial packet discard (PPD) and early packet discard (EPD) have been developed to discard a whole packet's cells. This reduces the number of useless cells in the network, saving bandwidth for full packets. EPD and PPD work with AAL5 connections as they use the end of packet marker: the ATM user-to-ATM user (AUU) indication bit in the payload-type field of the header, which is set in the last cell of a SAR-SDU.
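The packet-discard logic just described can also be sketched briefly. The Python sketch below is a simplified, single-VC illustration of partial packet discard with invented names; real switches apply this per virtual circuit and typically keep the final cell of a discarded packet so that AAL5 reassembly can resynchronize on the next packet boundary.

```python
# Illustrative sketch of partial packet discard (PPD) on an AAL5 virtual
# circuit: once one cell of a packet is dropped, the remaining cells of
# that packet are also dropped, except the end-of-packet cell (AUU bit
# set), which is forwarded as a boundary marker.

def ppd_filter(cells, conforming):
    """Yield the cells to forward.

    `cells` is an iterable of (payload, end_of_packet) tuples and
    `conforming` decides whether a given cell passes policing.
    """
    discarding = False
    for payload, end_of_packet in cells:
        if discarding:
            if end_of_packet:
                discarding = False
                yield payload, end_of_packet   # keep the EOP cell as a marker
            continue
        if conforming(payload):
            yield payload, end_of_packet
        else:
            discarding = not end_of_packet     # drop the rest of this packet

# Example: drop every cell whose payload is b"X"; True marks end of packet.
stream = [(b"A", False), (b"X", False), (b"B", True), (b"C", False), (b"D", True)]
print(list(ppd_filter(stream, conforming=lambda p: p != b"X")))
```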
Traffic shaping
Traffic shaping usually takes place in the network interface controller (NIC) in user equipment, and attempts to ensure that the cell flow on a VC will meet its traffic contract, i.e. cells will not be dropped or reduced in priority at the UNI. Since the reference model given for traffic policing in the network is the GCRA, this algorithm is normally used for shaping as well, and single and dual leaky bucket implementations may be used as appropriate.
Reference model
The ATM network reference model approximately maps to the three lowest layers of the OSI reference model. It specifies the following layers:
At the physical network level, ATM specifies a layer that is equivalent to the OSI physical layer.
The ATM layer 2 roughly corresponds to the OSI data link layer.
The OSI network layer is implemented as the ATM adaptation layer (AAL).
Deployment
ATM became popular with telephone companies and many computer makers in the 1990s. However, even by the end of the decade, the better price–performance ratio of Internet Protocol-based products was competing with ATM technology for integrating real-time and bursty network traffic. Additionally, among cable companies using ATM there often would be discrete and competing management teams for telephony, video on demand, and broadcast and digital video reception, which adversely impacted efficiency. Companies such as FORE Systems focused on ATM products, while other large vendors such as Cisco Systems provided ATM as an option. After the burst of the dot-com bubble, some still predicted that "ATM is going to dominate". However, in 2005 the ATM Forum, which had been the trade organization promoting the technology, merged with groups promoting other technologies, and eventually became the Broadband Forum.
Wireless or mobile ATM
Wireless ATM, or mobile ATM, consists of an ATM core network with a wireless access network. ATM cells are transmitted from base stations to mobile terminals. Mobility functions are performed at an ATM switch in the core network, known as a crossover switch, which is similar to the mobile switching center of GSM networks.
The advantage of wireless ATM is its high bandwidth and high-speed handoffs done at layer 2. In the early 1990s, Bell Labs and NEC research labs worked actively in this field. Andy Hopper from the University of Cambridge Computer Laboratory also worked in this area. There was a wireless ATM forum formed to standardize the technology behind wireless ATM networks. The forum was supported by several telecommunication companies, including NEC, Fujitsu and AT&T. Mobile ATM aimed to provide high-speed multimedia communications technology, capable of delivering broadband mobile communications beyond that of GSM and WLANs.
See also
VoATM
ATM25
Notes
References
Further reading
External links
ATM ChipWeb - Chip and NIC database
ITU-T recommendations
Link protocols
Networking standards | Asynchronous Transfer Mode | [
"Technology",
"Engineering"
] | 3,859 | [
"Networking standards",
"Computer standards",
"Asynchronous Transfer Mode",
"Computer networks engineering"
] |
2,500 | https://en.wikipedia.org/wiki/Anus | In mammals, invertebrates and most fish, the anus (: anuses or ani; from Latin, 'ring' or 'circle') is the external body orifice at the exit end of the digestive tract (bowel), i.e. the opposite end from the mouth. Its function is to facilitate the expulsion of wastes that remain after digestion.
Bowel contents that pass through the anus include the gaseous flatus and the semi-solid feces, which (depending on the type of animal) include: indigestible matter such as bones, hair pellets, endozoochorous seeds and digestive rocks; residual food material after the digestible nutrients have been extracted, for example cellulose or lignin; ingested matter which would be toxic if it remained in the digestive tract; excreted metabolites like bilirubin-containing bile; and dead mucosal epithelia or excess gut bacteria and other endosymbionts. Passage of feces through the anus is typically controlled by muscular sphincters, and failure to stop unwanted passages results in fecal incontinence.
Amphibians, reptiles and birds use a similar orifice (known as the cloaca) for excreting liquid and solid wastes, for copulation and egg-laying. Monotreme mammals also have a cloaca, which is thought to be a feature inherited from the earliest amniotes. Marsupials have a single orifice for excreting both solids and liquids and, in females, a separate vagina for reproduction. Female placental mammals have completely separate orifices for defecation, urination, and reproduction; males have one opening for defecation and another for both urination and reproduction, although the channels flowing to that orifice are almost completely separate.
The development of the anus was an important stage in the evolution of multicellular animals. It appears to have happened at least twice, following different paths in protostomes and deuterostomes. This accompanied or facilitated other important evolutionary developments: the bilaterian body plan, the coelom, and metamerism, in which the body was built of repeated "modules" which could later specialize, such as the heads of most arthropods, which are composed of fused, specialized segments.
In comb jellies, there are species with one and sometimes two permanent anuses, while species like the warty comb jelly grow a temporary anus, which then disappears when it is no longer needed.
Development
In animals at least as complex as an earthworm, the embryo forms a dent on one side, the blastopore, which deepens to become the archenteron, the first phase in the growth of the gut. In deuterostomes, the original dent becomes the anus while the gut eventually tunnels through to make another opening, which forms the mouth. The protostomes were so named because it was thought that in their embryos the dent formed the mouth first (proto– meaning "first") and the anus was formed later at the opening made by the other end of the gut. Research from 2001 shows that in protostomes the edges of the dent close up in the middle, leaving openings at the ends which become the mouths and anuses.
See also
References
External links
Digestive system | Anus | [
"Biology"
] | 696 | [
"Digestive system",
"Organ systems"
] |
2,504 | https://en.wikipedia.org/wiki/Amphetamine | Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity; it is also used to treat binge eating disorder in the form of its inactive prodrug lisdexamfetamine. Amphetamine was discovered as a chemical in 1887 by Lazăr Edeleanu, and then as a drug in the late 1920s. It exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use.
The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems.
At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, decreased appetite, elevated heart rate, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., hallucinations, delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects.
Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and , both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while is a positional isomer of amphetamine that differs only in the placement of the methyl group.
Uses
Medical
Amphetamine is used to treat attention deficit hyperactivity disorder (ADHD), narcolepsy, obesity, and, in the form of lisdexamfetamine, binge eating disorder. It is sometimes prescribed for its past medical indications, particularly for depression and chronic pain.
ADHD
Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, long-term use of pharmaceutical amphetamines at therapeutic doses appears to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. Additionally, a 2024 meta-analytic systematic review reported moderate improvements in quality of life when amphetamine treatment is used for ADHD. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult. A 2025 meta-analytic systematic review of 113 randomized controlled trials demonstrated that stimulant medications significantly improved core ADHD symptoms in adults over a three-month period, with good acceptability compared to other pharmacological and non-pharmacological treatments.
Models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Stimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals.
Binge eating disorder
Binge eating disorder (BED) is characterized by recurrent and persistent episodes of compulsive binge eating. These episodes are often accompanied by marked distress and a feeling of loss of control over eating. The pathophysiology of BED is not fully understood, but it is believed to involve dysfunctional dopaminergic reward circuitry along the cortico-striatal-thalamic-cortical loop. As of July 2024, lisdexamfetamine is the only USFDA- and TGA-approved pharmacotherapy for BED. Evidence suggests that lisdexamfetamine's treatment efficacy in BED is underpinned at least in part by a psychopathological overlap between BED and ADHD, with the latter conceptualized as a cognitive control disorder that also benefits from treatment with lisdexamfetamine.
Lisdexamfetamine's therapeutic effects for BED primarily involve direct action in the central nervous system after conversion to its pharmacologically active metabolite, dextroamphetamine. Centrally, dextroamphetamine increases neurotransmitter activity of dopamine and norepinephrine in prefrontal cortical regions that regulate cognitive control of behavior. Similar to its therapeutic effect in ADHD, dextroamphetamine enhances cognitive control and may reduce impulsivity in patients with BED by enhancing the cognitive processes responsible for overriding prepotent feeding responses that precede binge eating episodes. In addition, dextroamphetamine's actions outside of the central nervous system may also contribute to its treatment effects in BED. Peripherally, dextroamphetamine triggers lipolysis through noradrenergic signaling in adipose fat cells, leading to the release of triglycerides into blood plasma to be utilized as a fuel substrate. Dextroamphetamine also activates TAAR1 in peripheral organs along the gastrointestinal tract that are involved in the regulation of food intake and body weight. Together, these actions confer an anorexigenic effect that promotes satiety in response to feeding and may decrease binge eating as a secondary effect.
Medical reviews of randomized controlled trials have demonstrated that lisdexamfetamine, at doses of 50–70 mg, is safe and effective for the treatment of moderate-to-severe BED in adults. These reviews suggest that lisdexamfetamine is persistently effective at treating BED and is associated with significant reductions in the number of binge eating days and binge eating episodes per week. Furthermore, a meta-analytic systematic review highlighted an open-label, 12-month extension safety and tolerability study that reported lisdexamfetamine remained effective at reducing the number of binge eating days for the duration of the study. In addition, both a review and a meta-analytic systematic review found lisdexamfetamine to be superior to placebo in several secondary outcome measures, including persistent binge eating cessation, reduction of obsessive-compulsive related binge eating symptoms, reduction of body weight, and reduction of triglycerides. Lisdexamfetamine, like all pharmaceutical amphetamines, has direct appetite suppressant effects that may be therapeutically useful in both BED and its comorbidities. Based on reviews of neuroimaging studies involving BED-diagnosed participants, therapeutic neuroplasticity in dopaminergic and noradrenergic pathways from long-term use of lisdexamfetamine may be implicated in lasting improvements in the regulation of eating behaviors that are observed even after the drug is discontinued.
Narcolepsy
Narcolepsy is a chronic sleep-wake disorder that is associated with excessive daytime sleepiness, cataplexy, and sleep paralysis. Patients with narcolepsy are diagnosed as either type 1 or type 2, with only the former presenting cataplexy symptoms. Type 1 narcolepsy results from the loss of approximately 70,000 orexin-releasing neurons in the lateral hypothalamus, leading to significantly reduced cerebrospinal orexin levels; this reduction is a diagnostic biomarker for type 1 narcolepsy. Lateral hypothalamic orexin neurons innervate every component of the ascending reticular activating system (ARAS), which includes noradrenergic, dopaminergic, histaminergic, and serotonergic nuclei that promote wakefulness.
Amphetamine’s therapeutic mode of action in narcolepsy primarily involves increasing monoamine neurotransmitter activity in the ARAS. This includes noradrenergic neurons in the locus coeruleus, dopaminergic neurons in the ventral tegmental area, histaminergic neurons in the tuberomammillary nucleus, and serotonergic neurons in the dorsal raphe nucleus. Dextroamphetamine, the more dopaminergic enantiomer of amphetamine, is particularly effective at promoting wakefulness because dopamine release has the greatest influence on cortical activation and cognitive arousal, relative to other monoamines. In contrast, levoamphetamine may have a greater effect on cataplexy, a symptom more sensitive to the effects of norepinephrine and serotonin. Noradrenergic and serotonergic nuclei in the ARAS are involved in the regulation of the REM sleep cycle and function as "REM-off" cells, with amphetamine's effect on norepinephrine and serotonin contributing to the suppression of REM sleep and a possible reduction of cataplexy at high doses.
The American Academy of Sleep Medicine (AASM) 2021 clinical practice guideline conditionally recommends dextroamphetamine for the treatment of both type 1 and type 2 narcolepsy. Treatment with pharmaceutical amphetamines is generally less preferred relative to other stimulants (e.g., modafinil) and is considered a third-line treatment option. Medical reviews indicate that amphetamine is safe and effective for the treatment of narcolepsy. Amphetamine appears to be most effective at improving symptoms associated with hypersomnolence, with three reviews finding clinically significant reductions in daytime sleepiness in patients with narcolepsy. Additionally, these reviews suggest that amphetamine may dose-dependently improve cataplexy symptoms. However, the quality of evidence for these findings is low and is consequently reflected in the AASM's conditional recommendation for dextroamphetamine as a treatment option for narcolepsy.
Enhancing performance
Cognitive performance
In 2015, a systematic review and a meta-analysis of high quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine D1 receptor and α2-adrenergic receptor in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control.
Physical performance
Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature.
Recreational
Amphetamine, specifically the more dopaminergic dextrorotatory enantiomer (dextroamphetamine), is also used recreationally as a euphoriant and aphrodisiac, and, like other amphetamines, is used as a club drug for its energetic and euphoric high. Dextroamphetamine (d-amphetamine) is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. A notable part of the 1960s mod subculture in the UK was recreational amphetamine use, which was used to fuel all-night dances at clubs like Manchester's Twisted Wheel. Newspaper reports described dancers emerging from clubs at 5 a.m. with dilated pupils. Mods used the drug for stimulation and alertness, which they viewed as different from the intoxication caused by alcohol and other drugs. Dr. Andrew Wilson argues that for a significant minority, "amphetamines symbolised the smart, on-the-ball, cool image" and that they sought "stimulation not intoxication [...] greater awareness, not escape" and "confidence and articulacy" rather than the "drunken rowdiness of previous generations." Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit, a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement and positively-valenced emotions, particularly ones involving pleasure. Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open Dexedrine capsules and crush the contents in order to insufflate (snort) it or subsequently dissolve it in water and inject it. Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets). Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops.
Contraindications
According to the International Programme on Chemical Safety (IPCS) and the U.S. Food and Drug Administration (FDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the FDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the FDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical.
Adverse effects
The adverse side effects of amphetamine are many and varied, and the amount of amphetamine used is the primary factor in determining the likelihood and severity of adverse effects. Amphetamine products such as Adderall, Dexedrine, and their generic equivalents are currently approved by the U.S. FDA for long-term therapeutic use. Recreational use of amphetamine generally involves much larger doses, which have a greater risk of serious adverse drug effects than dosages used for therapeutic purposes.
Physical
Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses.
Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids.
FDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease.
Psychological
At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the FDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility.
Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine.
Reinforcement disorders
Addiction
Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction.
Biomolecular mechanisms
Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are Delta FBJ murine osteosarcoma viral oncogene homolog B (ΔFosB), cAMP response element binding protein (CREB), and nuclear factor-kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others.
ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs.
The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation.
Pharmacological treatments
There is currently no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction.
A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil.
Behavioral treatments
A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these.
Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or immunoreactivity in the striatum or other parts of the reward system.
Dependence and withdrawal
Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect.
According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose.
Overdose
An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Moderate and extremely large overdoses produce a range of symptoms; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide.
Toxicity
In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability.
Psychosis
An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that some users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use.
Drug interactions
Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with MAOIs, particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. Norepinephrine reuptake inhibitors (NRIs) like atomoxetine prevent norepinephrine release induced by amphetamines and have been found to reduce the stimulant, euphoriant, and sympathomimetic effects of dextroamphetamine in humans.
In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic).
Pharmacology
Pharmacodynamics
Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, increase dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum.
Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of TAAR1 increases cyclic AMP (cAMP) production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons.
In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression, a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival in vitro. The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique receptor of its own. Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the 5-HT1A receptor, where it acts as an agonist with low micromolar affinity.
The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with the monoamine transporters, vesicular transporters, receptors, and enzymes described in this section, and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain.
Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects.
Dopamine
In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through DAT or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization (non-competitive reuptake inhibition), but PKC-mediated phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, TAAR1 reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state.
Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, VMAT2. Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at DAT.
Norepinephrine
Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal TAAR1 expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at phosphorylated NET, competitive NET reuptake inhibition, and norepinephrine release from VMAT2.
Serotonin
Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via VMAT2 and, like norepinephrine, is thought to phosphorylate SERT via TAAR1. Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor.
Other neurotransmitters, peptides, hormones, and enzymes
Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through VMAT2. Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis.
In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects; but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma.
Pharmacokinetics
The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically 90%. Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. A fraction of the amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue.
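The pH dependence described above follows the Henderson–Hasselbalch relationship. As an illustrative sketch (using the pKa of 9.9 quoted above; the gastrointestinal pH values are assumed round numbers, not figures from the original text), the fraction of amphetamine present as the absorbable free base is

\[
\frac{[\text{free base}]}{[\text{total}]} = \frac{1}{1 + 10^{\,\mathrm{p}K_\mathrm{a} - \mathrm{pH}}},
\]

which is on the order of 10^-8 at a strongly acidic pH of 2, roughly 1% at pH 8, and 50% only at pH 9.9. Even modest increases in gastrointestinal pH therefore raise the absorbable un-ionized fraction by orders of magnitude.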
The half-lives of amphetamine enantiomers differ and vary with urine pH. At normal urine pH, the half-lives of dextroamphetamine and levoamphetamine differ modestly. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose respectively. Amphetamine is eliminated via the kidneys, with a substantial fraction of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose.
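Assuming simple first-order elimination (a standard approximation), the half-life bounds quoted above translate directly into large differences in clearance. As an illustrative calculation (the 24-hour time point is chosen arbitrarily):

\[
\frac{N(t)}{N_0} = 2^{-t/t_{1/2}}, \qquad 2^{-24/7} \approx 0.09, \qquad 2^{-24/34} \approx 0.61,
\]

so after 24 hours roughly 9% of a dose remains with highly acidic urine (7-hour half-life) versus about 61% with highly alkaline urine (34-hour half-life), which is why urinary acidifying and alkalinizing agents change amphetamine excretion so markedly.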
CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including several hydroxylated metabolites as well as benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, norephedrine and some of the hydroxylated metabolites remain sympathomimetically active. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination.
Pharmacomicrobiomics
The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics.
Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of E. coli commonly found in the human gut, was identified in 2019. This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds.
Related endogenous compounds
Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and N-methylphenethylamine, a structural isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from L-phenylalanine by the aromatic amino acid decarboxylase (AADC) enzyme, which converts L-DOPA into dopamine as well. In turn, N-methylphenethylamine is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and N-methylphenethylamine regulate monoamine neurotransmission via TAAR1; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine.
Chemistry
Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula C9H13N. The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor, and acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for certain stereoselective syntheses.
Substituted derivatives
The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups.
Synthesis
Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields N-formylamphetamine. This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt.
A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with an enantiopure chiral acid, classically tartaric acid, to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure α-methylbenzylamine is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine.
A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta chloropropylbenzene which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted with acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route uses a double alkylation with methyl iodide followed by benzyl chloride to build up 2-methyl-3-phenylpropanoic acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4).
A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional groups. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields phenyl-2-nitropropene. The double bond and nitro group of this intermediate are reduced using either catalytic hydrogenation or by treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminum hydride (method 6).
Detection in body fluids
Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs, prescription drugs that are metabolized to amphetamines (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for days.
For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with a suitable derivatizing agent allows for the detection of methamphetamine in urine. GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug.
History, society, and culture
Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes.
Amphetamine is illegally synthesized in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states, 11.9 million adults have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the per-gram "street price" of illicit amphetamine within the EU varied widely during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA.
Legal status
As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Brazil (class A3), Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment.
Pharmaceutical products
Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively.
Notes
Image legend
Reference notes
References
External links
– Dextroamphetamine
– Levoamphetamine
Comparative Toxicogenomics Database entry: Amphetamine
Comparative Toxicogenomics Database entry: CARTPT
5-HT1A agonists
Anorectics
Aphrodisiacs
Attention deficit hyperactivity disorder management
Carbonic anhydrase activators
Drugs acting on the cardiovascular system
Drugs acting on the nervous system
Drugs in sport
Ergogenic aids
Euphoriants
Excitatory amino acid reuptake inhibitors
German inventions
Human drug metabolites
Monoaminergic activity enhancers
Narcolepsy
Nootropics
Norepinephrine-dopamine releasing agents
Phenethylamines
Stimulants
Substituted amphetamines
TAAR1 agonists
VMAT inhibitors
World Anti-Doping Agency prohibited substances | Amphetamine | [
"Chemistry"
] | 13,240 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
2,506 | https://en.wikipedia.org/wiki/Asynchronous%20communication | In telecommunications, asynchronous communication is transmission of data, generally without the use of an external clock signal, where data can be transmitted intermittently rather than in a steady stream. Any timing required to recover data from the communication symbols is encoded within the symbols.
The most significant aspect of asynchronous communications is that data is not transmitted at regular intervals, thus making possible variable bit rate, and that the transmitter and receiver clock generators do not have to be exactly synchronized all the time. In asynchronous transmission, data is sent one byte at a time, with each byte preceded by a start bit and followed by one or more stop bits.
Physical layer
In asynchronous serial communication in the physical protocol layer, the data blocks are code words of a certain word length, for example octets (bytes) or ASCII characters, delimited by start bits and stop bits. A variable length space can be inserted between the code words. No bit synchronization signal is required. This is sometimes called character oriented communication. Examples include MNP2 and modems older than V.2.
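To make the framing concrete, here is a minimal sketch (illustrative only; the helper name and the 8-N-1, least-significant-bit-first convention are assumptions, since the article does not fix a particular character format):

```python
def frame_byte_8n1(byte: int) -> list[int]:
    """Frame one byte for asynchronous (8-N-1) serial transmission.

    The idle line is high (1). A start bit (0) marks the beginning of the
    character, the 8 data bits follow least-significant bit first, and a
    stop bit (1) returns the line to idle. No clock is transmitted; the
    receiver re-synchronizes on the falling edge of each start bit.
    """
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]                     # start + data + stop

if __name__ == "__main__":
    # Frame the ASCII character 'A' (0x41) and print the line levels.
    print(frame_byte_8n1(ord("A")))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Because each character carries its own start and stop bits, an arbitrary idle gap may separate characters, which is exactly the variable-length spacing described above.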
Data link layer and higher
Asynchronous communication at the data link layer or higher protocol layers is known as statistical multiplexing, for example Asynchronous Transfer Mode (ATM). In this case, the asynchronously transferred blocks are called data packets, for example ATM cells. The opposite is circuit switched communication, which provides constant bit rate, for example ISDN and SONET/SDH.
The packets may be encapsulated in a data frame, with a frame synchronization bit sequence indicating the start of the frame, and sometimes also a bit synchronization bit sequence, typically 01010101, for identification of the bit transition times. Note that at the physical layer, this is considered as synchronous serial communication. Examples of packet mode data link protocols that can be/are transferred using synchronous serial communication are the HDLC, Ethernet, PPP and USB protocols.
Application layer
An asynchronous communication service or application does not require a constant bit rate. Examples are file transfer, email and the World Wide Web. An example of the opposite, a synchronous communication service, is realtime streaming media, for example IP telephony, IPTV and video conferencing.
Electronically mediated communication
Electronically mediated communication often happens asynchronously in that the participants do not communicate concurrently. Examples include email and bulletin-board systems, where participants send or post messages at different times than they read them. The term "asynchronous communication" acquired currency in the field of online learning, where teachers and students often exchange information asynchronously instead of synchronously (that is, simultaneously), as they would in face-to-face or in telephone conversations.
See also
Synchronization in telecommunications
Asynchronous serial communication
Asynchronous system
Asynchronous circuit
Anisochronous
Baud rate
Plesiochronous
Plesiochronous Digital Hierarchy (PDH)
References
Synchronization
Telecommunications techniques | Asynchronous communication | [
"Engineering"
] | 649 | [
"Telecommunications engineering",
"Synchronization"
] |
4,077,375 | https://en.wikipedia.org/wiki/Directivity | In electromagnetics, directivity is a parameter of an antenna or optical system which measures the degree to which the radiation emitted is concentrated in a single direction. It is the ratio of the radiation intensity in a given direction from the antenna to the radiation intensity averaged over all directions. Therefore, the directivity of a hypothetical isotropic radiator, a source of electromagnetic waves which radiates the same power in all directions, is 1, or 0 dBi.
An antenna's directivity is greater than its gain by an efficiency factor, radiation efficiency. Directivity is an important measure because many antennas and optical systems are designed to radiate electromagnetic waves in a single direction or over a narrow-angle. By the principle of reciprocity, the directivity of an antenna when receiving is equal to its directivity when transmitting.
The directivity of an actual antenna can vary from 1.76 dBi for a short dipole to as much as 50 dBi for a large dish antenna.
Definition
The directivity, D, of an antenna is defined for all incident angles of an antenna. The term "directive gain" is deprecated by IEEE. If an angle relative to the antenna is not specified, then directivity is presumed to refer to the axis of maximum radiation intensity.
D(θ, φ) = U(θ, φ) / (P_total / 4π) = 4π U(θ, φ) / P_total

Here θ and φ are the zenith angle and azimuth angle respectively in the standard spherical coordinate angles; U(θ, φ) is the radiation intensity, which is the power per unit solid angle; and P_total is the total radiated power. The quantities U and P_total satisfy the relation
P_total = ∮ U(θ, φ) dΩ = ∫ (θ from 0 to π) ∫ (φ from 0 to 2π) U(θ, φ) sin θ dφ dθ

that is, the total radiated power is the power per unit solid angle integrated over a spherical surface. Since there are 4π steradians on the surface of a sphere, the quantity P_total / 4π represents the average power per unit solid angle.
In other words, directivity is the radiation intensity of an antenna at a particular coordinate combination divided by what the radiation intensity would have been had the antenna been an isotropic antenna radiating the same amount of total power into space.
Directivity, if a direction is not specified, is the maximal directive gain value found among all possible solid angles:

D_max = max over (θ, φ) of D(θ, φ) = 4π U_max / P_total
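As a numerical illustration of this definition (a sketch, not part of the original text; the cos²θ pattern is an arbitrary test case whose exact maximum directivity is 6, or about 7.78 dBi), the directivity can be computed by integrating the radiation intensity over the sphere:

```python
import numpy as np

def max_directivity(U, n_theta=721, n_phi=720):
    """Maximum directivity D = 4*pi*U_max / P_rad of an intensity U(theta, phi).

    P_rad is obtained by numerically integrating U over the full sphere,
    with solid-angle element sin(theta) dtheta dphi.
    """
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    u = U(T, P)
    d_theta = np.pi / (n_theta - 1)
    d_phi = 2.0 * np.pi / n_phi
    p_rad = np.sum(u * np.sin(T)) * d_theta * d_phi   # simple Riemann sum
    return 4.0 * np.pi * u.max() / p_rad

# Assumed example pattern: U = cos^2(theta) on the upper hemisphere, zero below.
U = lambda t, p: np.where(t <= np.pi / 2.0, np.cos(t) ** 2, 0.0)
D = max_directivity(U)
print(round(D, 2), round(10.0 * np.log10(D), 2))   # ~6.0 and ~7.78 dBi
```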
In antenna arrays
In an antenna array the directivity is a complicated calculation in the general case. For a linear array the directivity will always be less than or equal to the number of elements. For a standard linear array (SLA), where the element spacing is λ/2, the directivity is equal to the inverse of the square of the 2-norm of the array weight vector, under the assumption that the weight vector is normalized such that its sum is unity:

D = 1 / ‖w‖²
In the case of a uniformly weighted (un-tapered) SLA, this reduces to simply N, the number of array elements.
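A short numerical sketch (illustrative only; it assumes the half-wavelength-spaced standard linear array for which the weight-norm expression above holds) shows how tapering trades directivity for lower sidelobes:

```python
import numpy as np

def sla_directivity(weights):
    """Directivity of a standard (half-wavelength-spaced) linear array.

    With the weight vector normalized so its elements sum to one,
    D = 1 / ||w||^2. For uniform weights this equals N; any taper
    lowers it.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize so the weights sum to unity
    return 1.0 / np.sum(w ** 2)

N = 16
print(sla_directivity(np.ones(N)))      # 16.0  (uniform, un-tapered)
print(sla_directivity(np.hamming(N)))   # ~11.2 (a Hamming taper costs directivity)
```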
For a planar array, the computation of directivity is more complicated and requires consideration of the positions of each array element with respect to all the others and with respect to wavelength. For a planar rectangular or hexagonally spaced array with non-isotropic elements, the maximum directivity can be estimated using the universal ratio of effective aperture to directivity, λ²/4π:

D_max ≈ 4π N dx dy η / λ²

where N is the number of elements, dx and dy are the element spacings in the x and y dimensions and η is the "illumination efficiency" of the array that accounts for tapering and spacing of the elements in the array. For an un-tapered array with elements at less than λ spacing, η = 1. Note that for an un-tapered standard rectangular array (SRA), where dx = dy = λ/2, this reduces to D_max ≈ πN. For an un-tapered standard rectangular array (SRA), where dx = dy = λ, this reduces to a maximum value of D_max ≈ 4πN. The directivity of a planar array is the product of the array gain and the directivity of an element (assuming all of the elements are identical) only in the limit as element spacing becomes much larger than lambda. In the case of a sparse array, where element spacing is greater than λ, η is reduced because the array is not uniformly illuminated.
There is a physically intuitive reason for this relationship; essentially there are a limited number of photons per unit area to be captured by the individual antennas. Placing two high gain antennas very close to each other (less than a wavelength) does not buy twice the gain, for example. Conversely, if the antennas are more than a wavelength apart, there are photons that fall between the elements and are not collected at all. This is why the physical aperture size must be taken into account.
Let's assume a 16×16 un-tapered standard rectangular array (which means that elements are spaced at λ/2). The array gain is 10·log10(256) ≈ 24.1 dB. If the array were tapered, this value would go down. The directivity, assuming isotropic elements, is 25.9dBi. Now assume elements with 9.0dBi directivity. The directivity is not 33.1dBi, but rather is only 29.2dBi. The reason for this is that the effective aperture of the individual elements limits their directivity, so the aperture estimate D_max ≈ πN = 804 (29.05 dBi) applies. Note that in this case η = 1 because the array is un-tapered. Why the slight difference from 29.05 dBi? The elements around the edge of the array aren't as limited in their effective aperture as are the majority of elements.
Now let's move the array elements to λ spacing. From the above formula, we expect the directivity to peak at 4πN, about 35.07 dBi. The actual result is 34.6380 dBi, just shy of the ideal 35.0745 dBi we expected. Why the difference from the ideal? If the spacing in the x and y dimensions is λ, then the spacing along the diagonals is λ√2, thus creating tiny regions in the overall array where photons are missed, leading to η < 1.
Now increase the spacing to many wavelengths. The result now should converge to N times the element gain, or 24.1 dB + 9 dBi = 33.1 dBi. The actual result is, in fact, 33.1 dBi.
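The aperture-based figures quoted in this example can be reproduced with a few lines of arithmetic; the sketch below (illustrative only, assuming full illumination efficiency in the estimate D ≈ 4πN·dx·dy/λ²) recovers the 29.05 dBi and 35.07 dBi values mentioned above:

```python
import numpy as np

def planar_array_directivity_db(n_elements, dx, dy, wavelength=1.0, efficiency=1.0):
    """Aperture estimate of planar-array directivity in dBi.

    Uses D = 4*pi * A_eff / lambda^2 with A_eff = N * dx * dy * efficiency,
    where dx and dy are the element spacings (here given in wavelengths).
    """
    d = 4.0 * np.pi * n_elements * dx * dy * efficiency / wavelength ** 2
    return 10.0 * np.log10(d)

N = 16 * 16
print(planar_array_directivity_db(N, 0.5, 0.5))  # ~29.05 dBi at lambda/2 spacing
print(planar_array_directivity_db(N, 1.0, 1.0))  # ~35.07 dBi at lambda spacing
```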
For antenna arrays, a closed form expression for the directivity of a progressively phased array of isotropic sources can be written as

D(θ0, φ0) = |Σn an|² / [ Σm Σn am an* exp(j(βm − βn)) sinc(k dmn) ],   with sinc(x) = sin(x)/x,

where,
N is the total number of elements on the aperture;
(xn, yn, zn) represents the location of elements in Cartesian co-ordinate system;
an is the complex excitation coefficient of the n-th element;
βn = −k(xn sin θ0 cos φ0 + yn sin θ0 sin φ0 + zn cos θ0) is the phase component (progressive phasing);
k = 2π/λ is the wavenumber;
(θ0, φ0) is the angular location of the far-field target;
dmn is the Euclidean distance between the m-th and n-th element on the aperture, and
the asterisk denotes complex conjugation.
Further studies on directivity expressions for various cases, such as sources that are omnidirectional (even in the array environment), prototype element-patterns of other forms, and excitations not restricted to progressive phasing, can be found in the literature.
Relation to beam width
The beam solid angle, represented as Ω_A, is defined as the solid angle which all power would flow through if the antenna radiation intensity were constant at its maximal value. If the beam solid angle is known, then maximum directivity can be calculated as

D_max = 4π / Ω_A
which is simply the ratio of the solid angle of a sphere (4π steradians) to the beam solid angle.
The beam solid angle can be approximated for antennas with one narrow major lobe and very negligible minor lobes by simply multiplying the half-power beamwidths (in radians) in two perpendicular planes. The half-power beamwidth is simply the angle in which the radiation intensity is at least half of the peak radiation intensity.
The same calculations can be performed in degrees rather than in radians:

D_max ≈ 41,253 / (Θ1° · Θ2°)

where Θ1° is the half-power beamwidth in one plane (in degrees) and Θ2° is the half-power beamwidth in a plane at a right angle to the other (in degrees).
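As a rough illustration (the beamwidths are assumed round numbers, not figures from the original text), an antenna with 10° half-power beamwidths in both principal planes would have an estimated maximum directivity of

\[
D_{\max} \approx \frac{41{,}253}{10 \times 10} \approx 413 \quad (\approx 26.2\ \text{dBi}),
\]

a back-of-the-envelope figure that real antennas with sidelobes will fall somewhat short of.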
In planar arrays, a better approximation is

D_max ≈ 32,400 / (Θ1° · Θ2°)
For an antenna with a conical (or approximately conical) beam with a half-power beamwidth of Θ degrees, elementary integral calculus yields an expression for the directivity as

D = 2 / (1 − cos(Θ/2)).
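As a quick illustrative check of this expression (the 60° beamwidth is assumed for the example), a conical beam with a 60° half-power beamwidth gives

\[
D = \frac{2}{1 - \cos(30^\circ)} \approx \frac{2}{0.134} \approx 14.9 \quad (\approx 11.7\ \text{dBi}).
\]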
Expression in decibels
The directivity is rarely expressed as the unitless number but rather as a decibel comparison to a reference antenna:

D_dB = 10 · log10(D / D_ref)

The reference antenna is usually the theoretical perfect isotropic radiator, which radiates uniformly in all directions and hence has a directivity of 1. The calculation is therefore simplified to

D_dBi = 10 · log10(D)

Another common reference antenna is the theoretical perfect half-wave dipole, which radiates perpendicular to itself with a directivity of 1.64:

D_dBd = 10 · log10(D / 1.64) = D_dBi - 2.15 dB
Accounting for polarization
When polarization is taken under consideration, three additional measures can be calculated:
Partial directive gain
Partial directive gain is the power density in a particular direction and for a particular component of the polarization, divided by the average power density for all directions and all polarizations. For any pair of orthogonal polarizations (such as left-hand-circular and right-hand-circular), the individual power densities simply add to give the total power density. Thus, if expressed as dimensionless ratios rather than in dB, the total directive gain is equal to the sum of the two partial directive gains.
Partial directivity
Partial directivity is calculated in the same manner as the partial directive gain, but without consideration of antenna efficiency (i.e. assuming a lossless antenna). It is similarly additive for orthogonal polarizations.
Partial gain
Partial gain is calculated in the same manner as gain, but considering only a certain polarization. It is similarly additive for orthogonal polarizations.
In other areas
The term directivity is also used with other systems.
With directional couplers, directivity is a measure of the difference, in dB, between the power output at a coupled port when power is transmitted in the desired direction and the power output at the same coupled port when the same amount of power is transmitted in the opposite direction.
In acoustics, it is used as a measure of the radiation pattern from a source indicating how much of the total energy from the source is radiating in a particular direction. In electro-acoustics, these patterns commonly include omnidirectional, cardioid and hyper-cardioid microphone polar patterns. A loudspeaker with a high degree of directivity (narrow dispersion pattern) can be said to have a high Q.
See also
Directional antenna
References
Further reading
Antennas (radio)
Microphone technology
Radio electronics | Directivity | [
"Engineering"
] | 2,070 | [
"Radio electronics"
] |
4,078,061 | https://en.wikipedia.org/wiki/Papagoite | Papagoite is a rare cyclosilicate mineral. Chemically, it is a calcium copper aluminium silicate hydroxide, found as a secondary mineral on slip surfaces and in altered granodiorite veins, either in massive form or as microscopic crystals that may form spherical aggregates. Its chemical formula is CaCuAlSi2O6(OH)3.
It was discovered in 1960 in Ajo, Arizona, US, and was named after the Hia C-ed O'odham people (also known as the Sand Papago) who inhabit the area. This location is the only papagoite source within the United States, while worldwide it is also found in South Africa and Namibia. It is associated with aurichalcite, shattuckite, ajoite and baryte in Arizona, and with quartz, native copper and ajoite in South Africa. Its bright blue color is the mineral's most notable characteristic.
It is used as a gemstone.
References
Calcium minerals
Copper(II) minerals
Aluminium minerals
Cyclosilicates
Monoclinic minerals
Minerals in space group 12
Gemstones | Papagoite | [
"Physics"
] | 232 | [
"Materials",
"Gemstones",
"Matter"
] |
4,079,050 | https://en.wikipedia.org/wiki/Gifts%20in%20kind | Gifts in kind, also referred to as in-kind donations, is a kind of charitable giving in which, instead of giving money to buy needed goods and services, the goods and services themselves are given. Gifts in kind are distinguished from gifts of cash or stock. Some types of gifts in kind are appropriate, but others are not. Examples of in-kind gifts include goods like food, clothing, medicines, furniture, office equipment, and building materials. Performance of services, providing office space or offering administrative support, may also be counted as in-kind gifts.
While many attest to the benefits of in-kind over cash gifts, others have argued for their disadvantages, particularly in the context of disaster relief.
Arguments in favor of gifts in kind
Reduction in waste of materials
Many donated goods are either second hand or otherwise surplus. If not donated to people who need them, they might otherwise end up in a landfill. Thus, it is argued that gifts in kind reduce resource use and pollution. This provides a means, particularly for corporations, of doing social good with things that would otherwise be a liability.
Use in disaster relief
During disasters and other humanitarian crises, companies and individuals often want to help with the disaster relief operations. Some people have argued that giving goods that are already at hand is more cost effective for the donor than giving money to buy these same goods, thus reducing the cost of buying the goods afresh, particularly in the face of shortages.
Long-term development aid
Helping with longer term development in impoverished or otherwise distressed areas is a high priority for governments and large NGOs. It is argued that gifts in kind can be a significant component of a larger humanitarian development strategy.
Lower susceptibility to corruption
It has been argued that donated goods are much less susceptible to graft because physical goods are more tangible than money.
However, the argument may be reversed in the modern context, now that there exist mobile phone-based payment mechanisms such as m-Pesa that have been used successfully for cash transfer programs, making cash transfers less dependent on intermediaries than the shipping of physical goods.
Great impact for small cost
Gifts in kind supply a market efficiency that is difficult to attain by other means. For example, many charities that provide life-saving medications to people in impoverished nations could not afford to buy these drugs using their cash donations or grants alone. Donated drugs help these organizations to work most effectively at a much lower cost.
Access to goods which are not readily available
Some products are simply not available but are still desperately needed. An example is anti-malarial drugs, which are unavailable in many areas of the world where they are most needed, and if they are available, the people who need them are not in a position to purchase them. They are not manufactured locally and the costs of setting up local manufacturing facilities would be prohibitive, given the regulations surrounding pharmaceuticals. There is a high likelihood of locally available drugs being counterfeit, with often fatal consequences.
Corporate social responsibility
As more and more companies continue to acknowledge their corporate social responsibility, they are also recognizing the benefits of participating in gifts in kind programs. In The Business Case for Product Philanthropy, a 63-page report published by the Indiana University School of Public and Environmental Affairs, authors Justin M. Ross and Kellie L. McGiverin-Bohan argue that businesses can do well by doing good through product philanthropy, as well as explore the advantages of donating goods over the liquidation and/or destruction of goods. In addition, with cash donations on the decrease over the past several years, offering donations of goods and services is a way for corporations to continue pursuing their philanthropic goals.
Arguments against gifts in kind
Matching of donation to recipient needs
One of the chief criticisms of gifts in kind, particularly in the context of disaster relief but also in other contexts, is that the things that people are likely to gift may be poorly matched to the immediate needs of recipients, but rather be influenced by what donors happen to want to dispose of. Some of the possibilities are:
The donated items may not be needed by the recipients at all.
The donated items may be needed by the recipients but are available locally and the cost of shipping the items from a remote location is far more than the cost of obtaining them locally. In the context of disaster relief, a large influx of donated goods may clog the ports making it difficult for needed emergency supplies to reach their recipients.
The donated items may be needed somewhat by the recipients, but it may be more beneficial if the items were sold to the highest payer and the money thus collected be used to meet other needs of the intended recipients.
Empowerment of recipients
In addition to the argument that gifts in kind often do not meet the needs of recipients, it has also been argued that gifts in kind fail to empower the recipients because the recipients don't have as much flexibility on how to spend the gifts as they would with gifts of cash or of public goods that they actively solicit. Relatedly, it has been argued that sending gifts in kind without checking on what the recipients may actually need may be disrespectful to the recipients, and in some cases self-centered and narcissistic, being focused on the needs of the donor rather than the recipient.
Impact on local economies
Some critics of gifts in kind argue that, like dumping, these have an artificial adverse impact on local industries producing similar goods.
Response to criticism
Improved communication between recipients and donors
Some of the downsides of gifts in kind may be mitigated by allowing recipients to communicate their needs to donors, thus helping donors and recipients match up. This has been made possible with the advent of the Internet as it is now possible to create an online marketplace for in-kind donations. Gifts in Kind International operates a network called Good360 that aims to do exactly this. Occupy Sandy volunteers use a sort of gift registry for this purpose; families and businesses impacted by the storm make specific requests, which remote donors can purchase directly via a web site.
The majority of online giving marketplaces, including GlobalGiving and DonorsChoose, however, are focused on cash donations, though the nonprofits seeking these donations usually specify what types of things they intend to buy with a given donation quantity.
Standards for gifts in kind
Global Hand has published a series of standards for gifts in kind. The principles include:
Need driven: Driven by a genuine and thorough understanding of the needs of the recipients.
Quality controlled: Goods are carefully chosen, of appropriate quality, and in consultation with the recipient.
Determined by informed choices.
Avoiding aid dependency.
and many more.
The Tales from the Hood blog has argued that there are two preconditions for successful gifts in kind:
Gifts in kind should not drive the design of the charity program or aid program. Rather, the program should be evaluated based on the evidence and the appropriate gifts should be determined based on that evidence.
Gifts in kind should not be used to substitute for other needed items if they do not fit the requirements well.
Charity stores
Unlike a disaster relief scenario, the needs of a charity shop are long-term and more flexible; any item that can be sold at a price higher than the cost of warehousing it could be worthwhile. Large non-profits, such as Goodwill Industries, are also able to make use of items that cannot be sold in their thrift stores, for example by bundling them and selling them as bulk material or scrap. These stores refuse donations that cost money to dispose of safely if unwanted, such as e-waste.
See also
Corporate social responsibility
Gifts In Kind International
Cash transfer
Literature
Janet Currie and Firouz Gahvari: Transfers in Cash and In-Kind: Theory Meets the Data, Journal of Economic Literature, 2008, 46(2): 333-383.
References
Giving
Private aid programs | Gifts in kind | [
"Biology"
] | 1,579 | [
"Behavior",
"Altruism",
"Private aid programs"
] |
4,079,234 | https://en.wikipedia.org/wiki/Sassolite | Sassolite is a borate mineral, specifically the mineral form of boric acid. It is usually white to gray, and colourless in transmitted light. It can also take on a yellow colour from sulfur impurities, or brown from iron oxides.
History and occurrence
Its mineral form was first described in 1800, and was named after Sasso Pisano, Castelnuovo Val di Cecina, Pisa Province, Tuscany, Italy where it was found. The mineral may be found in lagoons throughout Tuscany and Sasso. It is also found in the Lipari Islands and the US state of Nevada. It occurs in volcanic fumaroles and hot springs, deposited from steam, as well as in bedded sedimentary evaporite deposits.
See also
List of minerals
Borax
References
External links
Borate minerals
Triclinic minerals
Luminescent minerals
Minerals in space group 2 | Sassolite | [
"Chemistry"
] | 180 | [
"Luminescence",
"Luminescent minerals"
] |
4,079,600 | https://en.wikipedia.org/wiki/Lists%20of%20useful%20plants | This article contains a list of useful plants, meaning a plant that has been or can be co-opted by humans to fulfill a particular need. Rather than listing all plants on one page, this page instead collects the lists and categories for the different ways in which a plant can be used; some plants may fall into several of the categories or lists below, and some lists overlap (for example, the term "crop" covers both edible and non-edible agricultural products).
Edible plants
:Category:Edible plants
:Category:Cereals
List of edible flowers
:Category:Forages
:Category:Grains
:Category:Spices
List of culinary herbs and spices
Fruits and vegetables
:Category:Fruit
:Category:Edible nuts and seeds
:Category:Vegetables
:Category:Inflorescence vegetables
:Category:Leaf vegetables
:Category:Root vegetables
:Category:Edible seaweeds
:Category:Stem vegetables
Forestry
:Category:Wood
:Category:Shrubs
:Category:Trees
Medicine, drugs, psychoactives
:Category:Medicinal plants
:Category:Medicinal herbs and fungi
List of plants used for smoking
Other economic purposes
:Category:Crops
:Category:Energy crops
List of beneficial weeds
References
External links
Plants For A Future
Permaculture Information Web
Plant Resources of Tropical Africa (PROTA)
Handbook of Energy Crops
Lost Crops of Africa: Volume 1: Grains
Lost Crops of the Incas
Bibliography on underutilized roots and tubers crops
Australian New Crops Web Site
Global Facilitation Unit for Underutilized Species
UN Centre for the Alleviation of Poverty through Secondary Crops' Development in Asia and the Pacific (UNCAPSA)
Traditional African Vegetables
ECHO (Educational Concerns for Hunger Organization)
Useful plants
Agriculture-related lists | Lists of useful plants | [
"Biology"
] | 347 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
4,079,673 | https://en.wikipedia.org/wiki/Optical%20transfer%20function | The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector is a scale-dependent description of its imaging contrast. Its magnitude is the image contrast of a harmonic intensity pattern as a function of its spatial frequency, while its complex argument indicates a phase shift in the periodic pattern. The optical transfer function is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain.
Formally, the optical transfer function is defined as the Fourier transform of the point spread function (PSF, that is, the impulse response of the optics, the image of a point source). As a Fourier transform, the OTF is generally complex-valued; however, it is real-valued in the common case of a PSF that is symmetric about its center. In practice, the imaging contrast, as given by the magnitude or modulus of the optical-transfer function, is of primary importance. This derived function is commonly referred to as the modulation transfer function (MTF).
The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited, imaging system with a circular pupil. Its transfer function decreases approximately gradually with spatial frequency until it reaches the diffraction-limit, in this case at 500 cycles per millimeter or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that although the out-of-focus system has very low contrast at spatial frequencies around 250 cycles/mm, the contrast at spatial frequencies near the diffraction limit of 500 cycles/mm is diffraction-limited. Close observation of the image in panel (f) shows that the image of the large spoke densities near the center of the spoke target is relatively sharp.
Definition and related concepts
Since the optical transfer function (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a complex-valued function of spatial frequency. The projection of a specific periodic pattern is represented by a complex number with absolute value and complex argument proportional to the relative contrast and translation of the projected pattern, respectively.
Often the contrast reduction is of most interest and the translation of the pattern can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the modulation transfer function (MTF). Its values indicate how much of the object's contrast is captured in the image as a function of spatial frequency. The MTF tends to decrease with increasing spatial frequency from 1 to 0 (at the diffraction limit); however, the function is often not monotonic. On the other hand, when also the pattern translation is important, the complex argument of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the phase transfer function (PhTF). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions:

\mathrm{OTF}(\nu) = \mathrm{MTF}(\nu)\, e^{\,i\,\mathrm{PhTF}(\nu)}

where

\mathrm{MTF}(\nu) = \left|\mathrm{OTF}(\nu)\right|, \qquad \mathrm{PhTF}(\nu) = \arg\!\left(\mathrm{OTF}(\nu)\right),

and $\arg(\cdot)$ represents the complex argument function, while $\nu$ is the spatial frequency of the periodic pattern. In general $\nu$ is a vector with a spatial frequency for each dimension, i.e. it also indicates the direction of the periodic pattern.
The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. By consequence, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope.
True to the definition of transfer function, $\mathrm{OTF}(0)$ should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is most important. It is thus common practice to normalize the optical transfer function to the detected intensity, hence $\mathrm{OTF}(0) = 1$.
Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. E.g. the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors.
Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the contrast transfer function (CTF).
Examples
The OTF of an ideal lens system
A perfect lens system will provide a high contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, non-aberrated, f/4 optical imaging system used, at the visible wavelength of 500 nm, would have the optical transfer function depicted in the right hand figure.
It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter, in other words the optical resolution of the image projection is 1/500 of a millimeter, or 2 micrometer. Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved, disc. Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between the two is typically a matter of a multiplication or division. E.g. a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meter by a factor of 100 to 200.
The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, more in particular by their separation distance. As explained by the Nyquist–Shannon sampling theorem, to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, aliasing may lead to a further reduction of the image fidelity.
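The quoted numbers for this example are consistent with the standard incoherent diffraction cutoff of 1/(λN) for an aberration-free system at f-number N; the short Python check below (which assumes that textbook formula rather than anything specific to this article) reproduces the 500 cycles per millimetre cutoff and the 1 μm Nyquist sampling pitch.

wavelength_mm = 500e-6                      # 500 nm expressed in millimetres
f_number = 4
cutoff = 1.0 / (wavelength_mm * f_number)   # incoherent diffraction cutoff, cycles/mm
pixel_pitch_um = 1e3 / (2.0 * cutoff)       # Nyquist sampling pitch in micrometres
print(cutoff, pixel_pitch_um)               # 500.0 cycles/mm, 1.0 um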
OTF of an imperfect lens system
An imperfect, aberrated imaging system could possess the optical transfer function depicted in the following figure.
As the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero on several occasions even for spatial frequencies lower than 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and vice versa, this is referred to as contrast inversion, directly related to the sign reversal in the real part of the optical transfer function, and represents itself as a shift by half a period for some periodic patterns.
While it could be argued that the resolution of both the ideal and the imperfect system is 2 μm, or 500 LP/mm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs, 10 μm, or 100 LP/mm. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function.
The OTF of an optical system with a non-rotational symmetric aberration
Optical systems, and in particular optical aberrations are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer function or modulation transfer functions are thus generally two-dimensional functions. The following figures shows the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotational-symmetric aberration.
Optical transfer functions are not always real-valued. Period patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotational-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicate phase. It can be seen that, while for the rotational symmetric aberrations the phase is either 0 or π and thus the transfer function is real valued, for the non-rotational symmetric aberration the transfer function has an imaginary component and the phase varies continuously.
Practical example – high-definition video system
While optical resolution, as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language.
Taking the example of a current high definition (HD) video system, with 1920 by 1080 pixels, the Nyquist theorem states that it should be possible, in a perfect system, to resolve fully (with true black to white transitions) a total of 1920 black and white alternating lines combined, otherwise referred to as a spatial frequency of 1920/2=960 line pairs per picture width, or 960 cycles per picture width, (definitions in terms of cycles per unit angle or per mm are also possible but generally less clear when dealing with cameras and more appropriate to telescopes etc.). In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard definition television picture derived from a film scanner that uses oversampling, as described later, may appear sharper than a high definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed, the former having full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall.
The three-dimensional optical transfer function
Although one typically thinks of an image as planar, or two-dimensional, the imaging system will produce a three-dimensional intensity distribution in image space that in principle can be measured. e.g. a two-dimensional sensor could be translated to capture a three-dimensional intensity distribution. The image of a point source is also a three dimensional (3D) intensity distribution which can be represented by a 3D point-spread function. As an example, the figure on the right shows the 3D point-spread function in object space of a wide-field microscope (a) alongside that of a confocal microscope (c). Although the same microscope objective with a numerical aperture of 1.49 is used, it is clear that the confocal point spread function is more compact both in the lateral dimensions (x,y) and the axial dimension (z). One could rightly conclude that the resolution of a confocal microscope is superior to that of a wide-field microscope in all three dimensions.
A three-dimensional optical transfer function can be calculated as the three-dimensional Fourier transform of the 3D point-spread function. Its color-coded magnitude is plotted in panels (b) and (d), corresponding to the point-spread functions shown in panels (a) and (c), respectively. The transfer function of the wide-field microscope has a support that is half of that of the confocal microscope in all three-dimensions, confirming the previously noted lower resolution of the wide-field microscope. Note that along the z-axis, for x = y = 0, the transfer function is zero everywhere except at the origin. This missing cone is a well-known problem that prevents optical sectioning using a wide-field microscope.
The two-dimensional optical transfer function at the focal plane can be calculated by integration of the 3D optical transfer function along the z-axis. Although the 3D transfer function of the wide-field microscope (b) is zero on the z-axis for z ≠ 0, its integral, the 2D optical transfer function, reaches a maximum at x = y = 0. This is only possible because the 3D optical transfer function diverges at the origin x = y = z = 0. The function values along the z-axis of the 3D optical transfer function correspond to the Dirac delta function.
Calculation
Most optical design software has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems such as in the examples here are readily calculated numerically using software such as Julia, GNU Octave or Matlab, and in some specific cases even analytically. The optical transfer function can be calculated following two approaches:
as the Fourier transform of the incoherent point spread function, or
as the auto-correlation of the pupil function of the optical system
Mathematically both approaches are equivalent. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach.
Example
Ideal lens system with circular aperture
Auto-correlation of the pupil function
Since the optical transfer function is the Fourier transform of the point spread function, and the point spread function is the square absolute of the inverse Fourier transformed pupil function, the optical transfer function can also be calculated directly from the pupil function. From the convolution theorem it can be seen that the optical transfer function is in fact the autocorrelation of the pupil function.
The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks whose centers are separated by a distance of $2\nu$, where $\nu$ is the spatial frequency normalized to the highest transmitted frequency. In general the optical transfer function is normalized to a maximum value of one for $\nu = 0$, so the resulting area should be divided by $\pi$, the area of the unit disk.
The intersecting area can be calculated as the sum of the areas of two identical circular segments, each of area $(\theta - \sin\theta)/2$, where $\theta$ is the circle-segment angle. By substituting $\nu = \cos(\theta/2)$, and using the equalities $\sin\theta = 2\sin(\theta/2)\cos(\theta/2)$ and $\sin(\theta/2) = \sqrt{1-\nu^2}$, the expression for the area can be rewritten as $2\left(\arccos\nu - \nu\sqrt{1-\nu^2}\right)$. Hence the normalized optical transfer function is given by:

\mathrm{OTF}(\nu) = \frac{2}{\pi}\left(\arccos\nu - \nu\sqrt{1-\nu^2}\right), \qquad 0 \le \nu \le 1.
A more detailed discussion can be found in the references.
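Both calculation routes for the ideal circular pupil are easy to reproduce numerically. The Python sketch below evaluates the closed form derived above and, as a cross-check, autocorrelates a sampled unit-radius disk pupil; the grid size and sampling range are arbitrary choices.

import numpy as np

def mtf_circular_closed_form(nu):
    """Diffraction-limited MTF of a circular pupil; nu is normalized to the cutoff."""
    nu = np.clip(np.asarray(nu, dtype=float), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu ** 2))

def otf_by_pupil_autocorrelation(n=512):
    """Autocorrelate a unit-radius disk pupil (via the Fourier transform) and
    normalize its peak to one; the support extends to twice the pupil radius."""
    x = np.linspace(-2.0, 2.0, n)
    xx, yy = np.meshgrid(x, x)
    pupil = (xx ** 2 + yy ** 2 <= 1.0).astype(float)
    acorr = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
    return acorr / acorr.max()

print(mtf_circular_closed_form([0.0, 0.25, 0.5, 0.75, 1.0]))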
Numerical evaluation
The one-dimensional optical transfer function can be calculated as the discrete Fourier transform of the line spread function. This data is graphed against the spatial frequency data. In this case, a sixth order polynomial is fitted to the MTF vs. spatial frequency curve to show the trend. The 50% cutoff frequency is determined to yield the corresponding spatial frequency. Thus, the approximate position of best focus of the unit under test is determined from this data.
The Fourier transform of the line spread function (LSF) cannot, in general, be determined analytically. Therefore, the Fourier transform is numerically approximated using the discrete Fourier transform (Chapra, S.C.; Canale, R.P. (2006). Numerical Methods for Engineers, 5th ed. New York: McGraw-Hill):

\mathrm{MTF}(k) = \left|\sum_{n=0}^{N-1} \mathrm{LSF}(x_n)\, e^{-i 2\pi k n / N}\right|

where
$\mathrm{MTF}(k)$ = the value of the MTF at the $k$-th discrete spatial frequency;
$N$ = number of data points;
$n$ = index;
$\mathrm{LSF}(x_n)$ = the $n$-th term of the LSF data;
$x_n$ = pixel position.
The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph.
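A compact version of this procedure (omitting the polynomial fit and the 50% cutoff search) can be written with a fast Fourier transform. In the Python sketch below, the Gaussian test LSF and the 1 μm sample pitch are purely illustrative.

import numpy as np

def mtf_from_lsf(lsf, pixel_pitch):
    """MTF as the magnitude of the discrete Fourier transform of a sampled LSF.
    Returns spatial frequencies (cycles per unit length) and the normalized MTF."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                        # normalize so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch)
    return freqs, mtf

# Illustrative use: a synthetic Gaussian LSF sampled every 0.001 mm (1 micrometre)
x = np.arange(-32, 33) * 1e-3
lsf = np.exp(-0.5 * (x / 2e-3) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch=1e-3)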
The vectorial transfer function
At high numerical apertures such as those found in microscopy, it is important to consider the vectorial nature of the fields that carry light. By decomposing the waves in three independent components corresponding to the Cartesian axes, a point spread function can be calculated for each component and combined into a vectorial point spread function. Similarly, a vectorial optical transfer function can be determined, as shown in the references.
Measurement
The optical transfer function is not only useful for the design of optical system, it is also valuable to characterize manufactured systems.
Starting from the point spread function
The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. The optical transfer function is thus readily obtained by first acquiring the image of a point source, and applying the two-dimensional discrete Fourier transform to the sampled image. Such a point-source can, for example, be a bright light behind a screen with a pin hole, a fluorescent or metallic microsphere, or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile as it can fully characterize optics with spatial varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source.
Using extended test objects for spatially invariant optics
When the aberrations can be assumed to be spatially invariant, alternative patterns can be used to determine the optical transfer function such as lines and edges. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image and divided by that of the extended object. Typically either a line or a black-white edge is used.
The line-spread function
The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; by consequence, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles.
The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target or it can be derived from the edge spread function, discussed in the next sub section.
Edge-spread function
The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the edge spread function (ESF). However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement only determines a single axis of the optical transfer function, so repeated measurements are necessary if the optical system cannot be assumed to be rotationally symmetric.
As shown in the right hand figure, an operator defines a box area encompassing the edge of a knife-edge test target image back-illuminated by a black body. The box area is defined to be approximately 10% of the total frame area. The image pixel data is translated into a two-dimensional array (pixel intensity and pixel position). The amplitude (pixel intensity) of each line within the array is normalized and averaged. This yields the edge spread function.
where
ESF = the output array of normalized pixel intensity data
= the input array of pixel intensity data
= the ith element of
= the average value of the pixel intensity data
= the standard deviation of the pixel intensity data
= number of pixels used in average
The line spread function is identical to the first derivative of the edge spread function, which is differentiated using numerical methods. In case it is more practical to measure the edge spread function, one can determine the line spread function as follows:

\mathrm{LSF}(x) = \frac{d}{dx}\,\mathrm{ESF}(x)

Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the finite difference:

\mathrm{LSF}(x_i) \approx \frac{\mathrm{ESF}(x_{i+1}) - \mathrm{ESF}(x_{i-1})}{x_{i+1} - x_{i-1}}

where:
$i$ = the index;
$x_i$ = position of the $i$-th pixel;
$\mathrm{ESF}(x_i)$ = ESF value at the $i$-th pixel.
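The finite-difference step can be written in a couple of lines of Python; np.gradient uses central differences in the interior, which is one reasonable choice among several. The resulting LSF can then be passed to an MTF routine such as the one sketched earlier.

import numpy as np

def lsf_from_esf(esf, pixel_pitch):
    """Differentiate a sampled edge spread function to obtain the line spread function."""
    return np.gradient(np.asarray(esf, dtype=float), pixel_pitch)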
Using a grid of black and white lines
Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square wave pattern is used (simple black and white lines) not only is there more risk of aliasing, but account must be taken of the fact that the fundamental component of a square wave is higher than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square wave result is sometimes referred to as the 'contrast transfer function' (CTF).
Factors affecting MTF in typical camera systems
In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that can appear 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a 'phase plate' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce aliasing by eliminating spatial frequencies above the Nyquist rate of the display.
Oversampling and downconversion to maintain the optical transfer function
The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than samples in the final image, and 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s when flying spot scanners, and later CCD line scanners, were developed, which sampled more pixels than were needed and then downconverted, which is why movies have always looked sharper on television than other material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by convolution with a two-dimensional sin(x)/x weighting function which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image processing programs such as Photoshop.
Just as standard definition video with a high contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digitally filtering. With movies now being shot in 4k and even 8k video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal quality 10-megapixel still camera. Because of the problem of maintaining a high contrast MTF, broadcasters like the BBC did for a long time consider maintaining standard definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing).
Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'modulation transfer function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency.
Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35 mm film, but has become an insurmountable limitation with the very small format sensors used in some digital cameras and especially video cameras. First generation HD consumer camcorders used 1/4-inch sensors, for which apertures smaller than about f4 begin to limit resolution. Even professional video cameras mostly use 2/3 inch sensors, prohibiting the use of apertures around f16 that would have been considered normal for film formats. Certain cameras (such as the Pentax K10D) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range.
Trend to large-format DSLRs and improved MTF potential
There has recently been a shift towards the use of large image format digital single-lens reflex cameras driven by the need for low-light sensitivity and narrow depth of field effects. This has led to such cameras becoming preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. Such cameras produce very impressive results, and appear to be leading the way in video production towards large-format downconversion with digital filtering becoming the standard approach to the realization of a flat MTF with true freedom from aliasing.
Digital inversion of the OTF
Due to optical effects the contrast may be sub-optimal and approaches zero before the Nyquist frequency of the display is reached. The optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing. Although more advanced digital image restoration procedures exist, the Wiener deconvolution algorithm is often used for its simplicity and efficiency. Since this technique multiplies the spatial spectral components of the image, it also amplifies noise and errors due to e.g. aliasing. It is therefore only effective on good quality recordings with a sufficiently high signal-to-noise ratio.
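A minimal frequency-domain sketch of Wiener deconvolution in Python is shown below. It assumes the PSF has already been sampled on the same grid as the image and centred so that an ifftshift places its peak at the array origin, and it uses a constant assumed noise-to-signal ratio rather than a measured noise spectrum.

import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Amplify spatial frequencies attenuated by the OTF, with regularization.
    psf must have the same shape as image and be centred; nsr is an assumed
    constant noise-to-signal power ratio."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    wiener_filter = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * wiener_filter))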
Limitations
In general, the point spread function, the image of a point source also depends on factors such as the wavelength (color), and field angle (lateral point source position). When such variation is sufficiently gradual, the optical system could be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly, the optical transfer function does not describe the optical system accurately. Inaccuracies can often be mitigated by a collection of optical transfer functions at well-chosen wavelengths or field-positions. However, a more complex characterization may be necessary for some imaging systems such as the Light field camera.
See also
Bokeh
Gamma correction
Minimum resolvable contrast
Minimum resolvable temperature difference
Optical resolution
Signal-to-noise ratio
Signal transfer function
Strehl ratio
Transfer function
Wavefront coding
References
External links
"Modulation transfer function", by Glenn D. Boreman on SPIE Optipedia.
"How to Measure MTF and other Properties of Lenses", by Optikos Corporation.
Transfer function | Optical transfer function | [
"Physics",
"Chemistry"
] | 5,925 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
4,079,830 | https://en.wikipedia.org/wiki/List%20of%20edible%20flowers | This is a list of edible flowers.
See also
List of culinary herbs and spices
List of edible nuts
Lists of useful plants
References
Lists of flowers | List of edible flowers | [
"Biology"
] | 45 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
4,080,651 | https://en.wikipedia.org/wiki/Trypsinization | Trypsinization is the process of cell dissociation using trypsin, a proteolytic enzyme which breaks down proteins, to dissociate adherent cells from the vessel in which they are being cultured. When added to cell culture, trypsin breaks down the proteins that enable the cells to adhere to the vessel. Trypsinization is often used to pass cells to a new vessel. When the trypsinization process is complete the cells will be in suspension and appear rounded.
For experimental purposes, cells are often cultivated in containers that take the form of plastic flasks or plates. In such flasks, cells are provided with a growth medium comprising the essential nutrients required for proliferation, and the cells adhere to the container and each other as they grow.
This process of cell culture or tissue culture requires a method to dissociate the cells from the container and each other. Trypsin, an enzyme commonly found in the digestive tract, can be used to "digest" the proteins that facilitate adhesion to the container and between cells.
Once cells have detached from their container it is necessary to deactivate the trypsin, unless the trypsin is synthetic, as cell surface proteins will also be cleaved over time and this will affect cell functioning. Serum can be used to inactivate trypsin, as it contains protease inhibitors. Because of the presence of these inhibitors, the serum must be removed before treatment of a growth vessel with trypsin and must not be added again to the growth vessel until cells have detached from their growth surface - this detachment can be confirmed by visual observation using a microscope.
Trypsinization is often used to permit passage of adherent cells to a new container, observation for experimentation, or reduction of the degree of confluency in a culture flask through the removal of a percentage of the cells.
References
Cell culture | Trypsinization | [
"Biology"
] | 393 | [
"Model organisms",
"Cell culture"
] |
4,080,917 | https://en.wikipedia.org/wiki/Data%20transformation%20%28computing%29 | In computing, data transformation is the process of converting data from one format or structure into another format or structure. It is a fundamental aspect of most data integration and data management tasks such as data wrangling, data warehousing, data integration and application integration.
Data transformation can be simple or complex based on the required changes to the data between the source (initial) data and the target (final) data. Data transformation is typically performed via a mixture of manual and automated steps. Tools and technologies used for data transformation can vary widely based on the format, structure, complexity, and volume of the data being transformed.
A master data recast is another form of data transformation where the entire database of data values is transformed or recast without extracting the data from the database. All data in a well designed database is directly or indirectly related to a limited set of master database tables by a network of foreign key constraints. Each foreign key constraint is dependent upon a unique database index from the parent database table. Therefore, when the proper master database table is recast with a different unique index, the directly and indirectly related data are also recast or restated. The directly and indirectly related data may also still be viewed in the original form since the original unique index still exists with the master data. Also, the database recast must be done in such a way as to not impact the applications architecture software.
When the data mapping is indirect via a mediating data model, the process is also called data mediation.
Data transformation process
Data transformation can be divided into the following steps, each applicable as needed based on the complexity of the transformation required.
Data discovery
Data mapping
Code generation
Code execution
Data review
These steps are often the focus of developers or technical data analysts who may use multiple specialized tools to perform their tasks.
The steps can be described as follows:
Data discovery is the first step in the data transformation process. Typically the data is profiled using profiling tools or sometimes using manually written profiling scripts to better understand the structure and characteristics of the data and decide how it needs to be transformed.
Data mapping is the process of defining how individual fields are mapped, modified, joined, filtered, aggregated etc. to produce the final desired output. Developers or technical data analysts traditionally perform data mapping since they work in the specific technologies to define the transformation rules (e.g. visual ETL tools, transformation languages).
Code generation is the process of generating executable code (e.g. SQL, Python, R, or other executable instructions) that will transform the data based on the desired and defined data mapping rules. Typically, the data transformation technologies generate this code based on the definitions or metadata defined by the developers.
Code execution is the step whereby the generated code is executed against the data to create the desired output. The executed code may be tightly integrated into the transformation tool, or it may require separate steps by the developer to manually execute the generated code.
Data review is the final step in the process, which focuses on ensuring the output data meets the transformation requirements. It is typically the business user or final end-user of the data that performs this step. Any anomalies or errors in the data that are found and communicated back to the developer or data analyst as new requirements to be implemented in the transformation process.
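As a toy illustration of the mapping and execution steps, the Python sketch below expresses a few transformation rules as plain functions and applies them to some source records; the field names, values and rules are invented for the example.

source_records = [
    {"first_name": "Ada", "last_name": "Lovelace", "salary": "10000"},
    {"first_name": "Alan", "last_name": "Turing", "salary": "12000"},
]

# "Data mapping": each target field is defined by a rule over a source record
mapping_rules = {
    "full_name": lambda r: f'{r["first_name"]} {r["last_name"]}',
    "salary": lambda r: int(r["salary"]),       # type conversion
}

# "Code execution": apply the rules to every record to produce the target data
target_records = [
    {field: rule(record) for field, rule in mapping_rules.items()}
    for record in source_records
]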
Types of data transformation
Batch data transformation
Traditionally, data transformation has been a bulk or batch process, whereby developers write code or implement transformation rules in a data integration tool, and then execute that code or those rules on large volumes of data. This process can follow the linear set of steps as described in the data transformation process above.
Batch data transformation is the cornerstone of virtually all data integration technologies such as data warehousing, data migration and application integration.
When data must be transformed and delivered with low latency, the term "microbatch" is often used. This refers to small batches of data (e.g. a small number of rows or small set of data objects) that can be processed very quickly and delivered to the target system when needed.
Benefits of batch data transformation
Traditional data transformation processes have served companies well for decades. The various tools and technologies (data profiling, data visualization, data cleansing, data integration etc.) have matured and most (if not all) enterprises transform enormous volumes of data that feed internal and external applications, data warehouses and other data stores.
Limitations of traditional data transformation
This traditional process also has limitations that hamper its overall efficiency and effectiveness.
The people who need to use the data (e.g. business users) do not play a direct role in the data transformation process. Typically, users hand over the data transformation task to developers who have the necessary coding or technical skills to define the transformations and execute them on the data.
This process leaves the bulk of the work of defining the required transformations to the developer, which often in turn do not have the same domain knowledge as the business user. The developer interprets the business user requirements and implements the related code/logic. This has the potential of introducing errors into the process (through misinterpreted requirements), and also increases the time to arrive at a solution.
This problem has given rise to the need for agility and self-service in data integration (i.e. empowering the user of the data and enabling them to transform the data themselves interactively).
There are companies that provide self-service data transformation tools. They are aiming to efficiently analyze, map and transform large volumes of data without the technical knowledge and process complexity that currently exists. While these companies use traditional batch transformation, their tools enable more interactivity for users through visual platforms and easily repeated scripts.
Still, there might be some compatibility issues (e.g. new data sources like IoT may not work correctly with older tools) and compliance limitations due to the difference in data governance, preparation and audit practices.
Interactive data transformation
Interactive data transformation (IDT) is an emerging capability that allows business analysts and business users the ability to directly interact with large datasets through a visual interface, understand the characteristics of the data (via automated data profiling or visualization), and change or correct the data through simple interactions such as clicking or selecting certain elements of the data.
Although interactive data transformation follows the same data integration process steps as batch data integration, the key difference is that the steps are not necessarily followed in a linear fashion and typically don't require significant technical skills for completion.
There are a number of companies that provide interactive data transformation tools, including Trifacta, Alteryx and Paxata. They are aiming to efficiently analyze, map and transform large volumes of data while at the same time abstracting away some of the technical complexity and processes which take place under the hood.
Interactive data transformation solutions provide an integrated visual interface that combines the previously disparate steps of data analysis, data mapping and code generation/execution and data inspection. That is, if changes are made at one step (like for example renaming), the software automatically updates the preceding or following steps accordingly. Interfaces for interactive data transformation incorporate visualizations to show the user patterns and anomalies in the data so they can identify erroneous or outlying values.
Once they've finished transforming the data, the system can generate executable code/logic, which can be executed or applied to subsequent similar data sets.
By removing the developer from the process, interactive data transformation systems shorten the time needed to prepare and transform the data, eliminate costly errors in interpretation of user requirements and empower business users and analysts to control their data and interact with it as needed.
Transformational languages
There are numerous languages available for performing data transformation. Many transformation languages require a grammar to be provided. In many cases, the grammar is structured using something closely resembling Backus–Naur form (BNF). There are numerous languages available for such purposes varying in their accessibility (cost) and general usefulness. Examples of such languages include:
AWK - one of the oldest and most popular textual data transformation languages;
Perl - a high-level language with both procedural and object-oriented syntax capable of powerful operations on binary or text data.
Template languages - specialized to transform data into documents (see also template processor);
TXL - prototyping language-based descriptions, used for source code or data transformation.
XSLT - the standard XML data transformation language (suitable by XQuery in many applications);
Additionally, companies such as Trifacta and Paxata have developed domain-specific transformational languages (DSL) for servicing and transforming datasets. The development of domain-specific languages has been linked to increased productivity and accessibility for non-technical users. Trifacta's “Wrangle” is an example of such a domain specific language.
Another advantage of the recent domain-specific transformational languages trend is that a domain-specific transformational language can abstract the underlying execution of the logic defined in the domain-specific transformational language. They can also utilize that same logic in various processing engines, such as Spark, MapReduce, and Dataflow. In other words, with a domain-specific transformational language, the transformation language is not tied to the underlying engine.
Although dedicated transformational languages are typically best suited for transformation, something as simple as regular expressions can be used to achieve useful transformations. A text editor such as vim, emacs or TextPad supports regular expressions with capture groups, which allows all instances of a particular pattern to be replaced with another pattern that reuses parts of the original match. For example:
foo ("some string", 42, gCommon);
bar (someObj, anotherObj);
foo ("another string", 24, gCommon);
bar (myObj, myOtherObj);
could both be transformed into a more compact form like:
foobar("some string", 42, someObj, anotherObj);
foobar("another string", 24, myObj, myOtherObj);
In other words, all instances of a function invocation of foo with three arguments, followed by a function invocation of bar with two arguments, would be replaced with a single function invocation of foobar using some or all of the original set of arguments.
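As a rough illustration (a sketch written just for this example; the pattern below is tailored to these exact foo/bar lines rather than being a general rewrite rule), the same replacement can be expressed as a single regular-expression substitution, shown here in Python:

import re

# Hypothetical input text containing paired foo()/bar() calls.
source = '''foo ("some string", 42, gCommon);
bar (someObj, anotherObj);
foo ("another string", 24, gCommon);
bar (myObj, myOtherObj);
'''

# One pattern matches a three-argument foo() call followed by a
# two-argument bar() call; the parenthesised groups capture the parts
# of the original text that are reused in the replacement.
pattern = re.compile(
    r'foo \(("[^"]*"), (\w+), \w+\);\n'
    r'bar \((\w+), (\w+)\);'
)

print(pattern.sub(r'foobar(\1, \2, \3, \4);', source))
# foobar("some string", 42, someObj, anotherObj);
# foobar("another string", 24, myObj, myOtherObj);

A comparable search-and-replace in a text editor works the same way: the capture groups carry the reusable pieces of each match into the replacement text.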
Another advantage of using regular expressions is that they will not fail the null transform test: run a sample program through a transformation that is not supposed to change anything, and the output should match the input. Many transformational languages fail this test.
See also
Data cleansing
Data mapping
Data integration
Data preparation
Data wrangling
Extract, transform, load
Information integration
References
External links
File Formats, Transformation, and Migration, a related Wikiversity article
Metadata
Articles with example C++ code
Data management
Data warehousing | Data transformation (computing) | [
"Technology"
] | 2,199 | [
"Data management",
"Metadata",
"Data"
] |
4,080,976 | https://en.wikipedia.org/wiki/K252a | K252a is an alkaloid isolated from Nocardiopsis bacteria. This staurosporine analog is a highly potent cell permeable inhibitor of CaM kinase and phosphorylase kinase (IC50 = 1.8 and 1.7 nmol/L, respectively). At higher concentrations it is also an efficient inhibitor of serine/threonine protein kinases (IC50 of 10 to 30 nmol/L).
K252a is reported to promote myogenic differentiation in C2 mouse myoblasts and has been shown to block the neuronal differentiation of rat pheochromocytoma PC12 cells by inhibition of trk tyrosine kinase activity.
K252a has been reported in preclinical research as a potential treatment for psoriasis.
K252a inhibits the NGF-induced tyrosine phosphorylation of TrkA, as demonstrated in PC12 cells incubated in the presence or absence of 10 ng/ml NGF with or without various concentrations of K252a.
See also
K252b
Lestaurtinib
ANA-12
Cyclotraxin B
References
Further reading
Indole alkaloids
Indolocarbazoles
Lactams
Protein kinase inhibitors
TrkB antagonists
Tertiary alcohols
Methyl esters | K252a | [
"Chemistry"
] | 274 | [
"Alkaloids by chemical classification",
"Indole alkaloids"
] |
4,081,099 | https://en.wikipedia.org/wiki/Identity%20transform | The identity transform is a data transformation that copies the source data into the destination data without change.
The identity transformation is considered an essential process in creating a reusable transformation library. By creating a library of variations of the base identity transformation, a variety of data transformation filters can be easily maintained. These filters can be chained together in a format similar to UNIX shell pipes.
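As a rough sketch of this idea (the stylesheet names copy.xsl and remove-ssn.xsl are hypothetical placeholders for filters derived from the identity transform, and the lxml package is assumed to be available), such a filter library can be chained in Python much as shell pipes chain programs:

from lxml import etree

# Hypothetical input document and filter stylesheets.
doc = etree.parse("person.xml")
filter_names = ["copy.xsl", "remove-ssn.xsl"]

# Compile each stylesheet once, then pipe the output of one filter
# into the input of the next, in order.
filters = [etree.XSLT(etree.parse(name)) for name in filter_names]
for apply_filter in filters:
    doc = apply_filter(doc)

print(etree.tostring(doc, pretty_print=True).decode())

Each filter only has to describe its local deviation from the identity transform; the chaining code stays the same no matter how many filters are in the library.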
Examples of recursive transforms
By changing small portions of the code, the "copy with recursion" approach can produce entirely new and different output, filtering or updating the input. Understanding this "identity by recursion" pattern makes it easier to understand the filters built from it.
Using XSLT
The most frequently cited example of the identity transform (for XSLT version 1.0) is the "copy.xsl" transform as expressed in XSLT. This transformation uses the xsl:copy command to perform the identity transformation:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
This template works by matching all attributes (@*) and other nodes (node()), copying each node matched, then applying the identity transformation to all attributes and child nodes of the context node. This recursively descends the element tree and outputs each node in the same structure in which it was found in the original file, within the limitations of what information is considered significant in the XPath data model. Since node() matches text, processing instructions, root, and comments, as well as elements, all XML nodes are copied.
A more explicit version of the identity transform is:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="@*|*|processing-instruction()|comment()">
<xsl:copy>
<xsl:apply-templates select="*|@*|text()|processing-instruction()|comment()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
This version is equivalent to the first, but explicitly enumerates the types of XML nodes that it will copy. Both versions copy data that is unnecessary for most XML usage (e.g., comments).
XSLT 3.0
XSLT 3.0 specifies an on-no-match attribute of the xsl:mode instruction that allows the identity transform to be declared rather than implemented as an explicit template rule. Specifically:
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:mode on-no-match="shallow-copy" />
</xsl:stylesheet>
is essentially equivalent to the earlier template rules. See the XSLT 3.0 standard's description of shallow-copy for details.
Finally, note that markup details, such as the use of CDATA sections or the order of attributes, are not necessarily preserved in the output, since this information is not part of the XPath data model. To show CDATA markup in the output, the XSLT stylesheet that contains the identity transform template (not the identity transform template itself) should make use of the xsl:output attribute called cdata-section-elements.
cdata-section-elements specifies a list of the names of elements whose text node children should be output using CDATA sections.
For example:
<xsl:output method="xml" encoding="utf-8" cdata-section-elements="element-name-1 element-name-2"/>
Using XQuery
XQuery can define recursive functions. The following example XQuery function copies the input directly to the output without modification.
declare function local:copy($element as element()) {
element {node-name($element)}
{$element/@*,
for $child in $element/node()
return if ($child instance of element())
then local:copy($child)
else $child
}
};
The same result can also be achieved using a typeswitch-style transform.
xquery version "1.0";
(: copy the input to the output without modification :)
declare function local:copy($input as item()*) as item()* {
for $node in $input
return
typeswitch($node)
case document-node()
return
document {
local:copy($node/node())
}
case element()
return
element {name($node)} {
(: output each attribute in this element :)
for $att in $node/@*
return
attribute {name($att)} {$att}
,
(: output all the sub-elements of this element recursively :)
for $child in $node
return local:copy($child/node())
}
(: otherwise pass it through. Used for text(), comments, and PIs :)
default return $node
};
The typeswitch transform is sometimes preferable since it can easily be modified by simply adding a case statement for any element that needs special processing.
Non-recursive transforms
Two simple and illustrative "copy all" transforms.
Using XSLT
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<xsl:copy-of select="."/>
</xsl:template>
</xsl:stylesheet>
Using XProc
<p:pipeline name="pipeline" xmlns:p="http://www.w3.org/ns/xproc">
<p:identity/>
</p:pipeline>
One important note about the XProc identity step is that it can take either a single document (as in this example) or a sequence of documents as input.
More complex examples
Generally the identity transform is used as a base on which one can make local modifications.
Remove named element transform
Using XSLT
The identity transformation can be modified to copy everything from an input tree to an output tree except a given node. For example, the following will copy everything from the input to the output except the social security number:
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<!-- remove all social security numbers -->
<xsl:template match="PersonSSNID"/>
Using XQuery
declare function local:copy-filter-elements($element as element(),
$element-name as xs:string*) as element() {
element {node-name($element) }
{ $element/@*,
for $child in $element/node()[not(name(.)=$element-name)]
return if ($child instance of element())
then local:copy-filter-elements($child,$element-name)
else $child
}
};
To call this one would add:
$filtered-output := local:copy-filter-elements($input, 'PersonSSNID')
Using XProc
<p:pipeline name="pipeline" xmlns:p="http://www.w3.org/ns/xproc">
<p:identity/>
<p:delete match="PersonSSNID"/>
</p:pipeline>
See also
Data mapping
XML pipeline
Further reading
XSLT Cookbook, O'Reilly Media, Inc., December 1, 2002, by Sal Mangano,
Priscilla Walmsley, XQuery, O'Reilly Media, Inc., Chapter 8 Functions – Recursive Functions – page 109
References
Computer programming
Transforms | Identity transform | [
"Mathematics",
"Technology",
"Engineering"
] | 1,808 | [
"Functions and mappings",
"Computer programming",
"Mathematical objects",
"Software engineering",
"Mathematical relations",
"Transforms",
"Computers"
] |
4,081,192 | https://en.wikipedia.org/wiki/Sustainable%20habitat | A Sustainable habitat is an ecosystem that produces food and shelter for people and other organisms, without resource depletion and in such a way that no external waste is produced. Thus the habitat can continue into the future tie without external infusions of resources. Such a sustainable habitat may evolve naturally or be produced under the influence of man. A sustainable habitat that is created and designed by human intelligence will mimic nature, if it is to be successful. Everything within it is connected to a complex array of organisms, physical resources, and functions. Organisms from many different biomes can be brought together to fulfill various ecological niches.
Definition
A sustainable habitat achieves a balance between the economic and social development of human habitats and the protection of the environment, shelter, basic services, social infrastructure, and transportation.
A sustainable habitat requires that one species' waste becomes the energy or food source for another species. It also involves preserving the ecological balance, taking a symbiotic perspective on urban development when developing urban extensions of existing towns.
The term often refers to sustainable human habitats, which typically involves some form of green building or environmental planning.
History
In creating sustainable habitats, environmental scientists, designers, engineers and architects must not consider any elements as a waste product to be disposed of somewhere off-site, but as a nutrient stream for another process to feed on. Researching ways to interconnect waste streams to production creates a more sustainable society by minimizing pollution.
Sustainability of marine ecosystems is a concern. Intensive fishing has decreased top trophic levels and affected the ecological dynamics and resilience of fisheries by reducing the numbers and lengths of food webs. Historically intense commercial and rising recreational fishing pressures have resulted in "unsustainable rates of exploitation for 70% of the snapper-grouper complex, which consists of over 50 species, mainly of groupers and snappers" in Florida and the Florida Keys. The systematic and widespread conversion of estuarine habitats to agricultural, industrial, and urban uses reflects a historical tendency to value land only for the products it yields, following the simple but flawed logic that unused land provides no products and is therefore useless land.
The ecosystem services approach fills gaps in a sustainability analysis by requiring an accounting of the linkages between ecosystem goods and services, ecosystem processes, and human wellbeing.
The World Commission on Environment and Development states that "sustaining oceans are marked by a fundamental unity." Interconnected cycles of energy, climate, marine living resources, and human activities move through coastal waters, regional seas, and the closed oceans. Global pressures on the ocean include rising levels of greenhouse gas emissions, which impact species and food webs throughout ocean ecosystems, deoxygenation, overfishing, and run-off pollution from land and coastal sources.
Transformation to a thriving ocean system requires changes in governance across sectors and scales. "The end result would be a form of polycentric governance that can manage shared resources and ocean space." A polycentric governance goal from The World Commission on Environment and Development is "to support multiple governing bodies by establishing a shared vision and creating principled guiding frameworks and processes to facilitate coherent systems-oriented regulation."
Types of sustainable habitats
Coral reefs
A coral reef is an underwater ecosystem characterized by reef-building corals. Coral reefs serve as a habitat for a diverse range of fish and invertebrates, while also providing economic resources to fishing communities.
The coral reefs' foundation is made up of stony corals with calcareous skeletons that protect shores from storm surges. They also help produce sand for recreational beaches and aquariums.
Coral reefs are a largely self-sustaining ecosystem and up to 90% of the corals' nutrients may come from their symbiotic relationships. The coral polyps and microscopic algae zooxanthellae in coral reefs have a symbiotic relationship wherein the algae provide nourishment to the coral polyps from within their tissue.
Parks
A park is a protected area of wildlife. It is a natural sustainable habitat. Parks promote a culture of wellness that engages members of their surrounding communities and promotes healthy and active lifestyles. People who volunteer at parks may support these sustainable habitats and help to maintain them.
Parks may serve as recreational areas for communities, encouraging people to spend time in nature. Urban parks are in urban areas, creating a natural space that benefits those living in cities.
Plants and animals may flourish in parks, where they are able to have a sustainable habitat away from the interference of humans. This is especially true of national parks, where land is set aside and preserved. These habitats are sustainable in nature.
Cities
A sustainable city is a city that is designed and built in an ecologically friendly way. Sustainable cities may also be known as eco-cities or green cities. These cities are constructed with guidelines about spatial planning and operational rules pertaining to urbanism in mind. Spatial planning takes into account ecological, social, cultural, and economic issues and policies. This leads to the creation of mindfully built cities that are aware and conscious of their impact on the environment.
Sustainable cities in earthquake-prone areas are built with input from civil engineers, architects, and urban planners who collaborate on safe architecture that can withstand disasters. This reduces waste and ensures that buildings will last for many years to come. In areas that are protected because of nature and cultural heritage, this heritage may be reflected in the choice of construction materials and the design of the buildings. This helps to preserve culture. Additionally, construction materials and building orientation may be chosen with the intent to mitigate the effects of climate change. Cities may also be planned to include green spaces and trees that reduce heat stress.
Creating sustainable habitats
Net-Zero Energy Buildings (NZEB)
These buildings are designed to use as little energy as possible. When they incorporate renewable energy sources, they can produce the amount of energy they need to function. In some cases they produce more energy than they need, and this surplus can be harnessed.
Energy positive buildings
Currently, "buildings account for almost 40 percent of global carbon emissions." Energy-positive buildings produce more energy than they demand, a goal for many countries focused on reducing total carbon emissions. Hydro and the Zero Emission Resource Organisation (ZERO) have created energy-positive buildings in Norway. Their approach accounts for embodied energy, meaning the total energy used at every step of collecting materials and constructing the building. For example, timber or wood takes less energy to collect, cut, and construct into something than concrete, whereas recycled material contains the lowest embodied energy. These buildings have been engineered to self-ventilate, maximize daylight, and more. This is one approach to building sustainable habitats.
Sustainable building materials
Concrete
Sustainable building materials can change the way we move forward as a society. A very common building material is concrete. However, it is not a sustainable building material because it can crack and degrade over time. An alternative to conventional concrete is bacterial concrete (self-healing concrete), a substance that mixes bacteria such as Bacillus pseudofirmus or Bacillus cohnii into the concrete. This mixture can be a sustainable alternative because it is self-healing. Since concrete can crack from weathering, shifting plates, and temperature changes, it is important to use a material that will last a long time and will not need repeated repairs. Bacterial concrete improves strength, reduces water absorption, and more. Depending on the bacteria used, there are different effects on the overall durability of the concrete. For example, where chloride exposure is a concern, Sporosarcina pasteurii can be added to increase resistance to the chloride ions that can penetrate the concrete. As another example, Bacillus sphaericus has been used to reduce water absorption. The different types of bacteria can assist the sustainability and longevity of the overall structure. Adding bacteria can make the concrete 2.3 to 3.9 times more expensive than normal concrete.
Wood
Wood can be a great resource for building structures because of the longevity of the material. However, since wood is a natural resource, specific protocols need to be followed in its use if a building is to be sustainable. Wood is the most commonly used building material in the United States. Wood has a low carbon impact and a low embodied energy, the amount of energy required to harvest the material and construct the building.
The process of environmental planning
Environmental planning covers many concerns, including building structures, efficiency, and usability. Many factors come into play when planning something that is sustainable and environmentally friendly while still reflecting local culture and improving society. One reason environmental planning is so important is tourism: when people visit a new place they spend money, and this money flows into the economy of the host town.
List of steps for planning
Create a planning team
Make a vision for future
Figure out community wants and needs for the environment
Find solutions
Create a plan
Proceed with plan
Evaluate steps and fix any issues.
This list provides a baseline for monitoring. This is important for sustainable habitats because it is a framework to ensure that the environment will not be negatively impacted by the human construction of parks, houses, community buildings, and more.
Sustainable transportation
Transportation is an important way that an economy can help society succeed. Transportation produces 23% of the world's carbon emissions and accounts for 64% of the world's oil use, a huge share of natural resources. There are solutions that can be implemented to create a sustainable habitat for the communities and economies of the world. An example of sustainable public transportation is Jakarta, Indonesia, which has won the Sustainable Transport Award. One way the city won this award and implemented sustainability is by connecting local buses, vehicles, and microbuses within its urban regions. Jakarta has created a bus rapid transit (BRT) system with lanes reserved for public transportation. This has decreased traffic overall because more people use the BRT system instead of driving. The BRT system can also take people farther than an individual car typically does, which has lowered carbon emissions and oil consumption.
Green energy
Green energy is an alternative to using fossil fuels. Some examples are solar energy, wind energy, and nuclear energy. These alternatives use natural energy sources instead of fossil fuels to provide green electricity. The use of green energy can boost an economy; in India, for example, it could create a green energy market worth 80 billion by 2030. India has created 59 solar parks across the country, and one of the largest is a solar-wind hybrid park with a capacity of 30 GW. These parks have changed the way the economy works by reducing the money spent on fossil fuels. India has also implemented a self-cleaning tool for the solar panels in its solar parks: since panels can get dirty from weathering, the tool cleans the top of each panel so that the maximum amount of energy is produced.
Remedial efforts
Restoration and protection of parks
The restoration and protection of parks begins with the acknowledgement of the need for actions. After a government or state is aware of the need for restoration, protection, and the creation of these sustainable habitats, action takes place.
The need for funding is the foundational roadblock in protecting and restoring parks. Funding can come from state legislation and from fundraising projects hosted by supporting organizations. This funding can then be systematically distributed to movements that make significant strides towards protecting and restoring parks. These movements include, but are not limited to, setting up fences around parks, establishing park security, and supplying and resupplying proper nutritional elements to the parks to sustain and promote the growth of habitats.
Ocean Governance
Ocean Governance is defined as the “integrated conduct of the policy, actions, and affairs regarding the world’s oceans to protect ocean environment, sustainable use of coastal and marine resources as well as to conserve its biodiversity.”
Ocean governance as a process is recommended to be integrated horizontally and vertically. Integrating a process horizontally entails requiring the participation of “governmental institutions, the private sector, NGOs, academics, [and] scientists”, while integrating a process vertically entails essential communication, collaboration, and coordination between the chosen governmental institutions and other participatory agencies.
Partnership is an essential aspect of ocean governance as it covers all bases of collective remedial efforts. Essentially, it connects local and state governments who both want to induce the remedial efforts. Communication between inter-governmental agencies and regional institutions aids in strengthening collective efforts that are set into motion.
Coastal national parks and oceans are facing many threatening changes to their equilibrium. These include but are not limited to rising sea levels, damaged coral reefs, storm activity, and erosion. At the Timucuan Ecological & Historic Preserve and the Cumberland Island National Seashore, teams such as the National Park Foundation (NPF), the National Park Service (NPS), and the Green Team Youth Corps at Groundwork Jacksonville are all making strides to prevent erosion and stabilize eroding shorelines, regrow native marsh grasses, and restore the once-stable habitat that was home to a plethora of marine species.
Green building
Green building is a foundationally different mode of building and operating a series of buildings that contrasts with those built in the past in their aspects of sustainability. The buildings funded by the Green Building Initiative and the United States Green Building Council enable access to an “environmentally and socially responsible, healthy, and prosperous environment[s] that improve[s] the quality of life.”
A system by the name of LEED is “the world’s most widely used green building system with more than 100,000 buildings participating” to date.
Buildings funded by the Green Building Initiative and LEED have been shown to be financially and environmentally healthier and more efficient for individuals. Lower carbon emissions, healthier living spaces, and improved efficiency are all fruits of the USGBC’s remedial efforts, “constructed and operated through LEED.”
See also
Alternative natural materials
Autonomous building
Ecovillage
Integrated Pest Management
Permaculture
Principles of Intelligent Urbanism
References
External links
Creating sustainable communities in harmony with nature. Urban Permaculture.
Path to Freedom - Urban Agriculture & Sustainability
Helping create sustainable habitats around the world-the SHIRE
Habitats
Sustainable design
Habitat
Human habitats
Sustainable agriculture
Sustainable architecture
Sustainable gardening
Sustainable urban planning
Habitat | Sustainable habitat | [
"Engineering",
"Environmental_science"
] | 3,017 | [
"Sustainable architecture",
"Environmental social science",
"Architecture"
] |
4,081,215 | https://en.wikipedia.org/wiki/C3H6 | {{DISPLAYTITLE:C3H6}}
The molecular formula C3H6 (molar mass: 42.08 g/mol, exact mass: 42.0470 u) may refer to:
Cyclopropane
Propylene, also known as propene | C3H6 | [
"Chemistry"
] | 61 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
4,081,361 | https://en.wikipedia.org/wiki/Nachbin%27s%20theorem | In mathematics, in the area of complex analysis, Nachbin's theorem (named after Leopoldo Nachbin) is a result used to establish bounds on the growth rates for analytic functions. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, also called Nachbin summation.
This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type helps provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated.
Exponential type
A function F(z) defined on the complex plane is said to be of exponential type if there exist constants M and \tau such that
|F(re^{i\theta})| \le M e^{\tau r}
in the limit r \to \infty. Here, the complex variable z was written as z = re^{i\theta} to emphasize that the limit must hold in all directions \theta. Letting \tau stand for the infimum of all such \tau, one then says that the function F is of exponential type \tau.
For example, let F(z) = \sin(\pi z). Then one says that \sin(\pi z) is of exponential type \pi, since \pi is the smallest number that bounds the growth of \sin(\pi z) along the imaginary axis. So, for this example, Carlson's theorem cannot apply, as it requires functions of exponential type less than \pi.
Ψ type
Additional function types may be defined for other bounding functions besides the exponential function. In general, a function \Psi(t) is a comparison function if it has a series
\Psi(t) = \sum_{n=0}^\infty \Psi_n t^n
with \Psi_n > 0 for all n, and
\lim_{n\to\infty} \frac{\Psi_{n+1}}{\Psi_n} = 0.
Comparison functions are necessarily entire, which follows from the ratio test. If \Psi(t) is such a comparison function, one then says that F is of \Psi-type if there exist constants M and \tau such that
|F(re^{i\theta})| \le M \Psi(\tau r)
as r \to \infty. If \tau is the infimum of all such \tau, one says that F is of \Psi-type \tau.
Nachbin's theorem states that a function f(z) with the series
f(z) = \sum_{n=0}^\infty f_n z^n
is of \Psi-type \tau if and only if
\limsup_{n\to\infty} \left| \frac{f_n}{\Psi_n} \right|^{1/n} = \tau.
This is naturally connected to the root test and can be considered a relative of the Cauchy–Hadamard theorem.
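As a quick illustration (a standard special case, added here for orientation rather than taken from the sources cited by this article), choosing the exponential function itself as the comparison function recovers the classical formula for the type of an entire function of order 1:
\Psi(t) = e^{t} \quad\Longrightarrow\quad \Psi_n = \frac{1}{n!},
\qquad
\tau = \limsup_{n\to\infty} \left| n!\, f_n \right|^{1/n},
which is consistent with the exponential-type definition above, since Stirling's approximation gives (n!)^{1/n} \sim n/e.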
Generalized Borel transform
Nachbin's theorem has immediate applications in Cauchy theorem-like situations, and for integral transforms. For example, the generalized Borel transform is given by
F(w) = \sum_{n=0}^\infty \frac{f_n}{\Psi_n w^{n+1}}.
If f is of \Psi-type \tau, then the exterior of the domain of convergence of F(w), and all of its singular points, are contained within the disk
|w| \le \tau.
Furthermore, one has
f(z) = \frac{1}{2\pi i} \oint_\gamma \Psi(zw)\, F(w)\, dw
where the contour of integration \gamma encircles the disk |w| \le \tau. This generalizes the usual Borel transform for functions of exponential type, where \Psi(t) = e^t. The integral form for the generalized Borel transform follows as well. Let \alpha(t) be a function whose first derivative is bounded on the interval [0, \infty) and that satisfies the defining equation
\frac{1}{\Psi_n} = \int_0^\infty t^n \, d\alpha(t),
where d\alpha(t) = \alpha'(t)\, dt. Then the integral form of the generalized Borel transform is
F(w) = \frac{1}{w} \int_0^\infty f\!\left(\frac{t}{w}\right) d\alpha(t).
The ordinary Borel transform is regained by setting \alpha(t) = e^{-t}. Note that the integral form of the Borel transform is the Laplace transform.
Nachbin summation
Nachbin summation can be used to sum divergent series that Borel summation does not, for instance to asymptotically solve integral equations of the form:
g(s) = s \int_0^\infty K(st)\, f(t)\, dt,
where f(t) may or may not be of exponential type, and the kernel K(u) has a Mellin transform. The solution can be obtained using Nachbin summation as
f(x) = \sum_{n=0}^\infty \frac{a_n}{M(n+1)} x^n
with
g(s) = \sum_{n=0}^\infty \frac{a_n}{s^{n+1}},
and with M(n) the Mellin transform of K(u). An example of this is the Gram series
\pi(x) \approx 1 + \sum_{n=1}^\infty \frac{(\log x)^n}{n\, n!\, \zeta(n+1)}.
In some cases, as an extra condition, we require \int_0^\infty K(t)\, t^n\, dt to be finite and nonzero for n = 0, 1, 2, 3, \ldots
Fréchet space
Collections of functions of exponential type \tau can form a complete uniform space, namely a Fréchet space, by the topology induced by the countable family of norms
\| f \|_n = \sup_{z \in \mathbb{C}} \exp\left[ -\left( \tau + \frac{1}{n} \right) |z| \right] |f(z)|.
See also
Divergent series
Borel summation
Euler summation
Cesàro summation
Lambert summation
Mittag-Leffler summation
Phragmén–Lindelöf principle
Abelian and tauberian theorems
Van Wijngaarden transformation
References
L. Nachbin, "An extension of the notion of integral functions of the finite exponential type", Anais Acad. Brasil. Ciencias. 16 (1944) 143–147.
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263. (Provides a statement and proof of Nachbin's theorem, as well as a general review of this topic.)
Integral transforms
Theorems in complex analysis
Summability methods | Nachbin's theorem | [
"Mathematics"
] | 897 | [
"Sequences and series",
"Theorems in mathematical analysis",
"Mathematical structures",
"Summability methods",
"Theorems in complex analysis"
] |
4,081,616 | https://en.wikipedia.org/wiki/Relaxation%20labelling | Relaxation labelling is an image treatment methodology. Its goal is to associate a label to the pixels of a given image or nodes of a given graph.
See also
Digital image processing
References
Further reading
Computer vision | Relaxation labelling | [
"Engineering"
] | 52 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
4,081,806 | https://en.wikipedia.org/wiki/Graffiti%20%28program%29 | Graffiti is a computer program which makes conjectures in various subfields of mathematics (particularly graph theory) and chemistry, but can be adapted to other fields. It was written by Siemion Fajtlowicz and Ermelinda DeLaViña at the University of Houston. Research on conjectures produced by Graffiti has led to over 60 publications by other mathematicians.
References
External links
Graffiti & Automated Conjecture-Making
Siemion Fajtlowicz
Chemistry software
Mathematical software | Graffiti (program) | [
"Chemistry",
"Mathematics"
] | 96 | [
"Chemistry software",
"Theoretical chemistry stubs",
"Computational chemistry stubs",
"Computational chemistry",
"nan",
"Physical chemistry stubs",
"Mathematical software"
] |
4,082,104 | https://en.wikipedia.org/wiki/A%20capriccio | A capriccio (Italian: "following one's fancy") is a tempo marking indicating a free and capricious approach to the tempo (and possibly the style) of the piece. This marking will usually modify another, such as lento a capriccio, often used in the Hungarian Rhapsodies of Franz Liszt. Perhaps the most famous piece to use the term is Ludwig van Beethoven's Rondò a capriccio (Op. 129), better known as Rage Over a Lost Penny.
See also
Capriccio (music)
External links
References
"Capriccio, a", Grove Music Online ed. L. Macy (Accessed 28 April 2006)
Music performance
Musical notation
Rhythm and meter | A capriccio | [
"Physics"
] | 147 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
4,082,119 | https://en.wikipedia.org/wiki/D3O | D3O is the namesake ingredient brand of British company D3O Lab, specializing in rate-sensitive impact protection technologies.
The brand comprises more than 30 technologies and materials, including set foams, formable foams, set elastomers, and formable elastomers.
D3O is sold in more than 50 countries. It is used in sports and motorcycle gear, protective cases for consumer electronics, including phones, industrial workwear, and military protection, including helmet pads and limb protectors.
History
In 1999, the materials scientists Richard Palmer and Philip Green experimented with a dilatant fluid with non-Newtonian properties. Unlike water, this fluid was free-flowing at rest but became instantly hard upon impact.
As snowboarders, Palmer and Green drew inspiration from snow and decided to replicate its matrix-like quality to develop a flexible material that incorporated the dilatant fluid. After experimenting with numerous materials and formulas, they invented a flexible, pliable material that locked together and solidified in the event of a collision.
When incorporated into clothing, the material moves with the wearer while providing comprehensive protection.
Palmer and Green filed a patent application, which they used as the foundation for commercializing their invention and setting up a business in 1999.
D3O was used commercially for the first time by the United States Ski Team and the Canadian ski team at the 2006 Olympic Winter Games. D3O first entered the motorcycle market in 2009 when the ingredient was incorporated into CE-certified armour for the apparel brand Firstgear.
Philip Green left D3O in 2006, and in 2009 founder Richard Palmer brought in Stuart Sawyer as interim CEO. Palmer took a sabbatical in 2010 and left the business in 2011, at which point executive leadership was officially handed over to Sawyer, who has remained in the position since.
In 2014, D3O received one of the Queen’s Awards for Enterprise and was awarded £237,000 by the Technology Strategy Board—now known as Innovate UK—to develop a shock absorption helmet system prototype for the defence market to reduce the risk of traumatic brain injury.
The following year, Sawyer secured £13 million in private equity funding from venture capital investor Beringea, allowing D3O to place more emphasis on product development and international marketing. D3O opened headquarters in London, which include test laboratories and house its global business functions.
With exports to North America making up an increasing part of its business, the company set up a new operating base located within the Virginia Tech Corporate Research Center (VTCRC), a research park for high-technology companies located in Blacksburg, Virginia. The same year, D3O consumer electronics brand partner Gear4 became the UK’s number 1 phone case brand in volume and value. Gear 4 has since become present in consumer electronics retail stores worldwide including Verizon, AT&T and T-Mobile.
In 2017, D3O became part of the American National Standards Institute (ANSI)/International Safety Equipment Association (ISEA) committee which developed the first standard in North America to address the risk to hands from impact injuries: ANSI/ISEA 138-2019, American National Standard for Performance and Classification for Impact Resistant Hand Protection.
D3O was acquired in September 2021 by independent private equity fund Elysian Capital III LP. The acquisition saw previous owners Beringea US & UK and Entrepreneurs Fund exit the business after six years of year-on-year growth.
D3O applications
D3O has various applications, such as in electronics (low-profile impact protection for phones, laptops, and other electronic devices), sports (protective equipment), motorcycle riding gear, defence (helmet liners and body protection; footwear) and industrial workwear (personal protective equipment such as gloves, knee pads and metatarsal guards for boots).
In 2020, D3O became the specified helmet suspension pad supplier for the US Armed Forces' Integrated Helmet Protection System (IHPS) Suspension System.
Product development
D3O uses patented and proprietary technologies to create both standard and custom products.
In-house rapid prototyping and testing laboratories ensure each D3O development is tested to CE standards for sports and motorcycle applications, ISEA 138 for industrial applications, and criteria set by government agencies for defense applications.
Sponsorship
D3O sponsors athletes including:
Downhill mountain bike rider Tahnée Seagrave
Seth Jones, ice hockey defenseman and alternate captain for the Columbus Blue Jackets in the NHL
Motorcycle racer Michael Dunlop, 25-times winner of the Isle of Man TT
The Troy Lee Designs team of athletes including three-times Red Bull Rampage winner Brandon Semenuk
Enduro rider Rémy Absalon, 12-times Megavalanche winner.
Awards and recognition
D3O has received the following awards and recognition:
2014: Queen’s Award for Enterprise
2016: Inclusion in the Sunday Times Tech Track 100 ‘Ones to Watch’ list
2017: T3 Awards together with Three: Best Mobile Accessory
2018: British Yachting Awards – clothing innovation
2019: ISPO Award – LP2 Pro
2020: Red Dot - Snickers Ergo Craftsmen Kneepads
2022/2023: ISPO Textrends Award - Accessories & Trim
2023: IF Design Award - D3O Ghost Reactiv Body Protection
2023: ISPO Award – D3O Ghost back protector
References
Materials
Non-Newtonian fluids
Motorcycle apparel | D3O | [
"Physics"
] | 1,095 | [
"Materials",
"Matter"
] |
4,082,177 | https://en.wikipedia.org/wiki/PAH%20world%20hypothesis | The PAH world hypothesis is a speculative hypothesis that proposes that polycyclic aromatic hydrocarbons (PAHs), known to be abundant in the universe, including in comets, and assumed to be abundant in the primordial soup of the early Earth, played a major role in the origin of life by mediating the synthesis of RNA molecules, leading into the RNA world. However, as yet, the hypothesis is untested.
Background
The 1952 Miller–Urey experiment demonstrated the synthesis of organic compounds, such as amino acids, formaldehyde and sugars, from the inorganic precursors that the researchers presumed to have been present in the primordial soup (a presumption that is no longer considered likely). This experiment inspired many others. In 1961, Joan Oró found that the nucleotide base adenine could be made from hydrogen cyanide (HCN) and ammonia in a water solution. Experiments conducted later showed that the other RNA and DNA nucleobases could be obtained through simulated prebiotic chemistry with a reducing atmosphere.
The RNA world hypothesis shows how RNA can become its own catalyst (a ribozyme). In between there are some missing steps such as how the first RNA molecules could be formed. The PAH world hypothesis was proposed by Simon Nicholas Platts in May 2004 to try to fill in this missing step. A more thoroughly elaborated idea has been published by Ehrenfreund et al.
Polycyclic aromatic hydrocarbons
Polycyclic aromatic hydrocarbons are the most common and abundant of the known polyatomic molecules in the visible universe, and are considered a likely constituent of the primordial sea. PAHs, along with fullerenes (or "buckyballs"), have recently been detected in nebulae. Buckminsterfullerene (C60) has been identified in the interstellar medium. (Fullerenes are also implicated in the origin of life; according to astronomer Letizia Stanghellini, "It's possible that buckyballs from outer space provided seeds for life on Earth.") PAHs, subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics, "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature, which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."
In 2013, polycyclic aromatic hydrocarbons were detected in the upper atmosphere of Titan, the largest moon of the planet Saturn.
Low-temperature chemical pathways from simple organic compounds to complex PAHs have been demonstrated. Such chemical pathways may help explain the presence of PAHs in the low-temperature atmosphere of Saturn moon Titan, and may be significant pathways, in terms of the PAH world hypothesis, in producing precursors to biochemicals related to life as we know it.
PAHs are not normally very soluble in sea water, but when subject to ionizing radiation such as solar UV light, the outer hydrogen atoms can be stripped off and replaced with a hydroxyl group, rendering the PAHs far more soluble.
These modified PAHs are amphiphilic, which means that they have parts that are both hydrophilic and hydrophobic. When in solution, they assemble in discotic mesogenic (liquid crystal) stacks which, like lipids, tend to organize with their hydrophobic parts protected.
In 2014, NASA announced a database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed as early as a couple of billion years after the Big Bang, are abundant in the universe, and are associated with new stars and exoplanets.
Reactions
Attachment of nucleobases to PAH scaffolding
In the self-ordering PAH stack, the separation between adjacent rings is 0.34 nm. This is the same separation found between adjacent nucleotides of RNA and DNA. Smaller molecules will naturally attach themselves to the PAH rings. However, PAH rings in a stack tend to swivel around on one another, which tends to dislodge attached compounds that would collide with those attached to the rings above and below. This therefore encourages preferential attachment of flat molecules such as pyrimidine and purine nucleobases, the key constituents (and information carriers) of RNA and DNA. These bases are similarly amphiphilic and so also tend to line up in similar stacks.
Attachment of oligomeric backbone
According to the hypothesis, once the nucleobases are attached (via hydrogen bonds) to the PAH scaffolding, the inter-base distance would select for "linker" molecules of a specific size, such as small formaldehyde (methanal) oligomers, also taken from the prebiotic "soup", which will bind (via covalent bonds) to the nucleobases as well as each other to add a flexible structural backbone.
Detachment of the RNA-like strands
A subsequent transient drop in the ambient pH (increase in acidity), for example as a result of a volcanic discharge of acidic gases such as sulfur dioxide or carbon dioxide, would allow the bases to break off from their PAH scaffolding, forming RNA-like molecules (with the formaldehyde backbone instead of the ribose-phosphate backbone used by "modern" RNA, but the same 0.34 nm pitch).
Formation of ribozyme-like structures
The hypothesis further speculates that once long RNA-like single strands are detached from the PAH stacks, and after ambient pH levels became less acidic, they would tend to fold back on themselves, with complementary sequences of nucleobases preferentially seeking out each other and forming hydrogen bonds, creating stable, at least partially double-stranded RNA-like structures, similar to ribozymes. The formaldehyde oligomers would eventually be replaced with more stable ribose-phosphate molecules for the backbone material, resulting in a starting milestone for the RNA world hypothesis, which speculates about further evolutionary developments from that point.
See also
Astrochemistry
Atomic and molecular astrophysics
Cosmochemistry
Extragalactic astronomy
Extraterrestrial materials
History of the Earth
Iron-sulfur world theory
List of interstellar and circumstellar molecules
Thermosynthesis
Tholin
Other possible RNA precursors:
Glycol nucleic acid (GNA)
Peptide nucleic acid (PNA)
Threose nucleic acid (TNA)
References
External links
Life's ingredients found in early universe New Scientist Magazine 14:49 July 29, 2005
RNA-directed amino acid homochirality
Origin of life
Biological hypotheses
Polycyclic aromatic hydrocarbons | PAH world hypothesis | [
"Biology"
] | 1,463 | [
"Biological hypotheses",
"Origin of life"
] |
4,082,546 | https://en.wikipedia.org/wiki/12AT7 | 12AT7 (also known in Europe by the Mullard–Philips tube designation of ECC81) is a miniature nine-pin medium-gain (60) dual-triode vacuum tube popular in guitar amplifiers. It belongs to a large family of dual triode vacuum tubes which share the same pinout (EIA 9A), including in particular the very commonly used low-mu 12AU7 and high-mu 12AX7.
The 12AT7 has somewhat lower voltage gain than the 12AX7, but higher transconductance and plate current, which makes it suitable for high-frequency applications.
Originally the tube was intended for operation in VHF circuits, such as TV sets and FM tuners, as an oscillator/frequency converter, but it also found wide use in audio as a driver and phase-inverter in vacuum tube push–pull amplifier circuits.
This tube is essentially two 6AB4/EC92s in a single envelope. Unlike the situation with the 6C4 and 12AU7, both the 6AB4 and the 12AT7 are described by manufacturer's data sheets as R.F. (Radio Frequency) devices operating up to VHF frequencies.
The tube has a center-tapped filament so it can be used in either 6.3V 300mA or 12.6V 150mA heater circuits.
The 12AT7 has continued to be manufactured in Russia (Electro-Harmonix brand), Slovakia (JJ Electronic), and China.
See also
12AU7
12AX7 - includes a comparison of similar twin-triode designs
List of vacuum tubes
References
External links
12AT7 twin triode data sheet from General Electric
Reviews of 12at7 tubes.
Vacuum tubes
Guitar amplification tubes | 12AT7 | [
"Physics"
] | 362 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
4,082,574 | https://en.wikipedia.org/wiki/12AU7 | The 12AU7 and its variants are miniature nine-pin (B9A base) medium-gain dual-triode vacuum tubes. It belongs to a large family of dual-triode vacuum tubes which share the same pinout (RETMA 9A). 12AU7 is also known in Europe under its Mullard–Philips tube designation ECC82. There are many equivalent tubes with different names, some identical, some designed for ruggedness, long life, or other characteristics; examples are the US military 5814A and the European special-quality ECC82 and E182CC.
The tube is popular in hi-fi vacuum tube audio as a low-noise line amplifier, driver (especially for tone stacks), and phase-inverter in vacuum tube push–pull amplifier circuits. It was widely used, in special-quality versions such as ECC82 and 5814A, in pre-semiconductor digital computer circuitry. Use of special-quality versions outside of the purpose they were designed for may not be optimal; for example, a version for digital computers may be designed for long life without cathode poisoning when mostly switched to low-current mode in switching applications, but with little attention to parameters of interest only for linear applications such as linearity of transfer characteristic, matching between the two sections, microphony, etc.
This tube is essentially two 6C4/EC90s in the same envelope. However, this latter type is officially described in manufacturer's data as "a special quality R.F. power amplifier or oscillator for frequencies up to 150 MHz". The 12AU7, on the other hand, is described as an "A.F. double triode". Data sheets suggest an upper frequency limit of 30 kHz for the 12AU7/ECC82 and it is not described as a "special quality" device. This contrasts with the 6AB4/EC92 and 12AT7/ECC81 which are both R.F. devices operating up to VHF.
Double triodes of the 12AU7 family have a center-tapped filament for use in either 6.3V 300mA or 12.6V 150mA heater circuits.
The 12AU7 has continued to be manufactured in Russia, Slovakia (JJ Electronic), and China.
See also
12AX7 - includes a comparison of similar twin-triode designs
12AT7
References
External links
12AU7 datasheet from the RCA RC-29 Receiving Tubes Manual (NJ7P Tube Database)
Several tube datasheets
Reviews of 12au7 tubes.
(JJ Electronic ECC82 Datasheet)
Vacuum tubes
Guitar amplification tubes | 12AU7 | [
"Physics"
] | 558 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
4,082,701 | https://en.wikipedia.org/wiki/Shooting%20reconstruction | Shooting incident reconstruction is the examination of the physical evidence recovered or documented at the scene of a shooting. Shooting reconstruction may also include the laboratory analysis of the evidence recovered at the scene. The goal is an attempt to gain an understanding of what may or may not have happened during the incident. Once all reasonable explanations have been considered, one can evaluate the significance of witness or suspect accounts of the incident.
In many cases valuable evidence necessary for reconstruction analysis exists at the crime scene. Should this evidence go undocumented or unrecovered during the initial processing of the shooting scene, the information it can give investigators may be lost forever. Poor shooting incident processing cannot be compensated for by excellent laboratory work.
There are many questions that can be answered by the proper reconstruction of a shooting incident. The questions typically answered by a shooting reconstruction investigation include (but are not limited to) the distance of the shooter from the target, the path of the bullet(s), the number of shots fired, and possibly the sequence of multiple discharges at a shooting incident.
The Association of Firearm and Tool Mark Examiners is an international non-profit organization dedicated to the advancement of firearm and tool mark identification, including shooting reconstruction.
References
External links
Association for Crime Scene Reconstruction
Association of Firearm and Toolmark Examiners
Shooting Reconstruction vs Shooting Reenactment
Further reading
Ballistics
Forensic techniques
Gun violence | Shooting reconstruction | [
"Physics"
] | 276 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
4,082,874 | https://en.wikipedia.org/wiki/Workplace%20bullying | Workplace bullying is a persistent pattern of mistreatment from others in the workplace that causes either physical or emotional harm. It includes verbal, nonverbal, psychological, and physical abuse, as well as humiliation. This type of workplace aggression is particularly difficult because, unlike typical school bullies, workplace bullies often operate within the established rules and policies of both their organization and society. In most cases, workplace bullying is reported as being carried out by someone who is in a position of authority over the victim. However, bullies can also be peers or subordinates. When subordinates participate in bullying, this is referred to as ‘upwards bullying.’ The least visible form of workplace bullying involves upwards bullying where bullying tactics are manipulated and applied against a superior, often for strategically motivated outcomes.
Research has also investigated the impact of the larger organizational context of bullying, as well as the group-level dynamics that contribute to the occurrence and persistence of bullying behavior. Bullying can be covert or overt, sometimes unnoticed by superiors while also being widely known throughout an organization. The negative effects of workplace bullying are not limited to the targeted individuals, and can potentially lead to a decline in employee morale and shifts in organizational culture. Workplace bullying can also manifest as overbearing supervision, constant criticism and obstructing promotions.
Definitions
Although there is no universally accepted formal definition of workplace bullying, and some researchers question whether a single, uniform definition is possible due to its complex and multifaceted forms, several researchers have attempted to define it:
According to the widely used definition from Olweus, "[Workplace bullying is] a situation in which one or more persons systematically and over a long period of time perceive themselves to be on the receiving end of negative treatment on the part of one or more persons, in a situation in which the person(s) exposed to the treatment has difficulty in defending themselves against this treatment".
According to Einarsen, Hoel, Zapf and Cooper, "Bullying at work means harassing, offending, socially excluding someone or negatively affecting someone's work tasks. In order for the label bullying (or mobbing) to be applied to a particular activity, interaction or process it has to occur repeatedly and regularly (e.g. weekly) and over a period of time (e.g. about six months). Bullying is an escalated process in the course of which the person confronted ends up in an inferior position and becomes the target of systematic negative social acts."
According to Tracy, Lutgen-Sandvik, and Alberts, researchers associated with the Arizona State University's Project for Wellness and Work-Life, workplace bullying is most often "a combination of tactics in which numerous types of hostile communication and behaviour are used"
Gary and Ruth Namie define workplace bullying as "repeated, health-harming mistreatment, verbal abuse, or conduct which is threatening, humiliating, intimidating, or sabotage that interferes with work or some combination of the three."
Pamela Lutgen-Sandvik expands this definition, stating that workplace bullying is "persistent verbal and nonverbal aggression at work, that includes personal attacks, social ostracism, and a multitude of other painful messages and hostile interactions."
Catherine Mattice and Karen Garman define workplace bullying as "systematic aggressive communication, manipulation of work, and acts aimed at humiliating or degrading one or more individual that create an unhealthy and unprofessional power imbalance between bully and target(s), result in psychological consequences for targets and co-workers, and cost enormous monetary damage to an organization's bottom line"
Dr. Jan Kircher attempts to reframe workplace bullying, which she calls persistent workplace aggression, from an issue thought about primarily through the lens of individual conflict to an issue of organizational culture, arguing, "One of the biggest misconceptions that people have about workplace bullying is that it is similar to conflict and therefore, persistent workplace aggression is handled like conflict." However, according to Kircher, this approach is detrimental, and actually prevents organizations from being able to effectively prevent, handle or resolve bullying situations in the work environment.
The most common type of complaint filed with the U.S. Equal Employment Opportunity Commission involves retaliation, where an employer harasses or bullies an employee for objecting to illegal discrimination. Patricia Barnes, author of Surviving Bullies, Queen Bees & Psychopaths in the Workplace, argues that employers that bully are a critical but often overlooked aspect of the problem in the United States.
Because it can occur in a variety of contexts and forms, it is also useful to define workplace bullying by the key features that these behaviours possess. Bullying is characterized by:
Repetition (occurs regularly)
Duration (is enduring)
Escalation (increasing aggression)
Power disparity (the target lacks the power to successfully defend themselves)
Attributed intent
This distinguishes bullying from isolated behaviours and other forms of job stress and allows the term workplace bullying to be applied in various contexts and to behaviors that meet these characteristics. Many observers agree that bullying is often a repetitive behavior. However, some experts who have dealt with a great many people who report abuse also categorize some once-only events as bullying, for example, with cases where there appear to be severe sequelae. Expanding the common understanding of bullying to include single, severe episodes also parallels the legal definitions of sexual harassment in the US.
According to Pamela Lutgen-Sandvik, the lack of unifying language to name the phenomenon of workplace bullying is a problem because without a unifying term or phrase, individuals have difficulty naming their experiences of abuse, and therefore have trouble pursuing justice against the bully. Unlike sexual harassment, which named a specific problem and is now recognized in the law of many countries (including the U.S.), workplace bullying is still being established as a relevant social problem and is in need of a specific vernacular.
Euphemisms intended to trivialize bullying and its impact on bullied people include: incivility, disrespect, difficult people, personality conflict, negative conduct, and ill treatment. Bullied people are labelled as insubordinate when they resist the bullying treatment.
There is no exact definition for bullying behaviours in the workplace, which is why different terms and definitions are common. For example, mobbing is a commonly used term in France and Germany, where it refers to a "mob" of bullies rather than a single bully, a usage not often seen in other countries. In the United States, aggression and emotional abuse are frequently used terms, whereas harassment is the term preferred in Finland. Workplace bullying is the term primarily used in Australia, the UK, and Northern Europe. While the terms "harassment" and "mobbing" are often used to describe bullying behaviours, "workplace bullying" tends to be the most commonly used term in the research community.
Statistics
Approximately 72% of bullies outrank their victims.
Prevalence
Research suggests that a significant number of people are exposed to persistent workplace bullying, with a majority of studies reporting a prevalence of 10 to 15% in Europe and North America. This figure can vary dramatically depending on what definition of workplace bullying is used.
Statistics from the 2007 WBI-Zogby survey show that 13% of U.S. employees report being bullied currently, 24% say they have been bullied in the past and an additional 12% say they have witnessed workplace bullying. Nearly half of all American workers (49%) report that they have been affected by workplace bullying, either being a target themselves or having witnessed abusive behaviour against a co-worker.
Although socioeconomic factors may play a role in the abuse, researchers from the Project for Wellness and Work-Life suggest that "workplace bullying, by definition, is not explicitly connected to demographic markers such as sex and ethnicity".
According to the 2015 National Health Interview Survey Occupational Health Supplement (NHIS-OHS), the national prevalence rate for workers reporting having been threatened, bullied, or harassed by anyone on the job was 7.4%.
In 2008, Dr. Judy Fisher-Blando wrote a doctoral dissertation, Aggressive Behaviour: Workplace Bullying and Its Effect on Job Satisfaction and Productivity. The study determined that almost 75% of employees surveyed had been affected by workplace bullying, whether as a target or a witness. Further research examined the types of bullying behaviour and organizational support.
Gender
In terms of gender, the Workplace Bullying Institute (2007) states that women appear to be at greater risk of becoming a bullying target, as 57% of those who reported being targeted for abuse were women. Men are more likely to participate in aggressive bullying behaviour (60%); however, when the bully is a woman, her target is more likely to be a woman as well (71%).
In 2015, the National Health Interview Survey found a higher prevalence of being threatened, bullied, or harassed among women workers (8%) than among men.
However, varying results have been found. Samnani and Singh (2012) reviewed 20 years of research and concluded that the findings were too inconsistent to support differences across gender. Carter et al. (2013) found that male staff reported a higher prevalence of workplace bullying within UK healthcare.
It is also important to consider whether there may be gender differences in levels of reporting.
Race
Race also may play a role in the experience of workplace bullying. According to the Workplace Bullying Institute (2007), the comparison of reported combined bullying (current plus ever bullied) prevalence percentages in the USA reveals the pattern from most to least:
Hispanics (52.1%)
Blacks (46%)
Whites (33.5%)
Asians (30.6%)
The reported rates of witnessing bullying were:
Asians (28.5%)
Blacks (21.1%)
Hispanics (14%)
Whites (10.8%)
The percentages of those reporting that they have neither experienced nor witnessed mistreatment were:
Asians (57.3%)
Whites (49.7%)
Hispanics (32.2%)
Blacks (23.4%)
Research psychologist Tony Buon published one of the first reviews of bullying in China in PKU Business Review in 2005.
Marital status
Higher prevalence rates for experiencing a hostile work environment were identified for divorced or separated workers compared to married workers, widowed workers, and never married workers.
Education
Higher prevalence rates for experiencing a hostile work environment were identified for workers with some college education or workers with a high school diploma or GED, compared to workers with less than a high school education.
Age
Lower prevalence rates for experiencing a hostile work environment were identified for workers aged 65 and older compared to workers in other age groups.
With respect to age, conflicting findings have been reported. A study by Einarsen and Skogstad (1996) indicates older employees tend to be more likely to be bullied than younger ones.
Industry
The prevalence of a hostile work environment varies by industry. In 2015, the broad industry category with the highest prevalence was healthcare and social assistance (10%). According to the Bureau of Labor Statistics, 16,890 workers in private industry experienced physical trauma from nonfatal workplace violence in 2016.
Occupation
The prevalence of a hostile work environment also varies by occupation. In 2015, the occupation groups with the highest prevalence were protective services (24%) and community and social services (15%).
Within UK healthcare, it has been found that 20% of staff have experienced bullying, and 43% witnessed bullying, with managers being the most common source of bullying.
Disability
In the UK's National Health Service, individuals with disabilities are also at a higher risk of experiencing workplace bullying.
Profiling
Researchers Caitlin Buon and Tony Buon suggest that attempts to profile 'the bully' have been damaging. The assumed profile holds that 'the bully' is always aware of what they are doing, deliberately sets out to harm their 'victims', targets a particular individual or type of person, and has some kind of underlying personality flaw, insecurity, or disorder; this profile is unproven and lacks supporting evidence. The researchers suggest referring to workplace bullying as generic harassment, along with other forms of non-specific harassment, as this would enable employees to use less emotionally charged language for starting a dialogue about their experiences, rather than being repelled by having to define their experiences as those of victims. Tony Buon and Caitlin Buon also suggest that the prevailing perception and profile of the workplace bully does not facilitate interventions. They suggest that to make significant progress and achieve long-term behaviour change, organisations and individuals need to embrace the notion that everyone potentially houses 'the bully' within them and their organisations. Bullying exists in workplace cultures, belief systems, interactions, and emotional competencies, and cannot be transformed while externalization and demonization further the problem by profiling 'the bully' rather than talking about behaviours and interpersonal interactions.
Relationship among participants
Based on research by H. Hoel and C.L. Cooper, most perpetrators are supervisors. The second most common group is peers, followed by subordinates and customers. The three main relationships among participants in workplace bullying are:
Between supervisor and subordinate
Among co-workers
Employees and customers
Bullying may also occur between an organization and its employees.
Bullying behaviour by supervisors toward subordinates typically manifests as an abuse of power by the supervisor in the workplace. Bullying behaviours by supervisors may be associated with a culture of bullying and the management style of the supervisors. An authoritative management style, specifically, often includes bullying behaviours, which can make subordinates fearful and allow supervisors to bolster their authority over others.
If an organization wishes to discourage bullying in the workplace, strategies and policies must be put into place to dissuade and counter bullying behavior. Lack of monitoring or of punishment/corrective action will result in an organizational culture that supports/tolerates bullying.
In addition to supervisor–subordinate bullying, bullying behaviours also occur between colleagues. Peers can be either the target or the perpetrator. If workplace bullying happens among co-workers, witnesses will typically choose sides, either with the target or the perpetrator. Perpetrators usually "win", since witnesses do not want to be the next target. This outcome encourages perpetrators to continue their bullying behaviour. In addition, the sense of injustice experienced by a target might lead that person to become another perpetrator who bullies other colleagues who have less power than they do, thereby proliferating bullying in the organization.
Maarit Vartia, a workplace bullying researcher, found that 20% of interviewees who experienced workplace bullying attributed their being targeted to being different from others.
The third relationship in the workplace is between employees and customers. Although less frequent, such cases play a significant role in the efficiency of the organization. Overly stressed or distressed employees may be less able to perform optimally and can impact the quality of service overall.
The fourth relationship in the workplace is between the organization or system and its employees. An article by Andreas Liefooghe (2012) notes that many employees describe their employer as a "bully".
In these cases, the issue is not simply an organizational culture or environmental factors facilitating bullying, but bullying-like behaviour by an employer against an employee. Tremendous power imbalances between an organization and its employees enable the employer to "legitimately exercise" power (e.g., by monitoring and controlling employees) in a manner consistent with bullying.
Although the terminology of bullying traditionally implies an interpersonal relationship between the perpetrator and target, organizations' or other collectives' actions can constitute bullying both by definition and in their impacts on targets. However, while defining bullying as an interpersonal phenomenon is considered legitimate, classifying incidences of employer exploitation, retaliation, or other abuses of power against an employee as a form of bullying is often not taken as seriously.
Organizational culture
Bullying is seen to be prevalent in organizations where employees and managers feel that they have the support, or at least the implicit blessing, of senior managers to carry on their abusive and bullying behaviour. Vertical violence is a specific type of workplace violence based on the hierarchical or managerial structure present in many healthcare establishments. This type of workplace violence "is usually generated by a power imbalance, whether due to a real hierarchical structure or perceived by professionals. It generates feelings of humiliation, vulnerability, and helplessness in the victims, limiting their ability to develop competency and defend themselves" (Pérez-Fuentes et al., 2021, p. 2). Furthermore, new managers will quickly come to view this form of behaviour as acceptable and normal if they see others get away with it and are even rewarded for it.
When bullying happens at the highest levels, the effects may be far reaching. People may be bullied irrespective of their organizational status or rank, including senior managers, which indicates the possibility of a negative domino effect, where bullying may cascade downwards, as the targeted supervisors might offload their own aggression onto their subordinates. In such situations, a bullying scenario in the boardroom may actually threaten the productivity of the entire organisation.
Workplace bullying and occupational stress
The relationship between occupational stress and bullying was drawn in the matter of the UK Health and Safety Executive (HSE) issuing an Improvement Notice to the West Dorset General Hospital NHS Trust. This followed a complaint raised with the HSE by an employee who was off sick, having suffered from bullying in the workplace. His managers had responded by telling him that in the event of his returning to work it was unlikely that anything would be done about the bullying. The HSE found that the Trust did not have an occupational stress policy and directed them to create one in accordance with the soon-to-be-published HSE Management Standards. These are standards that managers should meet in their work if they are to ensure a safe workplace, as is required by the Health and Safety at Work Act 1974 as amended by the Management of Health and Safety at Work Regulations 1999, the latter directing that risks in the workplace must be identified, assessed and controlled. These risks include those hazards known to cause occupational stress. One of the six standards relates to managing relationships between employees, a matter in which the Trust had shown itself to be deficient.
UK Legal protection from workplace bullying
The six HSE Management Standards define a set of behaviours by managers that address the main reported causes of occupational stress. Managers who operate against the standards can readily be identified as workplace bullies: they have no regard for the demands placed on staff, remove control whenever possible, let staff struggle, allow bullying to run uncontrolled, and never let staff know what is going to happen next (mushroom management), in effect 'showing them who is in charge'. The standards define the main known causes of occupational stress, in accord with the DCS model, but also provide a 'bullying checklist'.
The HSE Management Standards
Demands – this includes issues such as workload, work patterns and the work environment
Control – how much say the person has in the way they do their work
Support – this includes the encouragement, sponsorship and resources provided by the organisation, line management and colleagues
Relationships – this includes promoting positive working to avoid conflict and dealing with unacceptable behaviour
Role – whether people understand their role within the organisation and whether the organisation ensures that they do not have conflicting roles
Change – how organisational change (large or small) is managed and communicated in the organisation
Geographical culture
Research investigating the acceptability of bullying behaviour across different cultures (e.g. Power et al., 2013) clearly shows that culture affects perceptions of what behaviour is acceptable.
National background also influences the prevalence of workplace bullying (Harvey et al., 2009; Hoel et al., 1999; Lutgen-Sandvik et al., 2007).
Humane orientation is negatively associated with the acceptability of work-related bullying.
Performance orientation is positively associated with the acceptance of bullying. Future orientation is negatively associated with the acceptability of bullying. A culture of femininity suggests that individuals who live and work in this kind of culture tend to value interpersonal relationships to a greater degree.
Three broad dimensions have been mentioned in relation to workplace bullying: power distance; masculinity versus femininity; and individualism versus collectivism (Lutgen-Sandvik et al., 2007).
In Confucian Asia, which has a higher performance orientation than Latin America and Sub-Saharan Africa, bullying may be seen as an acceptable price to pay for performance. The value Latin America holds for personal connections with employees and the higher humane orientation of Sub-Saharan Africa may help to explain their distaste for bullying. A culture of individualism in the US implies competition, which may increase the likelihood of workplace bullying situations.
Culture of fear
Ashforth discussed potentially destructive sides of leadership and identified what he referred to as petty tyrants, i.e., leaders who exercise a tyrannical style of management, resulting in a climate of fear in the workplace. Partial or intermittent negative reinforcement can create an effective climate of fear and doubt. When employees get the sense that bullies "get away with it", a climate of fear may be the result. Several studies have confirmed a relationship between bullying, on the one hand, and an autocratic leadership and an authoritarian way of settling conflicts or dealing with disagreements, on the other. An authoritarian style of leadership may create a climate of fear, where there is little or no room for dialogue and where complaining may be considered futile. In professions where workplace bullying is common, and employees do not receive sufficient support from their coworkers or managers, it often generates feelings of resignation that lead them to believe that the abuse is a normal and inevitable part of the job. In a study of public-sector union members, approximately one in five workers reported having considered leaving the workplace as a result of witnessing bullying taking place. Rayner explained these figures by pointing to the presence of a climate of fear in which employees considered reporting to be unsafe, where bullies had "got away with it" previously despite management knowing of the presence of bullying.
Kiss up kick down
The workplace bully may be respectful when talking to upper management but the opposite when it comes to their relationship with those whom they supervise: the "kiss up kick down" personality. Bullies tend to ingratiate themselves to their bosses while intimidating subordinates. They may be socially popular with others in management, including those who will determine their fate. Often, a workplace bully will have mastered kiss up kick down tactics that hide their abusive side from superiors who review their performance.
As a consequence of this kiss up kick down strategy:
A bully's mistakes are always concealed or blamed on underlings or circumstances beyond their control
A bully keeps the target under constant stress
A bully's power base is fear, not respect
A bully withholds information from subordinates and keeps the information flow top-down only
A bully blames conflicts and problems on subordinates' lack of competence, poor attitude, or character flaws
A bully creates an unnatural work environment where people constantly walk on eggshells and are compelled to behave in ways they normally would not
The flow of blame in an organization may be a primary indicator of that organization's robustness and integrity. Blame flowing downwards, from management to staff, or laterally between professionals or partner organizations, indicates organizational failure. In a blame culture, problem-solving is replaced by blame-avoidance. Confused roles and responsibilities also contribute to a blame culture. Blame culture reduces the capacity of an organization to take adequate measures to prevent minor problems from escalating into uncontrollable situations. Several issues identified in organizations with a blame culture contradict the best practices of high-reliability organizations. Blame culture is considered a serious issue in healthcare organizations by the World Health Organization, which recommends promoting a no-blame culture, or just culture, as a means to increase patient safety.
Fight or flight
The most typical reactions to workplace bullying are to do with the survival instinct – "fight or flight" – and these are probably a victim's healthier responses to bullying. Flight is often a response to bullying. It is very common, especially in organizations in which upper management cannot or will not deal with the bullying. In hard economic times, however, flight may not be an option, and fighting may be the only choice.
Fighting the bullying can require near-heroic action, especially if the bullying targets just one or two individuals. It can also be a difficult challenge. There are times when confrontation is called for. First, there is always a chance that the bully boss is labouring under the impression that this is the way to get things done and does not recognize the havoc being wrought on subordinates.
Typology of bullying behaviours
With some variations, the following typology of workplace bullying behaviours has been adopted by a number of academic researchers. The typology uses five different categories.
Threat to professional status – including belittling opinions, public professional humiliation, accusations regarding lack of effort, intimidating use of discipline or competence procedures.
Threat to personal standing – including undermining personal integrity, destructive innuendo and sarcasm, making inappropriate jokes about the target, persistent teasing, name calling, insults, intimidation.
Isolation – including preventing access to opportunities, physical or social isolation, withholding necessary information, keeping the target out of the loop, ignoring or excluding.
Overwork – including undue pressure, impossible deadlines, unnecessary disruptions.
Destabilisation – including failure to acknowledge good work, allocation of meaningless tasks, removal of responsibility, repeated reminders of blunders, setting the target up to fail, shifting goal posts without telling the target.
Tactics
Research by the Workplace Bullying Institute suggests that the following are the 25 most common workplace bullying tactics:
Falsely accused someone of "errors" not actually made (71%).
Stared, glared, was nonverbally intimidating and was clearly showing hostility (68%).
Unjustly discounted the person's thoughts or feelings ("oh, that's silly") in meetings (64%).
Used the "silent treatment" to "ice out" and separate from others (64%).
Exhibited presumably uncontrollable mood swings in front of the group (61%).
Made up rules on the fly that even they did not follow (61%).
Disregarded satisfactory or exemplary quality of completed work despite evidence (discrediting) (58%).
Harshly and constantly criticized, having a different standard for the target (57%).
Started, or failed to stop, destructive rumours or gossip about the person (56%).
Encouraged people to turn against the person being tormented (55%).
Singled out and isolated one person from other co-workers, either socially or physically (54%).
Publicly displayed gross, undignified, but not illegal, behaviour (53%).
Yelled, screamed, threw tantrums in front of others to humiliate a person (53%).
Stole credit for work done by others (plagiarism) (47%).
Abused the evaluation process by lying about the person's performance (46%).
Declared target "insubordinate" for failing to follow arbitrary commands (46%).
Used confidential information about a person to humiliate privately or publicly (45%).
Retaliated against the person after a complaint was filed (45%).
Made verbal put-downs/insults based on gender, race, accent, age or language, disability (44%).
Assigned undesirable work as punishment (44%).
Created unrealistic demands (workload, deadlines, duties) for person singled out (44%).
Launched a baseless campaign to oust the person; effort not stopped by the employer (43%).
Encouraged the person to quit or transfer rather than to face more mistreatment (43%).
Sabotaged the person's contribution to a team goal and reward (41%).
Ensured failure of person's project by not performing required tasks, such as sign-offs, taking calls, working with collaborators (40%).
Abusive workplace behaviours
According to Bassman, common abusive workplace behaviours are:
Disrespecting and devaluing the individual, often through disrespectful and devaluing language or verbal abuse
Overwork and devaluation of personal life (particularly salaried workers who are not compensated)
Harassment through micromanagement of tasks and time
Overevaluation and manipulation of information (for example, concentration on negative characteristics and failures, setting up subordinates for failure).
Managing by threat and intimidation
Stealing credit and taking unfair advantage
Preventing access to opportunities
Downgrading an employee's capabilities to justify downsizing
Impulsive destructive behaviour
According to Hoel and Cooper, common abusive workplace behaviours are:
Ignoring opinions and views
Withholding information in order to affect the target's performance
Exposing the target to an unmanageable workload
Threatening employees' personal self-esteem and work status
Giving tasks with unreasonable or impossible targets or deadlines
Ordering the target to do work below competence
Ignoring or presenting hostility when the target approaches
Humiliation or ridicule in connection with work
Excessive monitoring of a target's work (see micromanagement)
Spreading gossip
Insulting or making offensive remarks about the target's person (i.e. habits and background), attitudes, or private life
Removing or replacing key areas of responsibility with more trivial or unpleasant tasks.
According to Faghihi, some abusive workplace behaviors include:
Excessive workload
Placement in an area where the worker has less experience or is uncomfortable
Low salary
Working overtime without benefits
Poor work environment
Increase in stress in the workplace
Lack of facilities
Abusive cyberbullying in the workplace can have serious socioeconomic and psychological consequences on the victim. Workplace cyberbullying can lead to sick leave due to depression which in turn can lead to loss of profits for the organisation.
In specific professions
Academia
Several aspects of academia, such as the generally decentralized nature of academic institutions and the particular recruitment and career procedures, lend themselves to the practice of bullying and discourage its reporting and mitigation.
Blue-collar jobs
Bullying has been identified as prominent in blue-collar jobs, including on oil rigs and in mechanical areas and machine shops, warehouses and factories. It is thought that intimidation and fear of retribution cause decreased incident reports, which, in the socioeconomic and cultural milieu of such industries, would likely lead to a vicious circle. This is often used in combination with manipulation of facts and coercion to gain favour among higher-ranking administrators. For example, an investigation conducted following a hazing incident at the Portland Bureau of Transportation within the city government of Portland, Oregon, found ritual hazing kept hidden for years under the guise of "no snitching", where whistleblowing was punished and loyalty was praised. Two-thirds of the interviewed employees in this investigation declared they deemed the best way to deal with the workplace's bad behaviors was "not to get involved", as they "feared retaliation if they did intervene or report the problems."
Information technology
A culture of bullying is common in information technology (IT), leading to high sickness rates, low morale, poor productivity and high staff turnover. Deadline-driven project work and stressed-out managers take their toll on IT workers.
Legal profession
Bullying in the legal profession is believed to be more common than in some other professions. It is believed that its adversarial, hierarchical tradition contributes towards this. Women, trainees and solicitors who have been qualified for five years or less are more impacted, as are ethnic minority lawyers and lesbian, gay and bisexual lawyers.
Medicine
Bullying in the medical profession is common, particularly of student or trainee doctors. In a study on the violence that occurs in healthcare, it was found that from 2002 to 2013 alone, the occurrence of abuse became four times as likely. It is thought that this is at least in part an outcome of conservative traditional hierarchical structures and teaching methods in the medical profession which may result in a bullying cycle.
Military
Bullying exists to varying degrees in the military of some countries, often involving various forms of hazing or abuse by higher members of the military hierarchy.
Nursing
Bullying has been identified as being particularly prevalent in the nursing profession, although the reasons are not clear. It is thought that relational aggression (psychological aspects of bullying such as gossiping and intimidation) is relevant. Relational aggression has been studied amongst girls but not so much amongst adult women. A lot of bullying directed towards nurses is inflicted by patients, and nurses are at higher risk because they have the most patient exposure of any healthcare professional. With the current shortage of nurses, nurses are seeing more patients for longer amounts of time, which can lead to increased stress levels if they are a victim of bullying.
Teaching
School teachers are commonly the subject of bullying but they are also sometimes the originators of bullying within a school environment.
Volunteering
Bullying can be common in volunteering settings. For example, one study found bullying to be the most significant source of complaints amongst volunteers. Volunteers often do not have access to protections available to paid employees, so while laws may indicate that bullying is a violation of rights, volunteers may have no means to address it.
Forms
Tim Field suggested that workplace bullying takes these forms:
Serial bullying – the source of all dysfunction can be traced to one individual, who picks on one employee after another and destroys them, then moves on. Probably the most common type of bullying.
Secondary bullying – the pressure of having to deal with a serial bully causes the general behaviour to decline and sink to the lowest level.
Pair bullying – this takes place with two people, one active and verbal, the other often watching and listening.
Gang bullying or group bullying – is a serial bully with colleagues. Gangs can occur anywhere, but flourish in corporate bullying climates. It is often called mobbing and usually involves scapegoating and victimisation.
Vicarious bullying – two parties are encouraged to fight. This is the typical "triangulation" where the aggression gets passed around.
Regulation bullying – where a serial bully forces their target to comply with rules, regulations, procedures or laws regardless of their appropriateness, applicability or necessity.
Residual bullying – after the serial bully has left or been fired, the behaviour continues. It can go on for years.
Legal bullying – the bringing of a vexatious legal action to control and punish a person.
Pressure bullying or unwitting bullying – having to work to unrealistic time scales or inadequate resources.
Corporate bullying – where an employer abuses an employee with impunity, knowing the law is weak and the job market is soft.
Organizational bullying – a combination of pressure bullying and corporate bullying. Occurs when an organization struggles to adapt to changing markets, reduced income, cuts in budgets, imposed expectations and other extreme pressures.
Institutional bullying – entrenched and is accepted as part of the culture.
Client bullying – an employee is bullied by those they serve, for instance subway attendants or public servants.
Cyberbullying – the use of information and communication technologies to support deliberate, repeated, and hostile behaviour by an individual or group, that is intended to harm others.
Adult bullying can come in an assortment of forms. There are about five distinctive types of adult bullies. A narcissistic bully is described as a self-centred person whose egotism is frail and who possesses the need to put others down. An impulsive bully is someone whose bullying is driven by stress or by being upset in the moment. A physical bully uses physical injury and the threat of harm to abuse their victims, while a verbal bully uses demeaning language and cynicism to debase their victims. Lastly, a secondary adult bully is portrayed as a person who did not start the initial bullying but participates afterwards to avoid being bullied themselves ("Adult Bullying").
Emotional intelligence
Workplace bullying is reported to be far more prevalent than perhaps commonly thought. It seems to be particularly widespread in healthcare organizations; 80% of nurses report experiencing workplace bullying. Similar to the school environment for children, the work environment typically places groups of adult peers together in a shared space on a regular basis. In such a situation, social interactions and relationships are of great importance to the function of the organizational structure and in pursuing goals. The emotional consequences of bullying put an organization at risk of losing victimized employees. Bullying also contributes to a negative work environment, is not conducive to necessary cooperation and can lessen productivity at various levels.
Bullying in the workplace is associated with negative responses to stress. The ability to manage emotions, especially emotional stress, seems to be a consistently important factor in different types of bullying. The workplace in general can be a stressful environment, so a negative way of coping with stress or an inability to do so can be particularly damning. Workplace bullies may have high social intelligence and low emotional intelligence (EI). In this context, bullies tend to rank high on the social ladder and are adept at influencing others. The combination of high social intelligence and low empathy is conducive to manipulative behaviour, which is how Hutchinson (2013) characterizes workplace bullying. In working groups where employees have low EI, workers can be persuaded to engage in unethical behaviour. With the bullies' persuasion, the work group is socialized in a way that rationalizes the behaviour, and makes the group tolerant or supportive of the bullying.
Hutchinson & Hurley (2013) make the case that EI and leadership skills are both necessary for bullying intervention in the workplace, and illustrate the relationship between EI, leadership and reductions in bullying. EI and ethical behaviour among other members of the work team have been shown to have a significant impact on ethical behaviour of nursing teams. Higher EI is linked to improvements in the work environment and is an important moderator between conflict and reactions to conflict in the workplace. The self-awareness and self-management dimensions of EI have both been illustrated to have strong positive correlations with effective leadership and the specific leadership ability to build healthy work environments and work culture.
Related concepts
Abusive supervision
Abusive supervision overlaps with workplace bullying in the workplace context. Research suggests that 75% of workplace bullying incidents are perpetrated by hierarchically superior agents. Abusive supervision differs from related constructs such as supervisor bullying and undermining in that it does not describe the intentions or objectives of the supervisor.
Power and control
A power and control model has been developed for the workplace, dividing bullying behaviours into several categories.
Workplace mobbing
Workplace mobbing overlaps with workplace bullying. The concept originated from the study of animal behaviour. It concentrates on bullying by a group.
Workplace incivility
Workplace bullying overlaps to some degree with workplace incivility but tends to encompass more intense and typically repeated acts of disregard and rudeness. Negative spirals of increasing incivility between organizational members can result in bullying, but isolated acts of incivility are not conceptually bullying despite the apparent similarity in their form and content. In bullying, the intent of harm is less ambiguous, an unequal balance of power (both formal and informal) is more salient, and the target of bullying feels threatened, vulnerable and unable to defend themself against negative recurring actions.
Lateral/Vertical Violence
These terms are often used within nursing and healthcare. Lateral violence (also known as horizontal violence) refers to bullying behaviours exhibited by colleagues, while vertical violence refers to bullying behaviours exhibited by supervisors towards the employees below them hierarchically. Despite the use of the term violence, these terms often do not encompass physically aggressive behaviours.
Personality characteristics
Executives
In 2005, psychologists Belinda Board and Katarina Fritzon at the University of Surrey, UK, interviewed and gave personality tests to high-level British executives and compared their profiles with those of criminal psychiatric patients at Broadmoor Hospital in the UK. They found that three out of eleven personality disorders were actually more common in executives than in the disturbed criminals. They were:
Histrionic personality disorder: including superficial charm, insincerity, egocentricity and manipulation
Narcissistic personality disorder: including grandiosity, self-focused lack of empathy for others, exploitativeness and independence.
Obsessive-compulsive personality disorder: including perfectionism, excessive devotion to work, rigidity, stubbornness and dictatorial tendencies.
They described these business people as successful psychopaths and the criminals as unsuccessful psychopaths.
According to leading leadership academic Manfred F.R. Kets de Vries, it seems almost inevitable these days that there will be some personality disorders in a senior management team.
Industrial/organizational psychology research has also examined the types of bullying that exist among business professionals and the prevalence of this form of bullying in the workplace as well as ways to measure bullying empirically.
Psychopathy
Bullying is used by corporate psychopaths as a tactic to humiliate subordinates. It is also used as a tactic to scare, confuse and disorient those who may be a threat to the activities of the corporate psychopath. Using meta-analysis of data from hundreds of UK research papers, Boddy concluded that 36% of bullying incidents were caused by the presence of corporate psychopaths. According to Boddy there are two types of bullying:
Predatory bullying – the bully just enjoys bullying and tormenting vulnerable people for the sake of it.
Instrumental bullying – the bullying is for a purpose, helping the bully achieve their goals.
A corporate psychopath uses instrumental bullying to further their goals of promotion and power by causing confusion and employing divide-and-rule tactics.
People with high scores on a psychopathy rating scale are more likely to engage in bullying, crime and drug use than other people. Hare and Babiak noted that about 29% of corporate psychopaths are also bullies. Other research has also shown that people with high scores on a psychopathy rating scale were more likely to engage in bullying, again indicating that psychopaths tend to be bullies in the workplace.
A workplace bully or abuser will often have issues with social functioning. These types of people often have psychopathic traits that are difficult to identify in the hiring and promotion process. These individuals often lack anger management skills and have a distorted sense of reality. Consequently, when confronted with the accusation of abuse, the abuser is not aware that any harm was done.
Narcissism
Narcissism, lack of self-regulation, lack of remorse and lack of conscience have been identified as traits displayed by bullies. These traits are shared with psychopaths, indicating that there is some theoretical cross-over between bullies and psychopaths. In 2007, researchers Catherine Mattice and Brian Spitzberg at San Diego State University, USA, found that narcissism revealed a positive relationship with bullying. Narcissists were found to prefer indirect bullying tactics (such as withholding information that affects others' performance, ignoring others, spreading gossip, constantly reminding others of mistakes, ordering others to do work below their competence level, and excessively monitoring others' work) rather than direct tactics (such as making threats, shouting, persistently criticizing, or making false allegations). The research also revealed that narcissists are highly motivated to bully, and that to some extent, they are left with feelings of satisfaction after a bullying incident occurs.
Machiavellianism
According to Namie, Machiavellians manipulate and exploit others to advance their perceived personal agendas. In his view, Machiavellianism represents one of the core components of workplace bullying.
Health effects
According to Gary and Ruth Namie, as well as Tracy, et al., workplace bullying can harm the health of the targets of bullying. Organizations are beginning to take note of workplace bullying because of the costs to the organization in terms of the health of their employees.
According to scholars at The Project for Wellness and Work-Life at Arizona State University, "workplace bullying is linked to a host of physical, psychological, organizational, and social costs." Stress is the most predominant health effect associated with bullying in the workplace. Research indicates that workplace stress has significant negative effects that are correlated to poor mental health and poor physical health, resulting in an increase in the use of "sick days" or time off from work.
The negative effects of bullying are so severe that posttraumatic stress disorder (PTSD) and even suicide are not uncommon. Tehrani found that 1 in 10 targets experience PTSD, and that 44% of her respondents experienced PTSD similar to that of battered women and victims of child abuse. Matthiesen and Einarsen found that up to 77% of targets experience PTSD.
In addition, co-workers who witness workplace bullying can also have negative effects, such as fear, stress, and emotional exhaustion. Those who witness repetitive workplace abuse often choose to leave the place of employment where the abuse took place. Workplace bullying can also hinder the organizational dynamics such as group cohesion, peer communication, and overall performance.
According to the 2012 survey conducted by the Workplace Bullying Institute (516 respondents), anticipation of the next negative event is the most common psychological symptom of workplace bullying, reported by 80% of targets. Panic attacks afflict 52%. Half (49%) of targets reported being diagnosed with clinical depression. Sleep disruption, loss of concentration, mood swings, pervasive sadness and insomnia were also common (with rates ranging from 50% to 77%). Nearly three-quarters (71%) of targets sought treatment from a physician. Over half (63%) saw a mental health professional for their work-related symptoms. Respondents reported other symptoms that can be exacerbated by stress: migraine headaches (48%), irritable bowel disorder (37%), chronic fatigue syndrome (33%) and sexual dysfunction (27%).
Depression
Workplace depression can occur in companies of all sizes and professions, and can have negative effects on profit growth. Stress factors that are unique to one's working environment, such as bullying from co-workers or superiors and poor social support for high pressure occupations, can build over time and create inefficient work behavior in depressed individuals. In addition, inadequate or negative communication techniques can further drive an employee to become disconnected from the company's mission and goals. One way that companies can combat the destructive consequences associated with employee depression is to offer more support for counseling and consider bringing in experts to educate staff on the consequences of bullying. Ignoring the problem of depression and decreased workplace performance creates intergroup conflict and lasting feelings of disillusionment.
Financial costs to employers
Several studies have attempted to quantify the cost of bullying to an organization.
According to the National Institute for Occupational Safety and Health (NIOSH), mental illness among the workforce leads to a loss in employment amounting to $19 billion and a drop in productivity of $3 billion.
In a report commissioned by the ILO, Hoel, Sparks, & Cooper did a comprehensive analysis of the costs involved in bullying. They estimated a cost of £1.88 billion plus the cost of lost productivity.
Based on the replacement cost of those who leave as a result of being bullied or witnessing bullying, Rayner and Keashly (2004) estimated that for an organization of 1,000 people, the cost would be $1.2 million US. This estimate did not include the cost of litigation should victims bring suit against the organization.
A recent Finnish study of more than 5,000 hospital staff found that those who had been bullied had 26% more certified sickness absence than those who were not bullied, when figures were adjusted for base-line measures one year prior to the survey (Kivimäki et al., 2000). According to the researchers these figures are probably an underestimation as many of the targets are likely to have been bullied already at the time the base-line measures were obtained.
The city government of Portland, Oregon, was sued by a former employee for hazing abuse on the job. The victim sought damages of $250,000 and named the city, as well as the perpetrator Jerry Munson, a "lead worker" for the organization who was in a position of authority. The suit stated a supervisor was aware of the issue, but "failed to take any form of immediate appropriate and corrective action to stop it". After an investigation, the municipal government settled for US$80,000 after it believed that "there is risk the city may be found liable."
Researcher Tamara Parris discusses how employers need to be more attentive in managing various discordant behaviors such as bullying in the workplace, as they not only create a financial cost to the organization, but also erode the company's human resource assets. In an effort to bring about change in the workplace, Flynn discusses how employers need not only to support regulations set in place, but also to support their staff when such instances occur.
By country
Workplace bullying is known in some Asian countries as:
Japan: power harassment
South Korea: gapjil
Singapore: In an informal survey among 50 employees in Singapore, 82% said they had experienced toxicity from their direct superior or colleagues in their careers, with some 33.3% experiencing it on a daily basis. Other reported experiences included being considered a troublemaker for failing to agree with the boss, always having to give praise to the superior, and senior colleagues with a tendency to shout at people. Many respondents reported that they had to quit because of the toxic environment. Other surveys make clear that companies are often aware of the problem but do nothing. A Kantar survey in 2019 suggested that employees in Singapore were the most likely to be made to "feel uncomfortable" by their employers, compared with those in the other countries that the company polled.
History
Research into workplace bullying stems from the initial Scandinavian investigations into school bullying in the late 1970s.
Legal aspects
See also
References
Academic journals
Aglietta M, Reberioux A, Babiak P. "Psychopathic manipulation in organizations: pawns, patrons and patsies", in Cooke A, Forth A, Newman J, Hare R (Eds), International Perspectives and Psychopathy, British Psychological Society, Leicester, pp. 12–17. (1996)
Aglietta, M.; Reberioux, A.; Babiak, P. "Psychopathic manipulation at work", in Gacono, C.B. (Ed), The Clinical and Forensic Assessment of Psychopathy: A Practitioner's Guide, Erlbaum, Mahwah, NJ, pp. 287–311. (2000)
Business ethics
Organizational behavior
Abuse
Ethically disputed working conditions
Deviance (sociology)
1990s neologisms | Workplace bullying | [
"Biology"
] | 10,291 | [
"Behavior",
"Abuse",
"Aggression",
"Organizational behavior",
"Deviance (sociology)",
"Human behavior"
] |
4,082,887 | https://en.wikipedia.org/wiki/Worth%204%20dot%20test | The Worth Four Light Test, also known as the Worth's four dot test or W4LT, is a clinical test mainly used for assessing a patient's degree of binocular vision and binocular single vision. Binocular vision involves an image being projected by each eye simultaneously into an area in space and being fused into a single image. The Worth Four Light Test is also used in detection of suppression of either the right or left eye. Suppression occurs during binocular vision when the brain does not process the information received from either of the eyes. This is a common adaptation to strabismus, amblyopia and aniseikonia.
The W4LT can be performed by the examiner at two distances: at near (33 cm from the patient) and at far (6 m from the patient). At both testing distances the patient is required to wear red-green goggles (with one red lens over one eye, usually the right, and one green lens over the left). When performing the test at far (distance), the W4LT instrument is composed of a silver box (mounted on the wall in front of the patient), which has four lights inside it. The lights are arranged in a diamond formation, with a red light at the top, two green lights at either side (left and right) and a white light at the bottom. When performing the test at near (at 33 cm) the lights are arranged in exactly the same manner (diamond formation), with the difference being that at near, the lights are located in a hand-held instrument which is similar to a light torch.
Because the red filter blocks the green light and the green filter blocks the red light, it is possible to determine if the patient is using both eyes simultaneously and in a coordinated manner. With both eyes open, a patient with normal binocular vision will appreciate four lights. If the patient either closes or suppresses an eye they will see either two or three lights. If the patient does not fuse the images of the two eyes, they will see five lights (diplopia).
Indications for use
The Worth Four Light Test is indicated for use when assessing an individual's binocular functions, the ability of the eyes to work in coordination. It can be used to develop a diagnosis or to support or confirm an initial diagnosis. It can be used to assess whether the individual has a normal or abnormal binocular single vision (BSV) response, and to establish whether the patient can fuse the images received from each eye into a single percept of 4 lights. The test can also be performed with a prism in place in individuals with a strabismus; fusion is considered present if 4 lights are maintained, with or without the use of the prism. The W4LT can also be indicated when aiding a person to develop and strengthen their fusional capacities.
If the images cannot be fused, the W4LT is still indicated to help determine whether an individual appreciates diplopia (double vision) or is suppressing the image from one eye. In cases of manifest strabismus the test can help in determining the nature and type of the diplopia, or which eye is suppressing. It is therefore indicated in cases of a suspected central suppression scotoma, as it can detect when the lights are not appreciated by the eye with the scotoma; however, in some cases of minimal deviation, as demonstrated in a microtropic deviation, a normal response of 4 lights may be reported. In these patients the test can still be used to demonstrate the presence of peripheral fusion and to assess whether bifoveal fixation is present.
Other indications for the test include establishing an individual's dominant eye compared to the other, and evaluating reduced monocular visual acuity which shows no improvement on pinhole testing.
Whilst there are no contraindications to the W4LT, caution is needed in interpreting the results of individuals with BSV in natural conditions, as they may show a diplopic response under the dissociation of the test. Individuals who have abnormal retinal correspondence (ARC) may also provide an unexpected response, and those with a misaligned visual axis who in natural conditions suppress may actually provide a diplopic response upon testing.
Method of assessment
The Worth Four Light Test is relatively simple to undertake. First you must place the red/green goggles over the patient's eyes, with the red lens traditionally placed over the right eye.
Next you must dim the room lighting. This allows the patient to see the lights better.
For a distance measurement, you should have the patient set up six metres away from the light source. For a near measurement, the test should be performed at approximately one third of a metre, or thirty-three centimetres, with a handheld Worth's Four Lights torch.
Then, ask the patient what they see. They should respond with "I see … number of lights" provided they have understood what you have asked them.
Ask them to describe the lights to you. You must ask about the colour of the lights. If they see five lights, ask whether the green dots are higher or lower than the red dots. Ask about the positioning of the dots, for example whether the red dots are to the left or the right of the green dots. Also ask if the dots are flashing on and off or switching between red and green.
This series of questions is essential in order to ensure you correctly record exactly what the patient is seeing, so that the clinician can interpret the patient's results and then make an accurate diagnosis.
Recording and interpreting outcomes
When recording results for the W4LT it is important to ask the patient a series of questions in order to ensure you correctly record exactly what they are seeing. This is essential in order to interpret the patient's results and then make an accurate diagnosis.
The questions are:
How many lights are you seeing?
What colour are they? Where are they located?
Are all the lights in line? Or are some higher than the others?
Do all the lights show up at one time, or are they flashing on and off?
When recording results it is important to indicate the test used, a description of the lights seen and an indication of what the result means. It is also important to note the distance at which the test was conducted and whether or not the patient wore their own refractive correction.
Where communication is difficult between clinician and patient, such as in the presence of a language barrier, or when working with a child, it may be a good idea to get the patient to draw what they are seeing. The clinician can then interpret the results from the image.
Results
There are a number of possible results demonstrated by the W4LT; a short sketch summarising how a patient's report maps to these interpretations follows the result descriptions below.
Normal retinal correspondence
In the absence of a deviation, the patient will see the lights exactly as they appear. When questioned they will report that:
They see 4 lights, 1 red, 2 green and one mixed colour
The two green lights will be to either side with the red light slightly above them and the mixed coloured light below the red
This is recorded as :
W4LT (D): 4 lights (BSV)
Abnormal retinal correspondence
It will be demonstrated on cover test that the patient has a manifest deviation. When questioned about the lights the patient will give a normal response and will see the lights exactly as they appear.
They will report that:
They see 4 lights, 1 red, 2 green and one mixed colour
The two green lights will be to either side with the red light slightly above them and the mixed coloured light below the red
This is recorded as :
W4LT (D): 4 lights (ARC)
NB: ARC can only be confirmed in conjunction with additional clinical tests for retinal correspondence. The patient must demonstrate a manifest deviation on cover test. Despite their apparent deviation, when tested with the W4LT they will produce a normal BSV result, indicating the presence of Abnormal Retinal Correspondence.
Esotropia
In an Esotropic (ET) deviation, the patient will experience uncrossed diplopia. When questioned about the position of the lights, they will report that:
They see 5 lights, 2 red and 3 green
The lights are horizontally displaced, seen side by side
The 2 red lights from the right eye are seen on the right side
The 3 green lights from the left eye are seen on the left side
This is recorded as: W4LT (D): 5 lights (Uncrossed Diplopia) ET
NB: The clinician will be unable to indicate which eye is the deviating eye based on these results alone. The results should be interpreted with other clinical findings in order to produce a final diagnosis.
Exotropia
In an Exotropic (XT) deviation, the patient will experience crossed diplopia.
When questioned about the position of the lights, they will report that:
They see 5 lights, 2 Red and 3 Green
The lights are horizontally displaced, and are seen side by side
The 2 Red Lights from the Right eye are on the left side
The 3 Green lights from the Left eye are on the right side
This is recorded as:
W4LT (D): 5 lights (Crossed Diplopia) XT
NB: The clinician will be unable to indicate which eye is the deviating eye based on these results alone. The results should be interpreted with other clinical findings in order to produce a final diagnosis.
Hypotropia or hypertropia
In cases of vertical deviations, patients will report that:
They see 5 lights: 2 red and 3 green
The lights are vertically displaced in relation to one another
The green lights (left eye) are on top of the red lights (right eye),
which is interpreted as: R HT or L HypoT
The red lights (right eye) are on top of the green lights (left eye),
which is interpreted as: R HypoT or L HT
This is recorded as: W4LT (D): 5 lights (vertical diplopia)
The clinician can relate the position of the lights directly back to the deviation and height of the eye, i.e. the higher lights belong to the lower eye, and the lower lights belong to the higher eye.
NB: If the lights are not situated directly above one another, but are also separated horizontally, it is normally indicative of a mixed deviation where there is a horizontal, as well as vertical strabismus present.
Suppression
In cases of manifest strabismus, it is not always expected that the patient will experience diplopia.
Suppression is indicated when the patient reports that:
They see only the 3 Green lights from the Left eye
Which is interpreted as R Suppression
They see only the 2 Red lights from the Right eye
Which is interpreted as L Suppression
They see 2 Red lights OR 3 Green lights
All 5 lights are never present at the same time, but the patient is switching between the two responses.
This result is interpreted as Alternating Suppression
This can be recorded as:
W4LT (D): 3 Lights (R Supp.)
W4LT (D): 2 Lights (L Supp.)
W4LT (D): 2 or 3 Lights (Alt. Supp.)
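For illustration only, the interpretations listed above can be summarised in a few lines of code. The sketch below is a hypothetical helper (the function name and inputs are invented, and it assumes the usual convention of the red filter over the right eye at distance); it is not part of any clinical protocol.

def interpret_w4lt(red, green, red_side=None, green_higher=None, alternating=False):
    """Map a patient's distance W4LT report to a recorded result, assuming the
    red filter is over the right eye.  `red`/`green` are the numbers of red and
    green lights reported; `red_side` is 'left' or 'right' when five lights are
    seen side by side; `green_higher` is True/False when they are vertically
    displaced; `alternating` flags switching between the two responses."""
    if alternating:
        return "W4LT (D): 2 or 3 lights (alternating suppression)"
    if red == 2 and green == 0:
        return "W4LT (D): 2 lights (L suppression)"
    if red == 0 and green == 3:
        return "W4LT (D): 3 lights (R suppression)"
    if red == 1 and green == 2:  # four lights: one red, two green, one mixed
        return "W4LT (D): 4 lights (BSV; ARC if cover test shows a manifest deviation)"
    if red == 2 and green == 3:  # five lights: diplopia
        if green_higher is True:
            return "W4LT (D): 5 lights (vertical diplopia); R HT or L hypotropia"
        if green_higher is False:
            return "W4LT (D): 5 lights (vertical diplopia); R hypotropia or L HT"
        if red_side == "right":
            return "W4LT (D): 5 lights (uncrossed diplopia) ET"
        if red_side == "left":
            return "W4LT (D): 5 lights (crossed diplopia) XT"
    return "Response does not match a standard pattern; re-question the patient"

print(interpret_w4lt(red=2, green=3, red_side="left"))  # crossed diplopia -> XT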
Advantages and disadvantages
Advantages
Quick and simple to perform in the clinic: the test is easy to set up and the red-green goggles are simply placed over the eyes. There is no turning of lenses, as in the Bagolini Striated Glasses Test (another test of binocular function), which makes interpretation of the results less complicated
There are no large glasses frames such as in the Bagolini striated glasses test so the goggles are minimally obstructive to the patient's vision
Refractive correction can be worn under the goggles
Good starting point when investigating the nature of diplopia i.e. to find manifest, intermittent, crossed or uncrossed diplopia
It is less dissociative than a cover test
Can be used to determine if a patient will demonstrate binocular single vision with corrective prism or head posture
Relatively easy to record and interpret the results
Disadvantages
Subjective in nature and relies on patient responses
The patient must have fusion and stereopsis to get accurate results
It is a highly dissociative test, so responses are less relevant to what the patient sees in their normal daily environment, because:
A. Lights need to be off or dimmed in order to see the dots / lights
B. There is no common colour to fuse
C. The dark filters in the goggles are unlike natural viewing conditions
People with red-green colour blindness cannot perform the test accurately, as the colours used on the test are red and green
The test results are only useful in combination with other testing and results and not on their own
If the test is performed twice, for example at near and at distance, the patient (especially a child) may remember their previous answer and simply give the same answer as in the last test, providing inaccurate results
See also
Eye examination
Diplopia
Strabismus
References
Eskridge, JB, Amos, JF, Bartlett, JD. Clinical procedures in Optometry. Lippincott Co. New York 1991.
Carlson, NB, et al. Clinical Procedures for Ocular Examination. Second Ed. McGraw-Hill. New York 1996.
Madge, SN, Kersey, JW, Hawker, MJ, Lamont, M. Clinical Techniques in Ophthalmology. Churchill Livingstone. London 2006.
Ansons, A. & Davis, H. (2008). Diagnosis and Management of Ocular Motility Disorders, Third Edition. [Wiley Online Library]. DOI: 10.1002/9780470698839
Pratt-Johnson, J, Tillson, G. Management of Strabismus and Amblyopia, Second Edition. Thieme. New York 2001.
Von Noorden, GK, Campos, EC. Binocular Vision and Ocular Motility: Theory and Management of Strabismus, 6th Edition, p. 230.
Ansons, A, Davis, H. Diagnosis and Management of Ocular Motility Disorders, Fourth Edition. John Wiley & Sons. West Sussex, 2014.
Pratt-Johnson, J, Tillson G, Management of Strabismus and Amblyopia: A Practical Guide. Thieme Medical Publishers, 2006.
Mitchell, P. R., Parks, M. M. (2006) Sensory Tests and Treatment of Binocular Vision Adaptations. Retrieved from http://www.eyecalcs.com/DWAN/pages/v1/v1c009.html
American Academy of Ophthalmology (2014). Worth's 4-Dot Test Retrieved http://one.aao.org/bcscsnippetdetail.aspx?id=8200e4a2-f7ee-47f4-b8b7-985b30b52f67
Diagnostic ophthalmology
Medical equipment
Optometry | Worth 4 dot test | [
"Biology"
] | 3,121 | [
"Medical equipment",
"Medical technology"
] |
4,083,089 | https://en.wikipedia.org/wiki/Academic%20genealogy | An academic genealogy (or scientific genealogy) organizes a family tree of scientists and scholars according to mentoring relationships, often in the form of dissertation supervision relationships, and not according to genetic relationships as in conventional genealogy. Since the term academic genealogy has now developed this specific meaning, its additional use to describe a more academic approach to conventional genealogy would be ambiguous, so the description scholarly genealogy is now generally used in the latter context.
Overview
The academic lineage or academic ancestry of someone is a chain of professors who have served as academic mentors or thesis advisors of each other, ending with the person in question. Many genealogical terms are often recast in terms of academic lineages, so one may speak of academic descendants, children, siblings, etc. One method of developing an academic genealogy is to organize individuals by prioritizing their degree of relationship to a mentor/advisor as follows: (1). doctoral students, (2). post-doctoral researchers, (3). master's students and (4). current students, including undergraduate researchers.
Through the 19th century, particularly for graduates in sciences such as chemistry, it was common to have completed a degree in medicine or pharmacy before continuing with post-graduate or post-doctoral studies. Until the early 20th century, attaining professorial status or mentoring graduate students did not necessarily require a doctorate or graduate degree. For instance, the University of Cambridge did not require a formal doctoral thesis until 1919, and academic genealogies that include earlier Cambridge students tend to substitute an equivalent mentor. Academic genealogies are particularly easy to research in the case of Spain's doctoral degrees, because until 1954 only Complutense University had the power to grant doctorates. This means that all holders of a doctorate in Spain can trace their academic lineage back to a doctoral supervisor who was a member of Complutense's Faculty.
Websites such as the Mathematics Genealogy Project or the Chemical Genealogy document academic lineages for specific subject areas, while some other sites, such as Neurotree and Academic Family Tree aim to provide a complete academic genealogy across all fields of academia.
Influence
Academic genealogy may influence research results in areas of active research. Hirshman et al. examined a controversial medical question, the value of maximal surgery for high grade glioma, and demonstrated that a physician's medical academic genealogy can affect his or her findings and approaches to treatment.
References
External links
The Academic Family Tree: A project combining academic genealogies of 38 (as of August 2015) academic disciplines
Neurotree: The neuroscience family tree
Linguistree: The linguistics family tree
Mathematics genealogy search (includes much of computer science and physics)
The Astronomy Genealogy Project
Chemical genealogy
Scientific genealogy master list (two sections: Scientists Associated with Concepts in Chemistry & Physics; Scientists Associated with Discovering the Elements)
How to trace your scientific genealogy
Philosophy Family Tree
Automatic doctoral advisor genealogy diagram using Wikipedia by Nghia Ho
Genealogy
Genealogy
History of science | Academic genealogy | [
"Technology",
"Biology"
] | 589 | [
"Phylogenetics",
"History of science",
"Genealogy",
"History of science and technology"
] |
4,083,646 | https://en.wikipedia.org/wiki/Electrofusion | Electrofusion is a method of joining MDPE, HDPE and other plastic pipes using special fittings that have built-in electric heating elements which are used to weld the joint together.
The pipes to be joined are cleaned, inserted into the electrofusion fitting and held in position with alignment clamps; a voltage (typically 40 V) is then applied for a fixed time that depends on the fitting in use. The built-in heater coils then melt the inside of the fitting and the outside of the pipe wall, which weld together producing a very strong homogeneous joint. The assembly is then left to cool for a specified time.
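By way of illustration, the parameters that govern such a joint (fusion voltage, fitting-specific fusion time, and cooling time) are commonly recorded for traceability. The sketch below is a hypothetical record-keeping helper with invented example values; it does not represent any manufacturer's control unit.

from dataclasses import dataclass

@dataclass
class ElectrofusionWeld:
    fitting_id: str
    fusion_voltage_v: float  # typically around 40 V
    fusion_time_s: int       # fixed time specified for the fitting in use
    cooling_time_s: int      # the assembly must not be disturbed while cooling

    def summary(self):
        return (f"Fitting {self.fitting_id}: fuse at {self.fusion_voltage_v} V "
                f"for {self.fusion_time_s} s, then cool for {self.cooling_time_s} s")

# Hypothetical example values for a single joint
weld = ElectrofusionWeld("coupler-110mm-001", 40.0, 80, 600)
print(weld.summary())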
Electrofusion welding is beneficial because it does not require the operator to use dangerous or sophisticated equipment. After some preparation, the electrofusion welder will guide the operator through the steps to take. Welding heat and time depend on the type and size of the fitting. Not all electrofusion fittings are created equal: precise positioning of the energising coils of wire in each fitting ensures uniform melting for a strong joint and minimises welding and cooling time.
The operator must be qualified according to local and national laws. In Australia, an electrofusion course can be completed within 8 hours. Electrofusion welding training focuses on the importance of accurately fusing EF fittings. Training in both manual and automatic methods of calculating electrofusion time gives operators the skills they need in the field. There is much to learn about the importance of preparation, timing, pressure, temperature, cool-down time, handling, etc.
Training and certification are very important in this field of welding, as the product can become dangerous under certain circumstances. There have been cases of major harm and death, including when molten polyethylene spurts out of the edge of a misaligned weld, causing skin burns. In another case, a tapping saddle was incorrectly installed on a gas line, causing the death of two welders in the trench from gas inhalation. There are many critical aspects of electrofusion welding that can cause weld failures, most of which can be greatly reduced by using welding clamps and correct scraping equipment.
To keep their qualification current, a trained operator can get their fitting tested, which involves cutting open the fitting and examining the integrity of the weld.
References
Piping
Plumbing | Electrofusion | [
"Chemistry",
"Engineering"
] | 470 | [
"Building engineering",
"Chemical engineering",
"Plumbing",
"Construction",
"Mechanical engineering",
"Piping"
] |
4,083,754 | https://en.wikipedia.org/wiki/Arolsen%20Archives | The Arolsen Archives – International Center on Nazi Persecution formerly the International Tracing Service (ITS), in German Internationaler Suchdienst, in French Service International de Recherches in Bad Arolsen, Germany, is an internationally governed centre for documentation, information and research on Nazi persecution, forced labour and the Holocaust in Nazi Germany and its occupied regions. The archive contains about 30 million documents from concentration camps, details of forced labour, and files on displaced persons. ITS preserves the original documents and clarifies the fate of those persecuted by the Nazis. The archives have been accessible to researchers since 2007. In May 2019 the Center uploaded around 13 million documents and made it available online to the public. The archives are currently being digitised and transcribed through the crowdsourcing platform Zooniverse. As of September 2022, approximately 46% of the archives have been transcribed.
History
In 1943, the international section of the British Red Cross was asked by the Headquarters of the Allied Forces to set up a registration and tracing service for missing people. The organization was formalized under the Supreme Headquarters Allied Expeditionary Forces and named the Central Tracing Bureau on February 15, 1944. After the war the bureau was moved from London to Versailles, then to Frankfurt am Main, and finally to Bad Arolsen, which was considered a central location among the areas of Allied occupation and had an intact infrastructure unaffected by war.
On July 1, 1947, the International Refugee Organization took over administration of the bureau, and on January 1, 1948, the name was changed to International Tracing Service. In April 1951, administrative responsibilities for the service were placed under the Allied High Commission for Germany. When the status of occupation of Germany was repealed in 1954, the ICRC took over the administration of the ITS. The Bonn Agreement of 1955 (which stated that no data that could harm the former Nazi victims or their families should be published) and its amendment protocols dating from 2006 provided the legal foundation of the International Tracing Service. The daily operations were managed by a director appointed by the ICRC, who had to be a Swiss citizen. After some discussion, in 1990 the Federal Republic of Germany renewed its continuing commitment to funding the operations of the ITS. The documents in the ITS archives were opened to public access on November 28, 2007.
Tracing missing persons, clarifying people's fates, providing family members with information, also for compensation and pension matters, have been the principal tasks of the ITS since its beginning. Since the opening of the archives, new tasks such as research and education and the archival description of the documents have gained more importance relative to the tasks of tracing and clarifying fates. Since these new activities are not part of its humanitarian mission, the ICRC withdrew from the management of the ITS in December 2012. The Bonn Agreement was replaced on December 9, 2011, when the eleven member states of the International Commission signed two new agreements in Berlin on the future tasks and management of the ITS.
ITS was founded as an organization dedicated to finding missing persons, typically lost to family and friends as a result of war, persecution or forced labour during World War II. The service operates under the legal authority of the Berlin Agreements from December 2011 and is funded by the government of Germany. The German Federal Archives are the institutional partner for the ITS since January 2013.
Organization
The organization is governed by an International Commission with representatives from Belgium, France, Germany, Greece, Israel, Italy, Luxembourg, Netherlands, Poland, United Kingdom, and the United States. The Commission draws up the guidelines for the work to be carried out by the ITS and monitors these in the interests of the former victims of persecution.
The director of the ITS is appointed by the International Commission and is accountable directly to the commission. Since January 2016, Floriane Azoulay is the director. There are about 220 staff employed by the ITS. The institution is funded by the German Federal Foreign Office.
Application for information
Application forms
On November 28, 2007, the ITS archives were made broadly available to the general public. The ITS records may be consulted in person, or by mail, telephone, fax or e-mail; addresses and contact numbers are available on the ITS website. Inquiries can be submitted to the ITS using the online form on the organization's website. The archives are also open for research.
New obligations
After the end of the Second World War the main task of the ITS was initially to conduct a search for the survivors of Nazi persecution and their family-members. Today, this accounts for no more than about three percent of its work. However, a large number of new obligations have been taken on over the course of the decades.
These include certification of the forms persecution took, confirmation for pension and compensation payments, allowing victims and their family members to inspect copies of the original documents and enabling the following generations to find out what happened to their forebears.
Answers
More than 70 years after the end of World War II, the ITS receives more than 1,000 inquiries every month from all around the world. Most of them now come from younger generations who are seeking information about the fate of their family members. In 2015, the ITS received around 15,500 requests regarding the fate of 21,909 persons from survivors, family members or researchers.
During the compensation phase of Eastern European forced labourers through the "Remembrance, Responsibility and Future" Foundation between 2000 and 2007, around 950,000 enquiries were sent to the Tracing Service. As a result of this flood of enquiries, the ITS was tremendously over-extended, which created a gigantic backlog and temporarily did considerable damage to the standing of the institution. In particular, enquiries that had no direct bearing on the foundation remained unprocessed.
The archives
Inventory
ITS's total inventory comprises 26,000 linear metres of original documents from the Nazi era and post-war period, 232,710 metres of microfilm and more than 106,870 microfiches. Work is under way to digitize the files, both for purposes of easier search and for preserving the historical record. Since 2015, the digitized material is gradually being published on the archive's Digital Collection Online platform.
The inventory is split up into three main areas: incarceration, forced labour and displaced persons. The variety of documents is enormous. They include list material and individual documents, such as registration cards, transport lists, records of deaths, questionnaires, labour passports, health insurance and social insurance documents. Among the documents are also examples of prominent victims of Nazi persecution like Anne Frank and Elie Wiesel.
In addition to this there are smaller sections associated with the work of a tracing service: the alphabetical-phonetic Central Name Index, the child search archives and the correspondence files. The Central Name Index represents the key to the documents. With 50 million references on the fate of over 17.5 million people, it is based on an alphabetic-phonetic filing system that was developed especially for ITS.
Finding aids
Making the inventory researchable for all historical questions is an urgent responsibility now that the archives have been opened. Until now, the arrangement of the documents, which were collected over a period of six decades, was governed by the requirements of a tracing service, which brought families together and clarified the fates of individuals. The Central Name Index was the key to the documents, while the documents were arranged according to victim groups.
This principle is no longer sufficient, since historians ask not only for names, but also for topics, events, locations or nationalities. The goal is to compile finding aids that can be accessed and published online and are based on international archival standards. A first series of inventories has been published on the Internet (for the time being in German only). The documents were indexed according to their origin and content. In view of the volume of the documents to be described, this process will take some years.
Copies made available
The International Commission at its May 2007 meeting approved the US Holocaust Memorial Museum's proposal to permit advance distribution of the material, as it is digitized, to the designated repository institutions prior to the completion of the agreement ratification process officially opening the material. In August 2007, the USHMM received the first installment of records and in November 2007, received the Central Name Index. Materials will continue to be received as they are digitized.
One institution is designated for each of the 11 countries to receive a copy of the archive. The following locations have been designated by their respective countries.
United States - United States Holocaust Memorial Museum
Israel - Yad Vashem
Poland - Institute of National Remembrance
Luxembourg - Centre de Documentation et de Recherche sur la Resistance
Belgium - National Archives of Belgium
France - French National Archives (Archives Nationales)
United Kingdom - The Wiener Library for the Study of the Holocaust & Genocide
On May 21, 2019, millions of digitized documents were made available online.
Other specialised archives
Archives on the fate of prisoners of war exist in Geneva at the ICRC's Central Tracing Agency, which deals with such inquiries.
Other archives deal with Germans who went missing during flight and expulsion and with missing German Wehrmacht soldiers. The German Red Cross searches for missing Germans except those who were persecuted by the Nazi regime. The Kirchlicher Suchdienst holds information on the population of the former eastern regions of Germany. The Deutsche Dienststelle (WASt) holds the records of Wehrmacht soldiers killed in action. The German War Graves Commission has an online inventory of war graves.
Controversy
The ITS had been criticized before 2008 for refusing to open its archives to the public. The ITS, backed by the German government, had cited German archival law to support its position. The laws mandate a 100-year delay before records are released, in order to protect privacy. However, critics argued that the ITS as such is not subject to German law. One accusation raised against Germany and the ITS by critics was that the archive was kept closed out of a desire to repress information about the Holocaust.
Critics cited the fact that all eleven governments sitting on the International Commission of the ITS endorsed the Stockholm International Forum Declaration of January 2000, which included a call for the opening of various Holocaust-era archives. However, since the Declaration was made, there had been little practical change in the operations of the ITS, despite repeated negotiations between the ITS, ICRC, and various Jewish and Holocaust survivor advocacy groups. A critical press release from the United States Holocaust Memorial Museum written in March 2006 charged that "In practice, however, the ITS and the ICRC have consistently refused to cooperate with the International Commission board and have kept the archive closed." In early 2006, several newspaper articles also raised questions about the quality of the ITS' management and the underlying reasons for the existing backlog.
In May 2006, the International Commission for the ITS decided to open the archives and documents for researchers use, and to transfer, upon request, one copy of the ITS archives and documents to each one of its member states. This took place once all 11 countries ratified the new ITS Protocol. On November 28, 2007, it was announced that Greece, as the last of the member countries, filed its ratification papers with the German Foreign Ministry. It was then announced that the documents in the archive were open to public access.
Covert role in Cold War
Associated Press (AP) reporters who were given access to ITS files found a carton of documents related to an escapee program run by the Truman Administration. The AP reporters used these files and declassified US documents to describe how the United States asked the ITS to run background checks on escapees from Eastern Europe. The Central Intelligence Agency reviewed their histories and then recruited some of them to return to their countries of origin, to spy for the United States. The program did not return very much useful intelligence, because these recruits, motivated to impress their handlers, supplied information that was not reliable, and because by 1952, the Soviets had largely exposed these efforts. Many recruits disappeared, presumed dead.
School projects
A group of students participated from 2013 to 2014 in the project "DENK MAL – Erinnerung im öffentlichen Raum" at the school. The students, including the author Tariq Abo Gamra, erected a plaque at the entrance of the school in remembrance of the murdered and persecuted Jewish students in Nazi Germany. A commemoration ceremony took place on November 10, 2014. The project received congratulatory letters from German Chancellor Angela Merkel and German President Joachim Gauck. The project was supported by the International Tracing Service.
References
External links
Digital Collection Online Platform
Erik Kirschbaum, Archive Holdings Online, Los Angeles Times, accessed 21 May 2019.
UC San Diego, Holocaust Living History Collection: Archiving Atrocity: The International Tracing Service and Holocaust Research – with Suzanne Brown-Fleming
Archives in Germany
Aftermath of World War II in Germany
Genealogy
International Red Cross and Red Crescent Movement
State archives
Jewish German history
Memory of the World Register
Jews and Judaism in Hesse | Arolsen Archives | [
"Biology"
] | 2,602 | [
"Phylogenetics",
"Genealogy"
] |
4,083,808 | https://en.wikipedia.org/wiki/Avenanthramide | Avenanthramides (anthranilic acid amides, formerly called "avenalumins") are a group of phenolic alkaloids found mainly in oats (Avena sativa), but also present in white cabbage butterfly eggs (Pieris brassicae and P. rapae), and in fungus-infected carnation (Dianthus caryophyllus). A number of studies demonstrate that these natural products have anti-inflammatory, antioxidant, anti-itch, anti-irritant, and antiatherogenic activities. Oat kernel extracts with standardized levels of avenanthramides are used for skin, hair, baby, and sun care products. The name avenanthramides was coined by Collins when he reported the presence of these compounds in oat kernels. It was later found that three avenanthramides were the open-ring amides of avenalumins I, II, and III, which were previously reported as oat phytoalexins by Mayama and co-workers.
History
Oats have been used for personal care purposes since antiquity. Indeed, wild oats (Avena sativa) were used in skin care in Egypt and the Arabian Peninsula as early as 2000 BC. Oat baths were a common treatment of insomnia, anxiety, and skin diseases such as eczema and burns. In Roman times, their use as a medication for dermatological conditions was reported by Pliny, Columella, and Theophrastus. In the 19th century, oatmeal baths were often used to treat many cutaneous conditions, especially pruritic inflammatory eruptions. In the 1930s, the literature provided further evidence about the cleansing action of oats along with their ability to relieve itching and protect skin.
Colloidal oatmeal
In 2003, colloidal oatmeal was officially approved as a skin protectant by the FDA. However, little thought had been given to the active ingredient in oats responsible for the anti-inflammatory effect until more attention was paid to avenanthramides, which were first isolated and characterized in the 1980s by Collins.
Since then, many congeners have been characterized and purified, and it is known that avenanthramides have antioxidant, anti-inflammatory, and anti-atherosclerotic properties, and may be used as a treatment for people with inflammatory, allergic, or cardiovascular diseases. In 1999, studies conducted at Tufts University showed that avenanthramides are bioavailable and remain bioactive in humans after consumption. More recent studies conducted at the University of Minnesota showed that the antioxidant and anti-inflammatory activities can be increased through the consumption of 0.4 to 9.2 mg/day of avenanthramides over eight weeks.
The International Nomenclature of Cosmetic Ingredients (INCI) originally referred to an oat extract with a standardized level of avenanthramides as "Avena sativa kernel extract," but recently they have also accepted the INCI name "avenanthramides" to describe an extract containing 80% of these oat phenolic alkaloids.
Function in Avena sativa
A. sativa produces avenanthramides as defensive phytoalexins against infiltration by fungal plant pathogens. They were discovered as defensive chemicals especially concentrated in lesions of Puccinia coronata var. avenae f. sp. avenae (and at that time named "avenalumins").
Medical and personal care uses
Anti-inflammatory and anti-itch activity
Studies by Sur (2008) provide evidence that avenanthramides significantly reduce the inflammatory response. Inflammation is a complex, self-protective reaction that occurs in the body against foreign substances, cell damage, infections, and pathogens. Inflammatory responses are controlled by a group of signalling molecules called cytokines that are produced by inflammatory cells. Furthermore, the expression of cytokines is regulated through inhibition of nuclear transcription factor kappa B (NF-κB). Many studies have demonstrated that avenanthramides can reduce the production of pro-inflammatory cytokines such as IL-6, IL-8, and MCP-1 by inhibiting the NF-κB activation that is responsible for activating the genes of the inflammatory response. Thus, these oat polyphenols mediate the decrease of inflammation by inhibiting cytokine release. In addition, it was found that avenanthramides inhibit neurogenic inflammation, which is defined as inflammation triggered by the nervous system that causes vasodilation, edema, warmth, and hypersensitivity. Also, avenanthramides significantly reduce the itching response, with an efficacy comparable to the anti-itch effect produced by hydrocortisone.
Redness reduction
Avenanthramides have effective antihistaminic activity; they significantly reduce itch and redness compared with untreated areas.
Suggested mechanism of action
According to Sur (2008), the anti-inflammatory effect of the avenanthramides is due to the inhibition of the NF-κB activation in NF-κB dependent cytokine. Nuclear factor-kappa β (NF-κB) is responsible for regulating the transcription of DNA and participates in the activation of genes related to inflammatory and immune responses. Consequently, suppressing the NF-κB limits the proliferation of cancer cells and reduces the level of inflammation. Avenanthramides are able to inhibit the release of inflammatory cytokines that are present in pruritic skin diseases that cause itchiness. In addition, its anti-inflammatory activity may prevent the vicious itch-scratch cycle and reduce the scratching-induced secondary inflammation that often occur in atopic dermatitis and eczema, preventing the skin from disrupting its barrier. Avenanthramides also have a chemical structure similar to the drug tranilast, which has anti-histaminic action. The anti-itch activity of avenanthramides may be associated with the inhibition of histamine response. Taken together, these results show the effect of avenanthramides as powerful anti-inflammatory agents and their importance in dermatologic applications.
Antioxidant activity
Avenanthramides are known to have potent antioxidant activity, acting primarily by donating a hydrogen atom to a radical. An antioxidant is “any substance that, when present at low concentrations compared to those of an oxidisable substrate, significantly delays or prevents oxidation of that substrate” (Halliwell, 1990). These phytochemicals are able to combat the oxidative stress present in the body that is responsible for causing cancer and cardiovascular disease. Among the avenanthramides, there are different antioxidant capacities, where C has the highest capacity, followed by B and A.
Dietary supplement
Avenanthramides extracted from oats show potent antioxidant properties in vitro and in vivo, and according to studies by Dimberg (1992), their antioxidant activity is many times greater than that of other antioxidants such as caffeic acid and vanillin. Aven-C is one of the most significant avenanthramides present in oats, and it is responsible for oats' antioxidant activity. The effects of avenanthramide-enriched oat extract have been investigated in animals: a diet of 20 mg avenanthramide per kilogram body weight in rats has been shown to increase superoxide dismutase (SOD) activity in skeletal muscle, liver, and kidneys. Also, a diet based on avenanthramides enhances glutathione peroxidase activity in heart and skeletal muscles, protecting the organism from oxidative damage.
Nomenclature
Avenanthramides consist of conjugates of one of three phenylpropanoids (p-coumaric, ferulic, or caffeic acid) and anthranilic acid (or a hydroxylated and/or methoxylated derivative of anthranilic acid). Collins and Dimberg have used different systems of nomenclature to describe the avenanthramides in their publications. Collins assigned a system that classifies avenanthramides using alphabetic descriptors, while Dimberg assigned upper case letters to the anthranilate derivative and lower case to the accompanying phenylpropanoid, such as “c” for caffeic acid, “f” for ferulic acid, or “p” for p-coumaric acid. Later, Dimberg's system was modified to use a numeric descriptor for the anthranilic acid. The following avenanthramides are most abundant in oats: avenanthramide A (also called 2p, AF-1 or Bp), avenanthramide B (also called 2f, AF-2 or Bf), avenanthramide C (also called 2c, AF-6 or Bc), avenanthramide O (also called 2pd), avenanthramide P (also called 2fd), and avenanthramide Q (also called 2cd).
Biosynthesis
There is evidence that the amount of avenanthramides found in the grains is related to genotype, environment, crop year and location, and tissue (Matsukawa et al., 2000). The environmental factors are not clearly known, but it is believed that lower levels of avenanthramides are produced in oats grown in a dry environment, which disfavors crown rust, a fungus that has been shown to stimulate avenanthramide production in oat grains.
Chemical stability
pH
Not all avenanthramides are sensitive to pH and temperature. This was well illustrated in a study conducted on avenanthramides A, B and C. In this study it was found that the concentration of avenanthramide A (2p) was essentially unchanged in sodium phosphate buffer after three hours at either room temperature or 95 °C. Avenanthramide B (2f) appeared to be more sensitive to the higher temperature at pH 7 and 12. Avenanthramide C (2c) underwent chemical reorganization at pH 12 at both temperatures and diminished by more than 85% at 95 °C, even at pH 7 (Dimberg et al., 2001).
UV
Avenanthramides are also affected by ultra-violet (UV) light. Dimberg found that the three avenanthramides tested (A, B, and C) remained in the trans conformation after 18 hours of exposure to UV light at 254 nm. On the other hand, Collins reported that the avenanthramides isomerize upon exposure to daylight or UV light.
Synthetic avenanthramides
Avenanthramides can be artificially synthesized. Avenanthramides A, B, D, and E were synthesized by Collins (1989), using chromatography methods, and adapting Bain and Smalley's procedure (1968). All four synthetic substances were identical to the ones extracted from oats.
References
Antibiotics
Antipruritics
Phytoalexins
Oats | Avenanthramide | [
"Chemistry",
"Biology"
] | 2,356 | [
"Chemical ecology",
"Biotechnology products",
"Antibiotics",
"Phytoalexins",
"Biocides"
] |
4,083,815 | https://en.wikipedia.org/wiki/Aleksandr%20Korkin | Aleksandr Nikolayevich Korkin (; – ) was a Russian mathematician. He made contribution to the development of partial differential equations, and was second only to Chebyshev among the founders of the Saint Petersburg Mathematical School. Among others, his students included Yegor Ivanovich Zolotarev.
Some publications
References
External links
Korkin's Biography , the St. Petersburg University Pages (in Russian, but with an image)
1837 births
1908 deaths
People from Vologda Oblast
People from Vologda Governorate
19th-century mathematicians from the Russian Empire
Mathematical analysts | Aleksandr Korkin | [
"Mathematics"
] | 113 | [
"Mathematical analysis",
"Mathematical analysts"
] |
4,083,909 | https://en.wikipedia.org/wiki/John%20Herivel | John William Jamieson Herivel (29 August 1918 – 18 January 2011) was a British science historian and World War II codebreaker at Bletchley Park.
As a codebreaker concerned with Cryptanalysis of the Enigma, Herivel is remembered chiefly for the discovery of what was soon dubbed the Herivel tip or Herivelismus. Herivelismus consisted of the idea, the Herivel tip and the method of establishing whether it applied using the Herivel square. It was based on Herivel's insight into the habits of German operators of the Enigma cipher machine that allowed Bletchley Park to easily deduce part of the daily key. For a brief but critical period after May 1940, the Herivel tip in conjunction with "cillies" (another class of operator error) was the main technique used to solve Enigma.
After the war, Herivel became an academic, studying the history and philosophy of science at Queen's University Belfast, particularly Isaac Newton, Joseph Fourier, Christiaan Huygens. In 1956, he took a brief leave of absence from Queen's to work as a scholar at the Dublin Institute for Advanced Studies. In retirement, he wrote an autobiographical account of his work at Bletchley Park entitled Herivelismus and the German Military Enigma.
Recruitment to Bletchley Park
John Herivel was born in Belfast, and attended Methodist College Belfast from 1924 to 1936. In 1937 he was awarded a Kitchener Scholarship to study mathematics at Sidney Sussex College, Cambridge, where his supervisor was Gordon Welchman. Welchman recruited Herivel to the Government Code and Cypher School (GC&CS) at Bletchley Park. Welchman worked with Alan Turing in the newly formed Hut 6 section created to solve Army and Air Force Enigma. Herivel, then aged 21, arrived at Bletchley on 29 January 1940, and was briefed on Enigma by Alan Turing and Tony Kendrick.
Enigma
At the time that Herivel started work at Bletchley Park, Hut 6 was having only limited success with Enigma-enciphered messages, mostly from the Luftwaffe Enigma network known as "Red". He was working alongside David Rees, another Cambridge mathematician recruited by Welchman, in nearby Elmers School, testing candidate solutions and working out plugboard settings. The process was slow; however, Herivel was determined to find a method to improve their attack, and he would spend his evenings trying to think up ways to do so.
Intercepted Morse coded messages had been enciphered by the Germans' Enigma, an electro-mechanical rotor cipher machine that implemented a polyalphabetic cipher. The main model in use in 1940 had three rotors that set an electrical pathway from the keyboard to the lampboard. Pressing a key caused one lamp to light and the right-most rotor to advance by one letter position. This changed the electrical pathway so that pressing the same key again caused a different letter to light up. At one of the 26 positions, a notch on the right-most rotor engaged with the middle rotor so that the two rotors advanced together, and similarly the middle rotor would engage with the left-most rotor, giving a very long period before the sequence repeated (26 × 26 × 26 = 17,576). The ring on the rotor that contained the notch, and so caused the next rotor to advance, could be set to any one of the 26 positions. The three rotors were selected from a set of five, giving 60 different ways of mounting rotors in the machine. However, because the Germans laid down the rule that no rotor should be in the same position on successive days, if the previous day's rotors and their positions were known, this number was reduced to 32.
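The numbers quoted above are easy to verify; the short calculation below is purely illustrative and is not anything used at Bletchley Park.

# Positions of a three-rotor machine, rotor orders from five rotors, and the
# reduction to 32 orders when no rotor may occupy the same slot as the day before.
from itertools import permutations

print(26 ** 3)                         # 17576 rotor positions

rotors = "ABCDE"                       # five rotors, three chosen in order
orders = list(permutations(rotors, 3))
print(len(orders))                     # 60 possible rotor orders

yesterday = ("A", "B", "C")            # any fixed previous-day order
allowed = [o for o in orders
           if all(o[i] != yesterday[i] for i in range(3))]
print(len(allowed))                    # 32 orders remain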
The Enigma machine worked reciprocally so that an identical machine with identical settings would, if fed the enciphered letters, show the deciphered letters on the lampboard. Hut 6 had Enigma replica machines that were logically identical to the machines that the Germans were using. To decipher the intercepted messages required that the selection of rotors, the ring settings and the plugboard connections were known. At this time, the first three letters of the prelude to the message were used as an indicator to tell the receiving operator the letters that should appear in the windows for this particular message.
Herivel tip
Herivel had an insight in February 1940 that some lazy German code clerks might give away the Enigma's ring settings (Ringstellung) in their first message of the day. If there were several lazy clerks, the Grundstellungen (ground settings) of the first messages would not be random but would cluster around the Ringstellung. The insight became known as the Herivel tip. It was not needed at the time because the Luftwaffe was doubly enciphering its message keys, so techniques such as Zygalski sheets could be used. In May 1940, the Germans stopped doubly enciphering the keys. With other methods becoming ineffective, Bletchley Park started using the Herivel tip to break Luftwaffe traffic. It continued to be the main method until the bombe was delivered in August 1940.
Enigma enciphering procedure
The rotors and the positioning of the ring containing the notch were changed daily. The settings were defined in a codebook that was common to all operators on that network. At the start of each day, before any messages were sent or received, Enigma operators implemented the day's rotor selection and ring settings. Having selected the three rotors, they adjusted the ring settings. That could be done before the rotors were mounted on their axle or after they had been inserted into the machine. It was possible to adjust the ring settings of the loaded rotors by moving the spring-loaded retaining pin to the right and turning the rotor to display the specified letter. Herivel thought it likely that at least some of the operators would adjust the rings after they had mounted the rotors in the machine. Having set the alphabet rings and closed the lid, the operator should then have moved the rotors well away from the positions that displayed the three letters of the ring setting in the windows, but some operators did not.
Herivel's great insight came to him one evening in February 1940 while he was relaxing in front of his landlady's fire. Stressed or lazy operators who had set the rings when the rotors were in the machine might then have left the ring setting at or near the top and used those three letters for the first message of the day.
For each transmitted message, the sending operator would follow a standard procedure. From September 1938, he would choose an initial rotor position, send it in clear, and use it to encipher the message key, which followed. If the ground setting (Grundstellung) was GKX for example, he would then use Enigma with the rotors set to GKX to encrypt the message setting, which he might choose to be RTQ, and which might encrypt to LLP. (Before May 1940, the encrypted message setting was repeated, but that makes no difference to Herivel's insight.) The operator would then turn his rotors to RTQ and encrypt the actual message. Thus, the preamble to the message would be the unencrypted ground setting (GKX), followed by the encrypted message setting (LLP). A receiving Enigma operator could use this information to recover the message setting and then decrypt the message.
The ground setting (GKX in the above example) should have been chosen at random, but Herivel reasoned that if operators were lazy, in a hurry or otherwise under pressure, they might simply use whatever rotor setting was currently showing on the machine. If that was the first message of the day and the operator had set the ring settings with the rotors already inside the machine, the rotor position currently showing on the machine could well be the ring setting itself or be very close to it. (If that situation occurred in the above example, GKX would be the ring setting or close to it).
Polish cryptographers used the idea at PC Bruno during the Phoney War.
Herivel square
The day after his insight, Herivel's colleagues agreed that his idea was a possible way into Enigma. Hut 6 began looking for the effect predicted by the Herivel tip and arranged for the first messages of the day from each transmitting station to be sent to them early. They plotted the indicators in a grid termed a "Herivel square", an example of which is shown below. The rows and columns of the grid are labelled with the alphabet. The first indicator of the first message of the day received from each station on the network was entered into the grid. This was done by finding the column corresponding to the first letter, the row corresponding to the second letter, and entering the third letter into the cell where the row and column intersected. For example, GKX would be recorded by entering an X in the cell in column G and row K.
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
----------------------------------------------------------
Z| |Z
Y| S |Y
X| |X
W| L |W
V| |V
U| E |U
T| |T
S| |S
R| K |R
Q| S |Q
P| |P
O| |O
N| N |N
M| X |M
L| W T |L
K| X Y |K
J| W X |J
I| |I
H| Q |H
G| |G
F| |F
E| A |E
D| |D
C| V |C
B| J |B
A| P |A
----------------------------------------------------------
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
The Herivel tip suggested that there would be a cluster of entries close together, such as the cluster around GKX in the above example. That would narrow the options for the ring settings down from 17,576 to a small set of possibilities, perhaps 6 to 30, which could be tested individually.
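A modern reader can reproduce the idea in a few lines. The sketch below uses hypothetical indicator strings (chosen to resemble the example grid) and an arbitrary clustering radius, which was not Hut 6's actual criterion; it plots first-of-day indicators on a Herivel square and flags those that lie close together in all three positions.

from itertools import combinations

# Hypothetical first-of-day indicators for one network
indicators = ["GKX", "HJW", "GJX", "FKY", "GLT", "AEP", "QNX", "TCV"]

# Herivel square: column = 1st letter, row = 2nd letter, cell = 3rd letter
square = {(ind[0], ind[1]): ind[2] for ind in indicators}
for (col, row), third in sorted(square.items()):
    print(f"column {col}, row {row}: {third}")

def close(a, b, radius=2):
    """True if two indicators lie within `radius` letters of each other in
    every position, allowing wrap-around from Z back to A."""
    def dist(x, y):
        d = abs(ord(x) - ord(y))
        return min(d, 26 - d)
    return all(dist(x, y) <= radius for x, y in zip(a, b))

cluster = set()
for a, b in combinations(indicators, 2):
    if close(a, b):
        cluster.update((a, b))

print(sorted(cluster))  # ['FKY', 'GJX', 'GKX', 'HJW'] -- hints at the ring setting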
The effect predicted by Herivel did not immediately show up in the Enigma traffic, however, and Bletchley Park had to continue to rely on a different technique to get into Enigma: the method of "perforated sheets", which had been passed on by Polish cryptologists. The situation changed on 1 May 1940, when the Germans changed their indicating procedure, rendering the perforated sheet method obsolete. Hut 6 was suddenly unable to decrypt Enigma.
Fortunately for the codebreakers, the pattern predicted by the Herivel tip began to manifest itself soon after on 10 May, when the Germans invaded the Netherlands and Belgium. David Rees spotted a cluster in the indicators, and on 22 May a Luftwaffe message sent on 20 May was decoded, the first since the change in procedure.
Additional key components
Although the Herivel tip provided the Enigma's ring settings, it did not provide other parts of the Enigma key: the rotor order and the plugboard settings. A Luftwaffe key at the time chose from 5 rotors, so there were 60 possible rotor orders. In addition, there might be 8 to 10 plugboard connections, meaning that most of the 26 letters (all but 6 when ten leads were used) were swapped by the plugboard. The codebreakers had to use other methods to find the remaining portions of the Enigma key.
The Herivel tip was used in combination with another class of operator mistake, known as "cillies", to solve the settings and decipher the messages.
The Herivel tip was used for several months until specialised codebreaking machines designed by Alan Turing, the so-called "bombes", were ready for use.
Recognition
Gordon Welchman wrote that the Herivel tip was a vital part of breaking Enigma at Bletchley Park.
Because of the importance of his contribution, Herivel was singled out and introduced to Winston Churchill during a visit to Bletchley Park. He also taught Enigma cryptanalysis to a party of Americans assigned to Hut 6 in an intensive two-week course. Herivel later worked in administration in the "Newmanry", the section responsible for solving German teleprinter ciphers by using machine methods such as the Colossus computers, as assistant to the head of the section, mathematician Max Newman.
In 2005, researchers studying a set of Enigma-encrypted messages from World War II noted the occurrence of clustering, as predicted by the Herivel tip, in messages from August 1941.
After World War II
After the end of the war, Herivel taught mathematics in a school for a year, but he found he could not handle the "rumbustious boys". He then joined Queen's University Belfast, where he became reader in the History and Philosophy of Science. One of the students that he supervised was the actor Simon Callow.
He published books and articles on Isaac Newton, Joseph Fourier and Christiaan Huygens.
In 1978 he retired to Oxford, where he became a Fellow of All Souls College, and continued to publish in his retirement.
He died in Oxford in 2011.
He is survived by his daughter Josephine Herivel.
Notes
References
External links
"Mind of a Codebreaker", companion web site to "Decoding Nazi Secrets", originally broadcast on 9 November 1999. Part one and part two. (Contains similar material on the Herivel Tip to Smith, 1998).
1918 births
2011 deaths
People educated at Methodist College Belfast
Alumni of Sidney Sussex College, Cambridge
Bletchley Park people
Cryptographic attacks
British historians of science
British biographers
Academics of Queen's University Belfast
Fellows of All Souls College, Oxford
Historians from Northern Ireland
Newton scholars
Male non-fiction writers from Northern Ireland
Academics of the Dublin Institute for Advanced Studies
20th-century biographers from Northern Ireland | John Herivel | [
"Technology"
] | 3,103 | [
"Cryptographic attacks",
"Computer security exploits"
] |
4,083,994 | https://en.wikipedia.org/wiki/Tarnhelm | The Tarnhelm is a magic helmet in Richard Wagner's Der Ring des Nibelungen (written 1848–1874; first perf. 1876). It was crafted by Mime at the demand of his brother Alberich. It is used as a cloak of invisibility by Alberich in Das Rheingold. It also allows one to change one's form:
Alberich changes to a dragon and then a toad in Das Rheingold, Scene 3.
Fafner changes to a dragon after the end of Das Rheingold and appears thus in Siegfried Act II. (It is never made clear whether Fafner actually used the Tarnhelm to transform, or simply transformed as many giants and gods did in the myths. There is also no Tarnhelm present in the original Andvari myth from Reginsmál in the Poetic Edda from which Wagner drew inspiration for this scene.)
Siegfried changes to Gunther's form in Götterdämmerung Act I, Scene 3.
Finally, it allows one to travel long distances instantly, as Siegfried does in Götterdämmerung, Act II.
The stage directions in Das Rheingold and Siegfried describe it as a golden chain-mail helmet which covers the wearer's face.
In politics
Nacht und Nebel ("Night and Fog") was a directive issued by Adolf Hitler on 7 December 1941, originally intended to remove all political activists and resistance "helpers", "anyone endangering German security", throughout Nazi Germany's occupied territories. The name was a direct reference to a magic spell involving the "Tarnhelm" ("stealth helmet") from Wagner's Rheingold.
In popular culture
In The Lord of the Rings, Éowyn adopts the name "Dernhelm" when she masquerades as a man before slaying the Witch-King of Angmar; "Dernhelm" is the Old English equivalent of "Tarnhelm".
See also
Huliðshjálmr (concealing helmet) of Norse dwarves
Fafnir's helmet Aegis
References
Helmets
Mythological clothing
Magic items
Der Ring des Nibelungen | Tarnhelm | [
"Physics"
] | 452 | [
"Magic items",
"Physical objects",
"Matter"
] |
4,084,112 | https://en.wikipedia.org/wiki/Division%20%28horticulture%29 | Division, in horticulture and gardening, is a method of asexual plant propagation, where the plant (usually an herbaceous perennial) is broken up into two or more parts. Each part has an intact root and crown. The technique is of ancient origin, and has long been used to propagate bulbs such as garlic and saffron. Another type of division is though a plant tissue culture. In this method the meristem (a type of plant tissue) is divided.
Overview
Division is one of the three main methods used by gardeners to increase stocks of plants (the other two are seed-sowing and cuttings). Division is usually applied to mature perennial plants, but may also be used for shrubs with suckering roots, such as gaultheria, kerria and sarcococca. Annual and biennial plants do not lend themselves to this procedure, as their lifespan is too short.
Practice
Most perennials should be divided and replanted every few years to keep them healthy. Plants that do not have enough space between them will start to compete for resources. Additionally, plants that are too close together will stay damp longer due to poor air circulation. This can cause the leaves to develop a fungal disease. Most perennials bloom during the fall or during the spring/summer. The best time to divide a perennial is when it is not blooming. Perennials that bloom in the fall should be divided in the spring and perennials that bloom in the spring/summer should be divided in the fall. The ideal day to divide a plant is when it is cool and there is rain in the forecast.
Start by digging a circle around the plant about 4-6 inches from the base. Next, dig underneath the plant and lift it out of the hole. Use a shovel, gardening shears, or knife to physically divide the plant into multiple "divisions". This is also a good time to remove any bare patches or old growth. Each division should have a good number of healthy leaves and roots. If the division is not being replanted immediately, it should be watered and kept in a shady place. The new hole should be the same depth as the original hole. After the hole has been filled in, firmly press down on the soil around the base of the plant. This helps remove air pockets and makes the plant more stable. Plants that are divided in late fall when the ground is freezing should also be mulched. The division will have trouble staying rooted if the ground is freezing and thawing frequently. Continue to water the divisions daily until they have established themselves.
Table of when to divide common perennials
The frequency with which a plant should be divided is only a general guideline. A plant should be divided when it starts producing fewer flowers, has a lot of dead growth in the center (crown), or cannot support its own weight.
See also
Root cutting
Bare root
References
Horticulture
Asexual reproduction | Division (horticulture) | [
"Biology"
] | 597 | [
"Behavior",
"Asexual reproduction",
"Reproduction"
] |
4,084,346 | https://en.wikipedia.org/wiki/ISO/IEC%2011801 | International standard ISO/IEC 11801 Information technology — Generic cabling for customer premises specifies general-purpose telecommunication cabling systems (structured cabling) that are suitable for a wide range of applications (analog and ISDN telephony, various data communication standards, building control systems, factory automation). It is published by ISO/IEC JTC 1/SC 25/WG 3 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It covers both balanced copper cabling and optical fibre cabling.
The standard was designed for use within commercial premises that may consist of either a single building or of multiple buildings on a campus. It was optimized for premises that span up to 3 km, with up to 1 km² of office space and between 50 and 50,000 persons, but can also be applied for installations outside this range.
A major revision was released in November 2017, unifying requirements for commercial, home and industrial networks.
Classes and categories
The standard defines several link/channel classes and cabling categories of twisted-pair copper interconnects, which differ in the maximum frequency for which a certain channel performance is required:
Class A: Up to 100 kHz using Category 1 cable and connectors
Class B: Up to 1 MHz using Category 2 cable and connectors
Class C: Up to 16 MHz using Category 3 cable and connectors
Class D: Up to 100 MHz using Category 5e cable and connectors
Class E: Up to 250 MHz using Category 6 cable and connectors
Class EA: Up to 500 MHz using category 6A cable and connectors (Amendments 1 and 2 to ISO/IEC 11801, 2nd Ed.)
Class F: Up to 600 MHz using Category 7 cable and connectors
Class FA: Up to 1 GHz (1000 MHz) using Category 7A cable and connectors (Amendments 1 and 2 to ISO/IEC 11801, 2nd Ed.)
Class BCT-B: Up to 1 GHz (1000 MHz) using coaxial cabling for BCT applications. (ISO/IEC 11801-1, Edition 1.0 2017-11)
Class I: Up to 2 GHz (2000 MHz) using Category 8.1 cable and connectors (ISO/IEC 11801-1, Edition 1.0 2017-11)
Class II: Up to 2 GHz (2000 MHz) using Category 8.2 cable and connectors (ISO/IEC 11801-1, Edition 1.0 2017-11)
The standard link impedance is 100 Ω. (The older 1995 version of the standard also permitted 120 Ω and 150 Ω in Classes A−C, but this was removed from the 2002 edition.)
The standard defines several classes of optical fiber interconnect:
OM1*: Multimode, 62.5 μm core; minimum modal bandwidth of 200 MHz·km at 850 nm
OM2*: Multimode, 50 μm core; minimum modal bandwidth of 500 MHz·km at 850 nm
OM3: Multimode, 50 μm core; minimum modal bandwidth of 2000 MHz·km at 850 nm
OM4: Multimode, 50 μm core; minimum modal bandwidth of 4700 MHz·km at 850 nm
OM5: Multimode, 50 μm core; minimum modal bandwidth of 4700 MHz·km at 850 nm and 2470 MHz·km at 953 nm
OS1*: Single-mode, maximum attenuation 1 dB/km at 1310 and 1550 nm
OS1a: Single-mode, maximum attenuation 1 dB/km at 1310, 1383, and 1550 nm
OS2: Single-mode, maximum attenuation 0.4 dB/km at 1310, 1383, and 1550 nm
*Grandfathered
OM5
OM5 fiber is designed for wideband applications using SWDM multiplexing of 4–16 carriers (40G=4λ×10G, 100G=4λ×25G, 400G=4×4λ×25G) in the 850–953 nm range.
Category 7
Class F channel and Category 7 cable are backward compatible with Class D/Category 5e and Class E/Category 6. Class F features even stricter specifications for crosstalk and system noise than Class E. To achieve this, shielding was added for individual wire pairs and the cable as a whole. Unshielded cables rely on the quality of the twists to protect from EMI. This involves a tight twist and carefully controlled design. Cables with individual shielding per pair such as Category 7 rely mostly on the shield and therefore have pairs with longer twists.
The Category 7 cable standard was ratified in 2002, and primarily introduced to support 10 gigabit Ethernet over 100 m of copper cabling. It contains four twisted copper wire pairs, just like the earlier standards, terminated either with GG45 electrical connectors or with TERA connectors rated for transmission frequencies of up to 600 MHz.
However, in 2006, Category 6A was ratified for Ethernet to allow 10 Gbit/s while still using the conventional 8P8C connector. Care is required to avoid signal degradation when mixing cables and connectors not designed for that use, however similar they may appear. Most manufacturers of active equipment and network cards have chosen to support the 8P8C for their 10 gigabit Ethernet products on copper and not the GG45, ARJ45, or TERA. Therefore, the Category 6 specification was revised to Category 6A to permit this use; products therefore require a Class EA channel (i.e., Cat 6A).
Some equipment has been introduced which has connectors supporting the Class F (Category 7) channel.
Note, however, that Category 7 is not recognized by the TIA/EIA.
Category 7A
Class FA (Class F Augmented) channels and Category 7A cables, introduced by ISO 11801 Edition 2 Amendment 2 (2010), are defined at frequencies up to 1000 MHz.
The intent of the Class FA was to possibly support the future 40 gigabit Ethernet: 40GBASE-T. Simulation results have shown that 40 gigabit Ethernet may be possible at 50 meters and 100 gigabit Ethernet at 15 meters. In 2007, researchers at Pennsylvania State University predicted that either 32 nm or 22 nm circuits would allow for 100 gigabit Ethernet at 100 meters.
However, in 2016, the IEEE 802.3bq working group ratified the amendment 3 which defines 25GBASE-T and 40GBASE-T on Category 8 cabling specified to 2000 MHz. The Class FA therefore does not support 40G Ethernet.
There is no equipment that has connectors supporting the Class FA (Category 7A) channel.
Category 7A is not recognized by the TIA/EIA.
Category 8
Category 8 was ratified by the TR43 working group under ANSI/TIA 568-C.2-1. It is defined up to 2000 MHz and only for distances up to 30 m or 36 m, depending on the patch cords used.
ISO/IEC JTC 1/SC 25/WG 3 developed the equivalent standard ISO/IEC 11801-1:2017/COR 1:2018, with two options:
Class I channel (Category 8.1 cable): minimum cable design U/FTP or F/UTP, fully backward compatible and interoperable with Class EA (Category 6A) using 8P8C connectors;
Class II channel (Category 8.2 cable): F/FTP or S/FTP minimum, interoperable with Class FA (Category 7A) using TERA or GG45.
Abbreviations for twisted pairs
Annex E, Acronyms for balanced cables, provides a system to specify the exact construction for both unshielded and shielded balanced twisted pair cables. It uses three letters—U for unshielded, S for braided shielding, and F for foil shielding—to form a two-part abbreviation in the form of xx/xTP, where the first part specifies the type of overall cable shielding, and the second part specifies shielding for individual cable elements.
Common cable types include U/UTP (unshielded cable); U/FTP (individual pair shielding without the overall screen); F/UTP, S/UTP, or SF/UTP (overall screen without individual shielding); and F/FTP, S/FTP, or SF/FTP (overall screen with individual foil shielding).
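Because the scheme is purely combinatorial, a designation can be decoded mechanically. The following minimal Python sketch is illustrative only (the function name, dictionary, and error handling are assumptions of this example, not part of the standard); it splits a designation such as S/FTP into its overall-screen and per-pair shielding components:

```python
# Illustrative decoder for ISO/IEC 11801 Annex E designations of the form xx/xTP.
SHIELD_CODES = {
    "U": "unshielded",
    "F": "foil shielding",
    "S": "braided shielding",
    "SF": "braid and foil shielding",
}

def parse_designation(designation: str):
    """Return (overall cable shielding, per-pair shielding) for an xx/xTP code."""
    overall, sep, element = designation.upper().partition("/")
    if sep != "/" or not element.endswith("TP"):
        raise ValueError(f"not an xx/xTP designation: {designation!r}")
    pair = element[:-2]  # drop the trailing "TP" (twisted pair)
    return SHIELD_CODES[overall], SHIELD_CODES[pair]

# Example: parse_designation("S/FTP") -> ("braided shielding", "foil shielding"),
# i.e. an overall braid screen with individually foil-shielded pairs.
```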
2017 edition
In November 2017, a new edition was released by ISO/IEC JTC 1/SC 25 "Interconnection of information technology equipment" subcommittee. It is a major revision of the standard which has unified several prior standards for commercial, home, and industrial networks, as well as data centers, and defines requirements for generic cabling and distributed building networks.
The new series of standards replaces the former 11801 standard and includes six parts.
Versions
ISO/IEC 11801:1995 (Ed. 1)
ISO/IEC 11801:2000 (Ed. 1.1) – Edition 1, Amendment 1
ISO/IEC 11801:2002 (Ed. 2)
ISO/IEC 11801:2008 (Ed. 2.1) – Edition 2, Amendment 1
ISO/IEC 11801:2010 (Ed. 2.2) – Edition 2, Amendment 2
ISO/IEC 11801-1:2017, -1:2017/Cor 1:2018, -2:2017, -3:2017, -3:2017/Amd 1:2021, -3:2017/Cor 1:2018, -4:2017, -4:2017/Cor 1:2018, -5:2017, -5:2017/Cor 1:2018, -6:2017, -6:2017/Cor 1:2018 (this set is current).
See also
Ethernet over twisted pair
Twisted pair
TIA/EIA-568
ISO/IEC JTC 1/SC 25
References
Further reading
International standard ISO/IEC 11801: Information technology — Generic cabling for customer premises.
European standard EN 50173: Information technology — Generic cabling systems. 1995.
11801
Telecommunications | ISO/IEC 11801 | [
"Technology"
] | 2,078 | [
"Information and communications technology",
"Telecommunications"
] |
4,084,789 | https://en.wikipedia.org/wiki/Meiocyte | A meiocyte is a type of cell that differentiates into a gamete through the process of meiosis. Through meiosis, the diploid meiocyte divides into four genetically different haploid gametes. The control of the meiocyte through the meiotic cell cycle varies between different groups of organisms.
Yeast
The process of meiosis has been extensively studied in model organisms, such as yeast. Because of this, the way in which the meiocyte is controlled through the meiotic cell cycle is best understood in this group of organisms. A yeast meiocyte that is undergoing meiosis must pass through a number of checkpoints in order to complete the cell cycle. If a meiocyte divides and this division results in a mutant cell, the mutant cell will undergo apoptosis and, therefore, will not complete the cycle.
In natural populations of the yeast Saccharomyces cerevisiae, diploid meiocytes produce haploid cells that then mainly undergo either clonal reproduction, or selfing (intratetrad mating) to form progeny diploid meiocytes. When the ancestry of natural S. cerevisiae strains was analyzed, it was determined that formation of diploid meiocytes by outcrossing (as opposed to inbreeding or selfing) occurs only about once every 50,000 cell divisions. These findings suggest that the principal adaptive function of meiocytes may not be related to the production of genetic diversity that occurs infrequently by outcrossing, but rather may be mainly related to recombinational repair of DNA damage (that can occur in meiocytes at each mating cycle).
Animal
The animal meiotic cell cycle is very much like that of yeast. Checkpoints within the animal meiotic cell cycle serve to stop mutant meiocytes from progressing further within the cycle. Like yeast meiocytes, if an animal meiocyte differentiates into a mutant cell, the cell will undergo apoptosis.
Plant
The meiotic cell cycle in plants is very different from that of yeast and animal cells. In plant studies, mutations have been identified that affect meiocyte formation or the process of meiosis. Most meiotic mutant plant cells complete the meiotic cell cycle and produce abnormal microspores. It appears that plant meiocytes do not undergo any checkpoints within the meiotic cell cycle and can, thus, proceed through the cycle regardless of any defect. By studying the abnormal microspores, the progression of the plant meiocyte through the meiotic cell cycle can be investigated further.
Mammalian infertility
Researching meiosis in mammals plays a crucial role in understanding human infertility. Meiosis research within mammal populations is restricted due to the fundamental nature of meiosis. In order to study mammalian meiosis, a culture technique that would allow this process to be observed live under a microscope would need to be identified. By viewing live mammalian meiosis, one can observe the behavior of mutant meiocytes that may compromise fertility in the particular organism. However, because of the size and small number of meiocytes, collecting samples of these cells has been difficult and is currently being researched.
References
Cell cycle | Meiocyte | [
"Biology"
] | 643 | [
"Cell cycle",
"Cellular processes"
] |
4,085,430 | https://en.wikipedia.org/wiki/Dinosaur%20size | Size is an important aspect of dinosaur paleontology, of interest to both the general public and professional scientists. Dinosaurs show some of the most extreme variations in size of any land animal group, ranging from tiny hummingbirds, which can weigh as little as two grams, to the extinct titanosaurs, such as Argentinosaurus and Bruhathkayosaurus which could weigh as much as .
The latest evidence suggests that dinosaurs' average size varied through the Triassic, early Jurassic, late Jurassic and Cretaceous periods, and dinosaurs probably only became widespread during the early or mid Jurassic. Predatory theropod dinosaurs, which occupied most terrestrial carnivore niches during the Mesozoic, most often fall into the category when sorted by estimated weight into categories based on order of magnitude, whereas recent predatory carnivoran mammals peak in the range of . The mode of Mesozoic dinosaur body masses is between one and ten metric tonnes. This contrasts sharply with the size of Cenozoic mammals, estimated by the National Museum of Natural History as about .
Size estimation
Scientists will probably never be certain of the largest and smallest dinosaurs. This is because only a small fraction of animals ever fossilize, and most of these remains will either never be uncovered, or will be unintentionally destroyed as a result of human activity. Of the specimens that are recovered, few are even relatively complete skeletons, and impressions of skin and other soft tissues are rarely discovered. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art (though governed by some established allometric trends), and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork, and never perfect. Mass estimates for dinosaurs are much more variable than length estimates given the lack of soft tissue preservation in the fossilization process. Modern mass estimation is often done with the laser scan skeleton technique that puts a "virtual" skin over the known or implied skeleton, but the limitations inherent in previous mass estimation techniques remain.
Sauropodomorphs
Sauropodomorph size is difficult to estimate given their usually fragmentary state of preservation. Sauropods are often preserved without their tails, so the margin of error in overall length estimates is high. Mass is calculated using the cube of the length, so for species in which the length is particularly uncertain, the weight is even more so. Estimates that are particularly uncertain (due to very fragmentary or lost material) are preceded by a question mark. Each number represents the highest estimate of a given research paper. One large sauropod, Maraapunisaurus fragillimus, was based on particularly scant remains that have been lost since their description by paleontologists in 1878. Analysis of the illustrations included in the original report suggested that M. fragillimus may have been the largest land animal of all time, possibly weighing and measuring between long. One later analysis of the surviving evidence, and the biological plausibility of such a large land animal, suggested that the enormous size of this animal was an over-estimate due partly to typographical errors in the original report. This would later be challenged by a different study, which argued Cope's measurements were genuine and that there was no basis for assuming typographical errors. The study, however, also reclassified the species and correspondingly gave a much lower length estimate of and a mass of . This in itself would later be disputed as being too small for an animal of such size, with some believing it to be even larger at around and weighing around .
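The sensitivity of mass estimates to length can be illustrated with a generic cube-scaling argument (a worked example for orientation only, not a figure taken from the studies cited here):

```latex
M \propto L^{3}
\;\Rightarrow\;
\frac{\Delta M}{M} \approx 3\,\frac{\Delta L}{L};
\qquad
\text{e.g. a } 10\% \text{ error in length gives } 1.1^{3} \approx 1.33,
\text{ i.e. roughly a } 33\% \text{ error in mass.}
```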
Another large but even more controversial sauropod is Bruhathkayosaurus, which had a calculated weight ranging between and a length of . Although the existence of this sauropod had long been dismissed as a potential fake or a misidentification of a petrified tree trunk, recent photographic evidence emerged, confirming its existence. More recent and reliable estimates in 2023 have rescaled Bruhathkayosaurus to weigh around with its most liberal estimate being , making it incredibly massive for such an animal. If the unlikely upper size estimates were taken at face value, Bruhathkayosaurus would not only be the largest dinosaur to have ever lived, but also the largest animal to have lived, exceeding even the largest blue whale recorded. According to Gregory S. Paul, 'super-sauropods' or 'land-whales' such as Maraapunisaurus, Bruhathkayosaurus and the "Broome Titanosaur footprints," as he calls them, should not be surprising, as sauropods were more heat tolerant and grew rapidly, which allowed them to reach truly titanic sizes that rivaled the largest whales in mass despite the prevalence of air sacs. Other potential factors for such extreme sauropod sizes include increasing bone robustness and load-distributing cartilaginous features to better redistribute and support such massive weights.
Generally, the giant sauropods can be divided into two categories: the shorter but stockier and more massive forms (mainly titanosaurs and some brachiosaurids), and the longer but slenderer and more light-weight forms (mainly diplodocids).
Because different methods of estimation sometimes give conflicting results, mass estimates for sauropods can vary widely causing disagreement among scientists over the accurate number. For example, the titanosaur Dreadnoughtus was originally estimated to weigh 59.3 tonnes by the allometric scaling of limb-bone proportions, whereas more recent estimates, based on three-dimensional reconstructions, yield a much smaller figure of 22.1–38.2 tonnes.
The sauropods were the longest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than almost anything else in their habitat, and the largest were an order of magnitude more massive than anything else known to have walked the Earth since. Giant prehistoric mammals such as Paraceratherium and Palaeoloxodon (the largest land mammals ever discovered) were dwarfed by the giant sauropods, and only modern whales approach or surpass them in weight, though they live in the oceans. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.
One of the tallest and heaviest dinosaurs known from good skeletons is Giraffatitan brancai (previously classified as a species of Brachiosaurus). Its remains were discovered in Tanzania between 1907 and 1912. Bones from several similar-sized individuals were incorporated into the skeleton now mounted and on display at the Museum für Naturkunde Berlin; this mount is tall and long, and would have belonged to an animal that weighed between . One of the longest complete dinosaurs is the Diplodocus, which was discovered in Wyoming in the United States and displayed in Pittsburgh's Carnegie Natural History Museum in 1907.
There were larger dinosaurs, but knowledge of them is based entirely on a small number of fragmentary fossils. Most of the largest herbivorous specimens on record were discovered in the 1970s or later, and include the massive titanosaur Argentinosaurus huinculensis, which is the largest dinosaur known from uncontroversial and relatively substantial evidence, estimated to have been and long. Some of the longest sauropods were those with exceptionally long, whip-like tails, such as the Diplodocus hallorum (formerly Seismosaurus) and the 39 m Supersaurus.
In 2014, the fossilized remains of a previously unknown species of sauropod were discovered in Argentina. The titanosaur, named Patagotitan mayorum, was estimated to have been around long, weighing around , larger than any other previously found sauropod. The specimens found were remarkably complete, significantly more so than previous titanosaurs. It has since been suggested that Patagotitan was not necessarily larger than Argentinosaurus and Puertasaurus. In 2019, Patagotitan was estimated to have been long and about .
The largest of the non-sauropod sauropodomorphs was the unnamed long Elliot giant. Another large sauropodomorph was Euskelosaurus. It reached in length and in weight. Yunnanosaurus youngi also reached a length of .
Theropods
Tyrannosaurus was for many decades the largest and best-known theropod to the general public. Since its discovery, however, a number of other giant carnivorous dinosaurs have been described, including Spinosaurus, Carcharodontosaurus, and Giganotosaurus. These large theropod dinosaurs are estimated to rival or even exceed Tyrannosaurus rex in size, though more recent studies and reconstructions show that Tyrannosaurus, although shorter, was the bulkier animal overall. Specimens such as Sue and Scotty are both estimated to be among the most massive theropods known to science. There is still no clear explanation for exactly why these animals grew so bulky and heavy compared to the land predators that came before and after them.
The largest extant theropod is the common ostrich, which stands up to tall and weighs between .
The smallest non-avialan theropod known from adult specimens may be Anchiornis huxleyi, at in weight and in length, although a later study discovered a larger specimen reaching . However, some studies suggest that Anchiornis was actually an avialan. The smallest dinosaur known from adult specimens which is definitely not an avialan is Parvicursor remotus, at and measuring long. However, in 2022 its holotype was recognized as a juvenile individual. Among living dinosaurs, the bee hummingbird (Mellisuga helenae) is the smallest at and long. The smallest theropod overall (including avians) is the currently extant bee hummingbird, at 6.12 cm long and 2.6 g for females, and 5.51 cm long and 3.25 g for males.
In the theropod lineage leading to birds, body size shrank continuously over a period of 50 million years, from an average of down to . This was the only dinosaur lineage to get continuously smaller over such an extended time period, and their skeletons developed adaptations at about four times the average rate for dinosaurs.
See also
Largest prehistoric animals
List of largest birds
Megafauna
Pterosaur size
References
External links
The Biggest Carnivore: Dinosaur History Rewritten
"Dinosaur records", Czech article by Vladimír Socha; DinosaurusBlog.com, August 1, 2016
Animal size
Dinosaur paleobiology | Dinosaur size | [
"Biology"
] | 2,264 | [
"Animal size",
"Organism size"
] |
4,085,603 | https://en.wikipedia.org/wiki/Ulysses%20pact | A Ulysses pact or Ulysses contract is a freely made decision that is designed and intended to bind oneself in the future. The term is used in medicine, especially in reference to advance directives (also known as living wills), where there is some controversy over whether a decision made by a person in one state of health should be considered binding upon that person when they are in a markedly different, usually worse, state of health.
The term refers to the pact that Ulysses (Greek name Ὀδυσσεύς, Odysseus) made with his men as they approached the Sirens. Ulysses wanted to hear the Sirens' song although he knew that doing so would render him incapable of rational thought. He put wax in his men's ears so that they could not hear and had them tie him to the mast so that he could not jump into the sea. He ordered them not to change course under any circumstances and to keep their swords upon him and to attack him if he should break free of his bonds.
Upon hearing the Sirens' song, Ulysses was driven temporarily insane and struggled with all of his might to break free so that he might join the Sirens, which would have meant his death. His men, however, kept their promise, and they refused to release him.
Psychiatric context
Psychiatric advance directives are sometimes referred to as Ulysses pacts or Ulysses contracts, where there is a legal agreement designed to override a present request from a legally incompetent patient in favor of a past request made by that previously competent patient. An example of when Ulysses contracts are invoked is when people with schizophrenia stop taking their medication at perceived remission times.
Technological context
In the wake of the Snowden revelations, digital technology companies and commentators have had to consider the situation of a technology provider being ordered by a government to act in a way that they feel morally opposed to. One example is that Apple, as part of the FBI–Apple encryption dispute, decided to engineer the iPhone in a way that made it impossible for them to read the data on it, which has been described as "a digital Ulysses pact". A related example is that of a warrant canary, which Cory Doctorow describes as being a Ulysses pact (albeit a "weak" one, since the issuer of the canary can fail or be forced not to kill the canary), as is binary transparency (applying the idea of certificate transparency to binary executable files), which he describes as a "much stronger, more effective Ulysses pact", since a public append-only log is harder to censor.
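The "pact" character of a public append-only log can be illustrated with a minimal hash chain. This is a generic sketch under simplifying assumptions: real transparency systems such as Certificate Transparency use Merkle trees and signed tree heads rather than this toy construction, and the class and method names here are invented for illustration.

```python
import hashlib

class AppendOnlyLog:
    """Toy hash-chained log: each entry commits to the whole history before it."""

    def __init__(self):
        self.entries = []           # list of (data, chained hash) pairs
        self._head = b"\x00" * 32   # hash value representing the empty log

    def append(self, data: bytes) -> str:
        # The new head hash depends on every entry appended so far.
        self._head = hashlib.sha256(self._head + data).digest()
        self.entries.append((data, self._head.hex()))
        return self._head.hex()     # this value is what observers record

    def verify(self) -> bool:
        # Recompute the chain; silently deleting or editing any earlier entry
        # changes every hash after it, so previously published hashes no longer match.
        head = b"\x00" * 32
        for data, recorded in self.entries:
            head = hashlib.sha256(head + data).digest()
            if head.hex() != recorded:
                return False
        return True
```

Once a head hash has been published, the log's operator is bound by it much as Ulysses was bound to the mast: history can be extended, but not quietly rewritten.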
Policy context
Ulysses clauses in public policy are provisions that discourage or prevent future changes. For example, in the state of California, initiatives generally include a Ulysses clause to prevent amendments.
See also
Advance health care directive
Decision theory
Greek mythology
Escalation of commitment
References
Bibliography
Decision-making
Medical ethics
Mental health law
Behavioral economics | Ulysses pact | [
"Biology"
] | 583 | [
"Behavior",
"Behavioral economics",
"Behaviorism"
] |
4,085,623 | https://en.wikipedia.org/wiki/Krypton-85 | Krypton-85 (85Kr) is a radioisotope of krypton.
Krypton-85 has a half-life of 10.756 years and a maximum decay energy of 687 keV. It decays into stable rubidium-85. Its most common decay (99.57%) is by beta particle emission with a maximum energy of 687 keV and an average energy of 251 keV. The second most common decay (0.43%) is by beta particle emission (maximum energy of 173 keV) followed by gamma ray emission (energy of 514 keV). Other decay modes have very small probabilities and emit less energetic gamma rays. Krypton-85 is mostly synthetic, though it is produced naturally in trace quantities by cosmic ray spallation.
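The remaining activity at any later time follows from the half-life by ordinary exponential decay (a standard worked example, not a figure from a specific reference):

```latex
A(t) = A_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}},
\qquad
\lambda = \frac{\ln 2}{T_{1/2}} = \frac{0.693}{10.756\ \text{yr}} \approx 0.064\ \text{yr}^{-1};
\qquad
\text{after } 50\ \text{yr},\ \ A/A_0 = \left(\tfrac{1}{2}\right)^{50/10.756} \approx 0.04.
```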
In terms of radiotoxicity, 440 Bq of 85Kr is equivalent to 1 Bq of radon-222, without considering the rest of the radon decay chain.
Presence in Earth's atmosphere
Natural production
Krypton-85 is produced in small quantities by the interaction of cosmic rays with stable krypton-84 in the atmosphere. Natural sources maintain an equilibrium inventory of about 0.09 PBq in the atmosphere.
Anthropogenic production
As of 2009, the total amount in the atmosphere is estimated at 5500 PBq due to anthropogenic sources. At the end of the year 2000, it was estimated to be 4800 PBq, and in 1973, an estimated 1961 PBq (53 megacuries). The most important of these human sources is nuclear fuel reprocessing, as krypton-85 is one of the seven common medium-lived fission products. Nuclear fission produces about three atoms of krypton-85 for every 1000 fissions (i.e., it has a fission yield of 0.3%). Most or all of this krypton-85 is retained in the spent nuclear fuel rods; spent fuel on discharge from a reactor contains between 0.13–1.8 PBq/Mg of krypton-85. Some of this spent fuel is reprocessed. Current nuclear reprocessing releases the gaseous 85Kr into the atmosphere when the spent fuel is dissolved. It would be possible in principle to capture and store this krypton gas as nuclear waste or for use. The cumulative global amount of krypton-85 released from reprocessing activity has been estimated as 10,600 PBq as of 2000. The global inventory noted above is smaller than this amount due to radioactive decay; a smaller fraction is dissolved into the deep oceans.
Other man-made sources are small contributors to the total. Atmospheric nuclear weapons tests released an estimated 111–185 PBq. The 1979 accident at the Three Mile Island nuclear power plant released about . The Chernobyl accident released about 35 PBq, and the Fukushima Daiichi accident released an estimated 44–84 PBq.
The average atmospheric concentration of krypton-85 was approximately 0.6 Bq/m3 in 1976, and has increased to approximately 1.3 Bq/m3 as of 2005. These are approximate global average values; concentrations are higher locally around nuclear reprocessing facilities, and are generally higher in the northern hemisphere than in the southern hemisphere.
For wide-area atmospheric monitoring, krypton-85 is the best indicator for clandestine plutonium separations.
Krypton-85 releases increase the electrical conductivity of atmospheric air. Meteorological effects are expected to be stronger closer to the source of the emissions.
Uses in industry
Krypton-85 is used in arc discharge lamps commonly used in the entertainment industry for large HMI film lights as well as high-intensity discharge lamps. The presence of krypton-85 in discharge tube of the lamps can make the lamps easy to ignite. Early experimental krypton-85 lighting developments included a railroad signal light designed in 1957 and an illuminated highway sign erected in Arizona in 1969. A 60 μCi (2.22 MBq) capsule of krypton-85 was used by the random number server HotBits (an allusion to the radioactive element being a quantum mechanical source of entropy), but was replaced with a 5 μCi (185 kBq) Cs-137 source in 1998.
Krypton-85 is also used to inspect aircraft components for small defects. Krypton-85 is allowed to penetrate small cracks, and then its presence is detected by autoradiography. The method is called "krypton gas penetrant imaging". The gas penetrates smaller openings than the liquids used in dye penetrant inspection and fluorescent penetrant inspection.
Krypton-85 was used in cold-cathode voltage regulator electron tubes, such as the type 5651.
Krypton-85 is also used in industrial process control, mainly for thickness and density measurements, as an alternative to Sr-90 or Cs-137.
Krypton-85 is also used as a charge neutralizer in aerosol sampling systems.
References
Fission products
Krypton-085 | Krypton-85 | [
"Chemistry"
] | 1,057 | [
"Nuclear fission",
"Isotopes of krypton",
"Isotopes",
"Fission products",
"Nuclear fallout"
] |
4,085,663 | https://en.wikipedia.org/wiki/Katanin | Katanin is a microtubule-severing AAA protein. It is named after the Japanese sword called a katana. Katanin is a heterodimeric protein first discovered in sea urchins. It contains a 60 kDa ATPase subunit, encoded by KATNA1, which functions to sever microtubules. This subunit requires ATP and the presence of microtubules for activation. The second, 80 kDa subunit, encoded by KATNB1, regulates the activity of the ATPase and localizes the protein to centrosomes. Electron microscopy shows that katanin forms 14–16 nm rings in its active oligomerized state on the walls of microtubules (although not around the microtubule).
Mechanism and regulation of microtubule length
Structural analysis using electron microscopy has revealed that microtubule protofilaments change from a straight to a curved conformation upon GTP hydrolysis of β-tubulin. However, when these protofilaments are part of a polymerized microtubule, the stabilizing interactions created by the surrounding lattice lock subunits into a straight conformation, even after GTP hydrolysis. In order to disrupt these stable interactions, katanin, once bound to ATP, oligomerizes into a ring structure on the microtubule wall - in some cases oligomerization increases the affinity of katanin for microtubules and stimulates its ATPase activity. Once this structure is formed, katanin hydrolyzes ATP, and likely undergoes a conformational change that puts mechanical strain on the tubulin subunits, which destabilizes their interactions within the microtubule lattice. The predicted conformational change also likely decreases the affinity of katanin for tubulin as well as for other katanin proteins, which leads to disassembly of the katanin ring structure, and recycling of the individual inactivated proteins.
The severing of microtubules by katanin is regulated by protective microtubule-associated proteins (MAPs), and the p80 subunit (p60 severs microtubules much better in the presence of p80). These mechanisms have different consequences, depending on where in the cell they are activated or disrupted. For example, allowing katanin-mediated severing at the centrosome releases microtubules for free movement. In one experiment, anti-katanin antibodies were injected into a cell, causing a large accumulation of microtubules around the centrosome and inhibition of microtubule outgrowth. Therefore, katanin-mediated severing may serve to maintain organization in the cytoplasm by promoting microtubule disassembly and efficient movement. During cell division, severing at the spindle pole produces free microtubule ends and allows poleward flux of tubulin and retraction of the microtubule. Severing microtubules in the cytoplasm facilitates treadmilling and mobility, which is important during development.
Role in cell division
Katanin-mediated microtubule severing is an important step in mitosis and meiosis. It has been shown that katanin is responsible for severing microtubules during M-phase in Xenopus laevis. The disassembly of microtubules from their interphase structures is necessary to prepare the cell and the mitotic spindle for cell division. This regulation is indirect: MAP proteins, which protect the microtubules from being severed during interphase, dissociate and allow katanin to act. In addition, katanin is responsible for severing microtubules at the mitotic spindles when disassembly is required to segregate sister chromatids during anaphase.
Similar results have been obtained in relation to katanin's activity during meiosis in C. elegans. It was reported that Mei-1 and Mei-2 encode proteins similar to the p60 and p80 subunits of katanin. Using antibodies, these two proteins were found to localize at the ends of microtubules in the meiotic spindle, and, when expressed in HeLa cells, these proteins initiated microtubule severing. These findings indicate that katanin serves a similar purpose in both mitosis and meiosis in segregating chromatids toward the spindle poles.
Role in development
Katanin is important in the development of many organisms. Both elimination and overexpression of katanin is deleterious to axonal growth, and, thus, katanin must be carefully regulated for proper neural development. In particular, severing microtubules in specific cellular spaces allows fragments to test various routes of growth. Katanin has proved necessary in this task. An experiment using time-lapse digital imaging of fluorescently labeled tubulin demonstrated that axon growth cones pause, and microtubules fragment, at sites of branching during neural development.
A similar experiment using fluorescently labeled tubulin observed local microtubule fragmentation in newt lung cell lamellipodia during developmental migration, in which the fragments run perpendicular to the advancing cell membrane to aid exploration. The local nature of both fragmentation events likely indicates regulation by katanin because it can be concentrated in specific cellular regions. This is supported by a study that demonstrated that the Fra2 mutation, which affects a katanin orthologue in Arabidopsis thaliana, leads to an aberrant disposition of cellulose microfibrils along the developing cell wall in these plants. This mutation produced a phenotype with reduced cell elongation, which suggests katanin's significance in development across a wide range of organisms.
Function in neurons
Katanin is known to be abundant in the nervous system and even modest levels of it can cause significant microtubule depletion. But microtubules need to be severed throughout other compartments of the neuron so that sufficient numbers of microtubules can undergo rapid transport.
In the nervous system, the ratio of the two subunits is dramatically different from that in other organs of the body, so it is important to be able to regulate the ratio to control microtubule severing. The monomer p80 is found in all compartments of the neuron, which means its function cannot be solely to target katanin. The p80 katanin has multiple domains with different functions. One domain targets the centrosome, another augments microtubule severing by the p60 katanin, and the last suppresses microtubule severing. The abundance of katanin in neurons shows that it can move along the axon. There is breakage of microtubules at axonal branch points and in the growth cones of neurons. The distribution of katanin in the neuron helps explain how microtubule length and number are regulated, as well as how microtubules are released from the centrosome.
Katanin is believed to be regulated by the phosphorylation of other proteins. Bending enhances the access of katanin to the lattice, facilitating severing.
Function in plants
Katanin is also found to have similar functions in higher plants. The form and structure of a plant cell is determined by the rigid cell wall, which contains highly organized cellulose, the orientation of which is affected by microtubules that serve to guide the deposition of forming fibers. The orientation of the cellulose microfibrils within the cell wall is determined by the microtubules, which are aligned perpendicular to the major axis of cell expansion. Because plant cells lack traditional centrosomes, katanin accumulates at the nuclear envelope during pre-prophase and prophase, where the spindle microtubules are forming.
During cell elongation, microtubules must adjust their orientation constantly to keep up with the increasing cell length. This constant change in microtubule organization was proposed to be performed by the rapid disassembly, assembly, and translocation of microtubules. Recently, mutations in the plant katanin homologue have been shown to alter transitions in microtubule organization, which, in turn, cause impairments in the proper deposition of cellulose and hemicellulose. This is presumed to be caused by the plant cell's lack of ability to regulate microtubule lengths.
In plants there is no homologue of the p80 katanin regulatory subunit. Therefore, a His-tagged At-p60 was made to characterize its functions in plants. The His-At-p60 can sever microtubules in vitro in the presence of ATP. It directly interacts with microtubules in co-sedimentation assays. The ATPase activity was stimulated in a non-hyperbolic way: ATP hydrolysis is stimulated at a low tubulin/At-p60 ratio and inhibited at higher ratios. The low ratios favor the katanin subunit interactions, whereas the high ratios impair them. The At-p60 can oligomerize like its animal counterparts. The At-p60 interacts directly with microtubules, whereas animal p60 binds via its N-terminus. The N-terminal part of p60 is not well conserved between the plant and animal kingdoms.
See also
Microtubule-severing ATPase
References
External links
Hartman, Jim. "Katanin, an AAA ATPase that Takes Apart Stable Microtubules." 2004.
McNally Lab research. "katanin" 2006
Proteins | Katanin | [
"Chemistry"
] | 1,936 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
4,085,687 | https://en.wikipedia.org/wiki/Thermoplastic%20elastomer | Thermoplastic elastomers (TPE), sometimes referred to as thermoplastic rubbers (TPR), are a class of copolymers or a physical mix of polymers (usually a plastic and a rubber) that consist of materials with both thermoplastic and elastomeric properties.
While most elastomers are thermosets, thermoplastic elastomers are not, which makes them relatively easy to use in manufacturing, for example by injection moulding. Thermoplastic elastomers show advantages typical of both rubbery materials and plastic materials. The benefit of using thermoplastic elastomers is their ability to stretch to moderate elongations and return to a shape close to the original, giving a longer life and a better physical range than other materials.
The principal difference between thermoset elastomers and thermoplastic elastomers is the type of cross-linking bond in their structures. In fact, crosslinking is a critical structural factor which imparts high elastic properties.
Types
There are six generic classes of commercial TPEs (designations according to ISO 18064) together with one unclassified category:
Styrenic block copolymers, TPS (TPE-s)
Thermoplastic polyolefin elastomers, TPO (TPE-o)
Thermoplastic vulcanizates, TPV (TPE-v or TPV)
Thermoplastic polyurethanes, TPU (TPU)
Thermoplastic copolyester, TPC (TPE-E)
Thermoplastic polyamides, TPA (TPE-A)
Unclassified thermoplastic elastomers, TPZ
Examples
TPE materials that come from the block copolymers group include CAWITON†, MELIFLEX, THERMOLAST K†, THERMOLAST M†, Chemiton, Arnitel, Hytrel, Dryflex†, Mediprene, Kraton, Pibiflex, Sofprene†, Tuftec† and Laprene†.
† indicates styrenic block copolymers (TPE-s).
Laripur, Desmopan, Estane, Texin and Elastollan are examples of thermoplastic polyurethanes (TPU).
Sarlink, Santoprene, Termoton, Solprene, THERMOLAST V, Vegaprene, and Forprene are examples of TPV materials.
Examples of thermoplastic olefin elastomer (TPO) compounds are For-Tec E and Engage. Ninjaflex is used for 3D printing.
Criteria for thermoplastic elastomers
In order to qualify as a thermoplastic elastomer, a material must have these three essential characteristics:
The ability to be stretched to moderate elongations and, upon the removal of stress, return to something close to its original shape
Processable as a melt at elevated temperature
Absence of significant creep
History
TPE became a commercial reality when thermoplastic polyurethane polymers became available in the 1950s. During the 1960s styrene block copolymer became available, and in the 1970s a wide range of TPEs came on the scene. The worldwide usage of TPEs (680,000 tons/year in 1990) is growing at about nine percent per year.
Microstructure
The styrene-butadiene materials possess a two-phase microstructure due to incompatibility between the polystyrene and polybutadiene blocks, the former separating into spheres or rods depending on the exact composition. With low polystyrene content, the material is elastomeric with the properties of the polybutadiene predominating. Generally they offer a much wider range of properties than conventional cross-linked rubbers because the composition can vary to suit final construction goals.
Block copolymers can "microphase separate" to form periodic nanostructures, as in the styrene-butadiene-styrene (SBS) block copolymer (shown at right). The polymer is known as Kraton and is used for shoe soles and adhesives. Owing to the microfine structure, a transmission electron microscope (TEM) was needed to examine the structure. The butadiene matrix was stained with osmium tetroxide to provide contrast in the image.
The material was made by living polymerization so that the blocks are almost monodisperse, so helping to create a very regular microstructure. The molecular weight of the polystyrene blocks in the main picture is 102,000; the inset picture has a molecular weight of 91,000, producing slightly smaller domains. The spacing between domains has been confirmed by small-angle X-ray scattering, a technique which gives information about microstructure.
Since most polymers are incompatible with one another, forming a block polymer will usually result in phase separation, and the principle has been widely exploited since the introduction of the SBS block polymers, especially where one of the blocks is highly crystalline. One exception to the rule of incompatibility is the material Noryl, in which polystyrene and polyphenylene oxide (PPO) form a continuous blend with one another.
Other TPEs have crystalline domains where one kind of block co-crystallizes with other block in adjacent chains, such as in copolyester rubbers, achieving the same effect as in the SBS block polymers. Depending on the block length, the domains are generally more stable than the latter owing to the higher crystal melting point. That point determines the processing temperatures needed to shape the material, as well as the ultimate service use temperatures of the product. Such materials include Hytrel, a polyester-polyether copolymer and Pebax, a nylon or polyamide-polyether copolymer.
Advantages
Depending on the environment, TPEs have outstanding thermal properties and material stability when exposed to a broad range of temperatures and non-polar materials. TPEs consume less energy to produce, can be colored easily by most dyes, and allow economical quality control. TPE requires little or no compounding, with no need to add reinforcing agents, stabilizers or cure systems. Hence, batch-to-batch variations in weighting and metering components are absent, leading to improved consistency in both raw materials and fabricated articles. TPE materials have the potential to be recyclable since they can be molded, extruded and reused like plastics, but they have typical elastic properties of rubbers which are not recyclable owing to their thermosetting characteristics. They can also be ground up and turned into 3D printing filament with a recyclebot.
Processing
The two most important manufacturing methods with TPEs are extrusion and injection molding. TPEs can now be 3D printed and have been shown to be economically advantageous to make products using distributed manufacturing. Compression molding is seldom, if ever, used. Fabrication via injection molding is extremely rapid and highly economical. Both the equipment and methods normally used for the extrusion or injection molding of a conventional thermoplastic are generally suitable for TPEs. TPEs can also be processed by blow molding, melt calendaring, thermoforming, and heat welding.
Applications
TPEs are used where conventional elastomers cannot provide the range of physical properties needed in the product. These materials find large application in the automotive sector and in household appliances sector. For instance, copolyester TPEs are used in snowmobile tracks where stiffness and abrasion resistance are at a premium. Thermoplastic olefins (TPO) are increasingly used as a roofing material. TPEs are also widely used for catheters where nylon block copolymers offer a range of softness ideal for patients. Thermoplastic silicone and olefin blends are used for extrusion of glass run and dynamic weatherstripping car profiles. Styrene block copolymers are used in shoe soles for their ease of processing, and widely as adhesives.
Owing to their unrivaled abilities in two-component injection molding to various thermoplastic substrates, engineered TPS materials also cover a broad range of technical applications ranging from automotive market to consumer and medical products. Examples of those are soft grip surfaces, design elements, back-lit switches and surfaces, as well as sealings, gaskets, or damping elements. TPE is commonly used to make suspension bushings for automotive performance applications because of its greater resistance to deformation when compared to regular rubber bushings. Thermoplastics have experienced growth in the heating, ventilation, and air conditioning (HVAC) industry due to the function, cost effectiveness and adaptability to modify plastic resins into a variety of covers, fans and housings.
References
Further reading
PR Lewis and C Price, Polymer, 13, 20 (1972)
Modern Plastic Mid-October Encyclopedia Issue, Introduction to TPEs, pages 109–110
Latest Material and Technological Developments for Activewear (Joanne Yip, 2020, pages 66–67)
Biomaterials
Polymers | Thermoplastic elastomer | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 1,941 | [
"Biomaterials",
"Materials",
"Polymer chemistry",
"Polymers",
"Matter",
"Medical technology"
] |
4,085,963 | https://en.wikipedia.org/wiki/Coade%20stone | Coade stone or Lithodipyra or Lithodipra () is stoneware that was often described as an artificial stone in the late 18th and early 19th centuries. It was used for moulding neoclassical statues, architectural decorations and garden ornaments of the highest quality that remain virtually weatherproof today.
Coade stone features were produced by appointment to George III and the Prince Regent for St George's Chapel, Windsor; The Royal Pavilion, Brighton; Carlton House, London; the Royal Naval College, Greenwich; and refurbishment of Buckingham Palace in the 1820s.
Coade stone was prized by the most important architects, such as John Nash (Buckingham Palace), Sir John Soane (Bank of England), Robert Adam (Kenwood House), and James Wyatt (Radcliffe Observatory).
The product (originally known as Lithodipyra) was created around 1770 by Eleanor Coade, who ran Coade's Artificial Stone Manufactory, Coade and Sealy, and Coade in Lambeth, London, from 1769 until her death in 1821. It continued to be manufactured by her last business partner, William Croggon, until 1833.
History
In 1769, Mrs Coade bought Daniel Pincot's struggling artificial stone business at Kings Arms Stairs, Narrow Wall, Lambeth, a site now under the Royal Festival Hall. This business developed into Coade's Artificial Stone Manufactory with Coade in charge, such that within two years (1771) she fired Pincot for "representing himself as the chief proprietor".
Coade did not invent artificial stone. Various lesser-quality ceramic precursors to Lithodipyra had been both patented and manufactured over the forty (or sixty) years prior to the introduction of her product. She was, however, probably responsible for perfecting both the clay recipe and the firing process. It is possible that Pincot's business was a continuation of that run nearby by Richard Holt, who had taken out two patents in 1722 for a kind of liquid metal or stone and another for making china without the use of clay, but there were many start-up artificial stone businesses in the early 18th century of which only Coade's succeeded.
The company did well and boasted an illustrious list of customers such as George III and members of the English nobility. In 1799, Coade appointed her cousin John Sealy (son of her mother's sister, Mary), already working as a modeller, as a partner in her business. The business then traded as Coade and Sealy until his death in 1813, when it reverted to Coade.
In 1799, she opened a showroom, Coade and Sealy's Gallery of Sculpture, on Pedlar's Acre at the Surrey end of Westminster Bridge Road, to display her products.
In 1813, Coade took on William Croggan from Grampound in Cornwall, a sculptor and distant relative by marriage (second cousin once removed). He managed the factory until her death eight years later in 1821 whereupon he bought the factory from the executors for c. £4000. Croggan supplied a lot of Coade stone for Buckingham Palace; however, he went bankrupt in 1833 and died two years later. Trade declined, and production came to an end in the early 1840s.
Material
Description
Coade stone is a type of stoneware. Mrs Coade's own name for her products was Lithodipyra, a name constructed from ancient Greek words meaning 'stone-twice-fire' (), or 'twice-fired stone'. Its colours varied from light grey to light yellow (or even beige) and its surface is best described as having a matte finish.
The ease with which the product could be moulded into complex shapes made it ideal for large statues, sculptures and sculptural façades. One-off commissions were expensive to produce, as they had to carry the entire cost of creating a mould. Whenever possible moulds were kept for many years of repeated use.
Formula
The recipe for Coade stone is claimed to be used today by Coade Ltd.
Its manufacture required extremely careful control and skill in kiln firing over a period of days, difficult to achieve with its era's fuels and technology. Coade's factory was the only really successful manufacturer.
The formula used was:
10% grog
5–10% crushed flint
5–10% fine quartz
10% crushed soda lime glass
60–70% ball clay from Dorset and Devon
This mixture was also referred to as "fortified clay", which was kneaded before insertion into a kiln for firing over four days – a production technique very similar to brick manufacture.
Depending on the size and fineness of detail in the work, a different size and proportion of Coade grog was used. In many pieces a combination of grogs was used, with fine grogged clay applied to the surface for detail, backed up by a more heavily grogged mixture for strength.
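As a quick arithmetic check on the proportions quoted above (a sum over the midpoints of the published ranges, offered only as an illustration, not a figure from the source):

```latex
10\% + 7.5\% + 7.5\% + 10\% + 65\% = 100\%,
```

so the midpoints of the stated ranges account for essentially the whole mixture.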
Durability
One of the more striking features of Coade stone is its high resistance to weathering, with the material often faring better than most types of natural stone in London's harsh environment. Prominent examples listed below have survived without apparent wear and tear for 150 years. There were, however, notable exceptions. A few works produced by Coade, mainly dating from the later period, have shown poor resistance to weathering due to a bad firing in the kiln where the material was not brought up to a sufficient temperature.
Demise
Coade stone was only superseded after Mrs Coade's death in 1821, by products using naturally exothermic Portland cement as a binder. It appears to have been largely phased out by the 1840s.
Examples
Over 650 pieces are still in existence worldwide.
Apsley House, No. 1, London. Duke of Wellington's house. The 1819 renovations by architect Benjamin Dean Wyatt included Scagliola ornamentation (that resembles marble inlays) in Coade stone. ()
Athenry Abbey, Ireland. The last de Bermingham to be buried at Athenry was Lady Mathilda Bermingham (d. 1788). The tower collapsed around 1790. Lady Mathilda's tomb, a Coade stone monument, was broken into in 2002. ()
Banff, Aberdeenshire, Scotland. Duff House Mausoleum, Wrack Woods. James Duff, 2nd Earl Fife built the mausoleum for his family in 1791, possibly on the site of a Carmelite friary. Built before the Gothic Revival, this is an example of "Gothick" architecture. Typically Georgian – the carvings, including the monument to the first Earl, are in Coade stone. ()
Bargate, a Grade I listed medieval gatehouse in the city centre of Southampton. In 1809 a Coade stone statue of George III in Roman dress was added in the middle of the four windows of the southern side. It was a gift to the town from John Petty, 2nd Marquess of Lansdowne. ()
Bath, 8 Argyll Street – The Royal Arms of Queen Charlotte are above the entrance to A.H.Hale, (Pharmacy) established 1826.()
Battersea, St Mary's Church The church includes several important monuments from the earlier church. John Camden, (d. 1780), and his eldest daughter Elizabeth Neild, (d. 1791). 'Girl by a funeral urn with a poetic eulogy'. Signed by Coade of Lambeth (1792).()
Becconsall Old Church, Hesketh Bank, Lancashire. The baptismal font, dating from the 18th century, is the form of a vase, and is made from Coade stone.()
Birmingham Botanical Gardens, England. A Coade stone fountain lies west of the bandstand, which was presented in 1850 and was designed by the Birmingham architect, Charles Edge.()
Birmingham Library, displayed in the Library are two large Coade stone medallions, made in the 1770s and removed from the front of the city's Theatre Royal when it was demolished in 1956. These depict David Garrick and William Shakespeare.()
Brighton, Royal Pavilion of King George IV.()
Brighton and Hove Cemetery. Anna Maria Crouch, actress, singer and mistress of George IV, has an elaborate, Grade II-listed, Coade stone table tomb with a carved memorial tablet, friezes with foliage patterns and Vitruvian scrolls, putti and a Classical-style urn.()
Brighton, Stanmer Park, Sussex. Frankland Monument. A Coade stone statue of 1775 by Richard Hayward, erected to commemorate Frederick Meinhardt Frankland (c. 1694–1768), barrister-at-law, MP for Thirsk, son of Sir Thomas Frankland, 2nd Baronet). Listed at Grade II by English Heritage (NHLE Code 1380952). It was erected at the expense of Thomas Pelham, 1st Earl of Chichester, who owned Stanmer House and the estate, and his wife Ann, who was Frankland's daughter. The plinth has three stone tortoises and a Latin inscription. The triangular column above has concave sides with oval panels and a cornice with a frieze and some egg-and-dart moulding, all topped by an urn. The monument stands on top of a hill in Stanmer Park.()
Brogyntyn, near Oswestry, Shropshire. Benjamin Gummow designed a portico and other alterations for the Ormsby Gores, 1814–15. He used Coade stone ornamentation on the interior of the portico()
Broomhall House, Dunfermline, Scotland. A 1796 redesign by Thomas Harrison included a semi-circular bay on the south front decorated with three Coade stone panels depicting reclining figures.
Buckingham Palace London, (in a section not open to the public). A frieze with vegetative scrollwork of Coade stone, balconies accessible from the first floor, and an attic with figural sculptures based on the Elgin Marbles. The west front overlooking the main garden features large Classical urns made of Coade stone. ()
Burnham Thorpe – Nelson's Memorial.()
Burton Constable Hall in the East Riding of Yorkshire, displays 3 figures and a number of 'medallions' above the doors and windows of the Orangerie. In 1966 this was designated as Grade II*. ()
Capesthorne Hall, Cheshire. The Drawing Room features twin fireplaces made from Coade stone, dated to 1789, which originally belonged to the family's house in Belgravia, London. Both are carved, one depicting Faith, Hope and Charity, and the other the Aldobrandini Marriage.()
Carlton House, London.()
Castle Howard, North Yorkshire, ()
Charborough House, Dorset. The park wall, alongside the A31 is punctuated by Stag Gate at the northern extremity and Lion Lodge at the easternmost entrance, with heraldic symbols in Coade stone. These gateways are Grade II listed, as is a third one, East Almer Lodge, further to the west. A fourth gateway, Peacock Lodge, is inside the estate, is Grade II* listed.()
Chelmsford Cathedral, Essex. The nave partially collapsed in 1800, and was rebuilt by the County architect John Johnson, retaining the Perpendicular design, but using Coade stone piers and tracery, and a plaster ceiling.()
Chichester – The Buttermarket. Designed by John Nash (coat of arms engraved with "Coade & Sealey 1808")()
Chiswick High Road, London, Presbytery of brown brick with Coade stone details, three storeys with double-hung sash windows; Grade II listed.()
Chiswick House, London. A couple of large ornate urns in the Italian Garden.()
Clerkenwell, St James's Church Over the west door are the royal arms of George III. Made of Coade stone and dated 1792, they were formerly over the reredos.()
Cottesbrooke, Northamptonshire. 'All Saints Church' contains a free-standing monument to Sir William Langham, (d.1812) in the nave, moulded in Coade stone by Bacon Junior.()
Croome Court, Upton-upon-Severn in Worcestershire. The south face has a broad staircase, with Coade stone sphinxes on each side, leading to a south door topped with a cornice on consoles. ()
Culzean Castle, overlooking the Firth of Clyde, near Maybole, Scotland. The former home of the Marquess of Ailsa. "Cat Gates" – The original inner entrance with Coade stone cats (restored in 1995) surmounting the pillars. The lodge cottages were demolished in the 1950s.(), (See Gallery "Cat gates at Culzean Castle")
Daylesford House, Gloucestershire. The main front was originally to the west, at the centre of which is a projecting semicircular bay, with four Ionic pillars and French Neoclassical garland swags around the architrave, topped by a shallow dome with pointed Coade stone finial, and wings projecting to either side. ()
Doddington Hall, Cheshire, The country house was designed by Samuel Wyatt. An outer double staircase leads up to a doorway flanked by columns and under a blind arch containing a Coade stone medallion containing a sign of the Zodiac. There are similar medallions over the first floor windows in the outer bays.()
Edinburgh, Stockbridge The "Statue of Hygieia" in the St Bernard's Well building by the Water of Leith "is made of coade stone".(). (See additional image in Coade stone Gallery below.)
Edinburgh, Bonaly Tower. Statue of William Shakespeare in Coade stone. ()
Egyptian House, Penzance, Cornwall. There is some dispute over the architect and the date of construction. It was acquired by the Landmark Trust in 1973; the elaborate mouldings are mainly Coade stone.()
Exeter, 'Palace Gate' – Coade stone doorways on the terrace in 'Palace Gate' between the cathedral and South Street. Several late 18th century houses near Exeter Cathedral have doorway surrounds decorated with a keystone face (chosen from a small range of moulds), and decorative blocks.()
Fenstanton, Cambridgeshire, Church of St Peter and St Paul, Memorial to Frances Brown, daughter in law of Lancelot "Capability" Brown in Coade stone. (). (See adjacent image on right)
Great Yarmouth, Britannia Monument Coade stone caryatids replaced by concrete copies.()
Greenwich, Royal Naval College – Admiral Lord Nelson's Pediment in the King William Courtyard of the Old Royal Naval College was regarded by the Coade workers as the finest of all their work. It was sculpted by Joseph Panzetta in 1813, as a public memorial after his death at the Battle of Trafalgar in 1805. It was based on a painting by Benjamin West depicting Nelson's body being offered to Britannia by a Winged Victory. It was cleaned in 2016. (), (See Nelson Pediment at Top of this article)
Grey Coat Hospital Westminster. The 1707 Acts of Union with Scotland arms of Queen Anne, with her 1702 motto semper eadem ("always the same"), executed in Coade stone. ()
Haberdashers' Hatcham College, Telegraph Hill, Lewisham. A Coade stone statue of Robert Aske stands in the forecourt of the college, formerly Haberdashers' Aske's Hatcham Boys' School, in Pepys Road. It dates from 1836 and shows him in the robes of the Haberdashers' Company, leaning on a plinth and holding in his hand the plans of the school built at that time in Hoxton, whence the statue was transferred in 1903.()
Ham House Richmond, on the River Thames near London, has a reclining statue of Father Thames, by John Bacon in the entrance courtyard.
Haldon Belvedere, Devon. Inside is a larger-than-life-size Coade stone statue of General Stringer Lawrence dressed as a Roman general; a copy of the marble statue of him by Peter Scheemakers (1691–1781).
Hammerwood Park, East Grinstead. Coade stone plaques of scenes derived from the Borghese Vase adorn both porticos.()
Harlow, Essex, The Gibberd Garden Coade stone urns originally from Coutts Bank, The Strand, now in the garden created by Sir Frederick Gibberd who died in 1984.()
Heaton Hall, A country house that was remodelled between 1772 and 1789 by James Wyatt. Further additions were made in 1823 by Lewis Wyatt. It is built in sandstone with dressings in Coade stone and is in Palladian style. ()
Herstmonceux Place East Sussex. Circa 1932 it ceased to be a private house and was divided into flats. The north front of the house was built in the late 17th century. The south and east fronts were designed by Samuel Wyatt in 1778. The white panels are made of Coade Stone. (), (See "Herstmonceux Place" in Gallery below)
Highclere Castle, Hampshire. 'London Lodge' (1793), Brick but Coade stone dressed, and wings (1840).(), (See "Highclere Castle, London Lodge" in Gallery below)
Horniman Museum, Forest Hill, London. The facade of the Pelican and British Empire Life Insurance Company at 70 Lombard Street in the City of London was rescued before demolition in 1915 and is now displayed in the museum. To adorn its building, Pelican added an allegorical sculptural group to the previously plain facade; the group was designed by Lady Diana Beauclerk and sculpted by John de Veere of the Coade factory. ()
Ifield, West Sussex - St Margaret's Church. There are several large 18th-century tombs in the churchyard, some of which are good examples of Coade stone. The George Hutchinson wall memorial in the chancel, designed by local sculptor Richard Joanes, includes Coade stone embellishments. ()
Imperial War Museum, London. Sculptural reliefs above the entrance.()
Kensington Palace, Kensington High Street, London. The lion and unicorn statues on pillars at the entrance to Kensington Palace.(), (See "Lion and Unicorn gate" images in Gallery)
Kew Gardens – The lion and unicorn statues over their respective gates into The Royal Botanical Gardens.(Lion Gate-)(Unicorn Gate-), (See "Kew Lion and Unicorn gates" images above)
Kew Gardens, The Medici Vase, from a pair ordered by George IV.
Lancaster Castle, Shire Hall and Crown Court were completed by 1798 by Thomas Harrison (architect). Six Gothic columns support a panelled vault covering the main part of the courtroom. Around the perimeter is an arcade, and the judge's bench has an elaborate canopy in Coade stone.()
Lancaster, Royal Lancaster Infirmary. The hospital by Paley, Austin and Paley is in free Renaissance style, and built in sandstone with slate roofs. It has an octagonal entrance tower that is flanked by wings. The tower has four stages, and above the entrance is a niche containing a Coade stone statue of the Good Samaritan. ()
Lawhitton, Cornwall. The parish church of St Michael includes two monuments, to R. Bennet (d. 1683) and in Coade stone to Richard Bennet-Coffin (d. 1796). ()
Lea Marston, Warwickshire. The Church Saint John the Baptist contains numerous monuments to members of the Adderley family, including one from 1784 made of Coade stone. ()
Lewes, Lewes Crown Court. Located at the highest point of the old town is the Portland stone and Coade stone facade of the Crown Court (1808–12, by John Johnson).()
Lincoln Castle, Coade stone bust of George III, relocated from atop the Dunston Pillar in 1940. ()
Liverpool. George Bullock (sculptor) statue of Horatio Nelson, 1st Viscount Nelson in Coade stone. (Location unclear) ()
Liverpool Town Hall. An 1802 statue by Charles Rossi of Britannia or Minerva stands atop Liverpool Town Hall. The figure wears a Corinthian helmet and holds a spear, which is a common substitute for Britannia's trident (though that is usually carried in her right hand); Minerva, the goddess of wisdom, is commonly depicted with an owl, but as the goddess of strategic warfare a spear would also suit her. Neither Rossi's own list of commissions nor any contemporary Royal Academy list of his works survives, so both Historic England and Pevsner hedge their bets, describing the figure as "Britannia or Minerva".
Lurgan, Northern Ireland. 42-46 High Street. Decorative stonework with Coade stone keys and sculpted heads.() Provenance unclear.
Lyme Regis, Dorset – Eleanor Coade's country home at Belmont House decorated with Coade stone on its façade.(), (See image of Belmont House at Top of this article)
Metropolitan Museum of Art ("The Met") - New York City. Faith, statue in 'overpainted Coade stone', after a model by John Bacon the Elder. 1791.(), (See image at start of this list of 'Examples' above.)
Montreal – Nelson's Column, built 1809. Montreal's pillar is the second-oldest "Nelson's Column" in the world, after the Nelson Monument in Glasgow. The statue and ornaments were shipped in parts to Montreal, arriving in April 1808. William Gilmore, a local stonemason who had contributed £7 towards its construction, was hired to assemble its seventeen parts and the foundation base was laid on 17 August 1809.()
Bank of Montreal. A series of Relief panels based on designs by John Bacon (1740–1799), moulded in Coade stone by Joseph Panzetta and Thomas Dubbin in 1819.()
The Octagon House or the John Tayloe III House in Washington, DC, built 1800 by William Thornton. ()
North Ockendon, Church of St Mary Magdalene, (Havering). A Grade I listed building, The baptismal font and royal arms (made of Coade stone) were both made in 1842. ()
Paço de São Cristóvão (Palace of Saint Christopher), Rio de Janeiro, Brazil. In front of the palace is a decorative Coade stone portico, a gift sent by Hugh Percy, 2nd Duke of Northumberland, inspired by Robert Adam's porch for "Sion House". ()
Pitzhanger Manor House, Ealing, was owned from 1800 to 1810 by the architect Sir John Soane, who radically rebuilt it. It features four Coade stone caryatids atop the columns of the east front, modelled after those that enclose the sanctuary of Pandrosus in Athens. (), (See Caryatid, Pitzhanger Manor in Gallery below)
Plympton, Devon - St Mary's church, monument to W. Seymour (died 1801) in Coade stone. ()
Portman Square, London. About a third of the north side is in the statutory category scheme, Grade I. Nos. 11–15, built in 1773–1776 by the architect James Wyatt in cooperation with his brother Samuel Wyatt, were the first houses in which Coade stone was used. (), (See Portman Square in Gallery below)
Portmeirion, Horatio Nelson, 1st Viscount Nelson,(See "Portmeirion, Lord Nelson section")
Portobello, Edinburgh, Portobello Beach. Three Coade stone columns were erected with Heritage Lottery funds in 2006 in a community garden at 70 Promenade (John Street), Portobello. They had been rescued from the garden of Argyle House, Hope Lane, off Portobello High Street, and taken into Council storage in 1989 when a new extension was built onto the house. ()
Preston Hall, Midlothian, Significant features of the interior include four life-size female figures in the stairway, which are made from Coade stone, a type of ceramic used as an artificial stone. ()
Putney Old Burial Ground. The grave of the 18th-century novelist Harriet Thomson (c. 1719–1787), made of Coade stone. ()
Reading, Berkshire. St Mary's Church, Castle Street. The frontage is rendered in stucco while the capitals of the portico are probably formed of Coade stone. ()
Radcliffe Observatory, Tower of the Winds (Oxford). The reliefs of the signs of the zodiac above the windows on the first floor are made of Coade stone by J. C. F. Rossi. () (See Tower of the Winds in Gallery)
Richmond upon Thames. Two examples of the River God, one outside Ham House, the other in Terrace Gardens. (Ham House-) (Terrace Gardens-), (See image in Coade stone Gallery below.)
Rio de Janeiro Zoo entrance. ()
Roscommon, Ireland, Entrance gate to former Mote Park demesne, The Lion Gate, built 1787, consisting of a Doric triumphal arch surmounted by a lion with screen walls linking it to a pair of identical lodges. ()
Saxham Hall, Suffolk has an Umbrello (shelter) constructed of Coade stone in the grounds (), (See "Saxham Hall, Umbrello" in Gallery below)
Schomberg House at 81–83 Pall Mall, London, was built for Meinhardt Schomberg, 3rd Duke of Schomberg in the late 17th-century. The porch, framed by two Coade stone figures, was added in the late 18th century. Note – The figures that framed the doorway of the original Coade's Gallery, on Pedlar's Acre at the Surrey end of Westminster Bridge Road were made from the same moulds. () (See "Schomberg House" in Gallery below)
Shrewsbury, Shropshire. Lord Hill's Column commemorates General Rowland Hill, 1st Viscount Hill, with a tall statue on a pillar. The statue was modelled in Lithodipyra (Coade stone) by Joseph Panzetta who worked for Eleanor Coade. ()
South Bank Lion at the south end of Westminster Bridge in central London originally stood atop the old Lion Brewery, on the Lambeth bank of the River Thames. The brewery was demolished in 1950, to make way for the South Bank Site of the 1951 Festival of Britain. Just before the demolition King George VI ordered that both lions should be preserved:
- The lion which originally stood over one of the brewery gates is now painted gold and located at the west-gate entrance of Twickenham Stadium, the home of English rugby. (See Twickenham Stadium Lion section below)
- The lion from the roof of the brewery, now known as the "South Bank Lion", was moved to Station Approach, Waterloo, placed on a high plinth, and painted red as the symbol of British Rail. When it was removed, the initials of the sculptor William F. Woodington and the date, 24 May 1837, were discovered under one of its paws. In 1966, it was moved from outside Waterloo station to the south end of Westminster Bridge. (), (See South Bank Lion image at Top of article)
Southwark – Statue of King Alfred the Great, Trinity Church Square. The statue of a king on the stone plinth in the square is Grade II-listed. The provenance is unknown, but it may be either one of eight medieval statues from the north end towers of Westminster Hall (c. late 14th century) or, alternatively, one of a pair representing Alfred the Great and Edward, the Black Prince made for the garden of Carlton House in the 18th century. Analysis in 2021 showed that the top part was of Coade stone but the legs were Roman and of Bath stone.(), (See King Alfred the Great image in Gallery)
St Botolph-without-Bishopsgate Church Hall, London, pair of statues of schoolchildren on the front of this former School House, replicas outside, listed originals now inside the Hall.()
St Mary-at-Lambeth, Garden Museum, London – Captain Bligh's tomb in the churchyard of St Mary's Lambeth.()
Shugborough Hall, Staffordshire. A large country house, remodelled between 1760 and 1770 by "Athenian" Stuart; the giant portico was added to the front in 1794 by Samuel Wyatt. In front of the house is the portico, which has eight columns in wood faced with slate, with capitals in Coade stone. On the south front is another bowed bay.()
St Mary Magdalene's Church, Stapleford, Leicestershire. In the west wall of the gallery is a Coade stone fireplace, above which are the Royal arms on a roundel.()
Stourhead Gardens The 'Temple of Flora' contains a replica of the Borghese Vase modelled in Coade stone dating from 1770 to 1771.
Stowe Gardens, a grade I listed landscape garden in Stowe, Buckinghamshire.()
- 'The Oxford Gates'. The central piers were designed by William Kent in 1731. Pavilions at either end were added in the 1780s to the design of the architect Vincenzo Valdrè. The piers have coats of arms in Coade stone.
- 'The Gothic Cross', erected in 1814 in Coade stone on the path linking the Doric Arch to the Temple of Ancient Virtue. It was erected by the 1st Duke of Buckingham and Chandos as a memorial to his mother, Lady Mary Nugent. It was destroyed in the 1980s by a falling elm tree. The National Trust rebuilt the cross in 2017 using several of the surviving pieces of the monument.
- 'The Cobham Monument' is the tallest structure in the gardens. It incorporates a square plinth with corner buttresses surmounted by Coade stone lions holding shields added in 1778.
- 'The Gothic Umbrello', also called the Conduit House, is a small octagonal pavilion dating from the 1790s. The Coade stone coat of arms of the Marquess of Buckingham, dated 1793, is placed over the entrance door.
Teigngrace Devon. James Templer (1748–1813), the builder of the Stover Canal, is commemorated by a Coade stone monument in Teigngrace church.()
Tong, Shropshire - St Bartholomew's Church. The church's north door served as the "Door of Excommunication". A Coade stone version of the Royal Arms of George III is located above the north door. The monument cost £60 in 1814, and was a present from George Jellicoe to celebrate the Peace of Paris and Napoleon's exile to Elba.()
Towcester Racecourse on the Easton Neston estate – Main Entrance Gate decorated with an array of dogs, urns and vases surmounted by the Fermor arms, signed by William Croggon.(), (See "Towcester racecourse / Easton Neston House" images in Gallery)
Tremadog, Gwynedd, Wales. St Mary's Church Lychgate. Tremadog was founded, planned, named for and built by William Madocks between 1798 and 1811. The Lychgate to the churchyard is spanned by a decorative arch of Coade stone, containing boars, dragons, frogs, grimacing cherubs, owls, shrouded figures and squirrels, while the tops of the towers are surrounded by elephant heads.()
Twickenham Stadium Lion gate. (R.F.U.) The lion was sculpted in Coade stone by William F. Woodington in 1837 and paired with the "South Bank Lion" at the Lion Brewery on the Lambeth bank of the River Thames. It is now located above the central pillar of the Rowland Hill Memorial Gate (Gate 3) at Twickenham Stadium. It was covered with gold leaf prior to the 1991 Rugby World Cup held in England. The Lion brewery was damaged by fire and closed in 1931, and then demolished in 1949 to make way for the Royal Festival Hall. () (See "Twickenham Stadium Lion" image at top of this article)
Twinings' first ever (and still operating) shop's frontispiece, in the Strand, London opposite the Royal Courts of Justice, rediscovered under soot after a century.()
University of Maryland, College Park, United States – The keystone, featuring a carving of the head of Silenus, above the entry to The Rossborough Inn.()
University of East London, Stratford Campus. Statue of William Shakespeare. (See Shakespeare, University of East London image in Gallery)
Weymouth, Dorset. King's Statue, (Weymouth) is a tribute to George III on the seafront.()
Weston Park, in Weston-under-Lizard, Staffordshire.
- Sundial, 1825. The sundial in the grounds of the hall is in Coade stone, and is high. It has a triangular plan with concave sides. At the bottom is a plinth with meander decoration on a circular base, the sides are moulded with festoons at the top, in the angles are caryatids, and at the top is a fluted frieze and an egg-and-dart cornice. ()
- Two urns and planting basin, 1825. The urns and planting basin are in Coade stone, and are to the southwest of the 'Temple of Diana'. The basin has a diameter of , with a cabled rim to the kerb. The urns are on a base, and each has a short stem, and a wide body with guilloché decoration and carvings of lions' heads. ()
Whiteford House, Cornwall. The stables and a garden folly (called Whiteford Temple) survive. The Temple is owned by the Landmark Trust and let as a holiday cottage. There are Coade stone plaques on the exterior.()
Windsor Castle, St George's Chapel. Mrs Coade was commissioned by King George III to make the Gothic screen designed by Henry Emlyn, and possibly also replace part of the ceiling of St George's Chapel. ()
Woodeaton Manor, Oxford. In 1775 John and Elizabeth Weyland had the old manor house demolished and the present Woodeaton Manor built. In 1791 the architect Sir John Soane enhanced its main rooms with marble chimneypieces, added an Ionic porch of Coade stone, a service wing and an ornate main hall.()
Woodhall Park is a Grade I listed country house, Watton-at-Stone, Hertfordshire. Limited use of Coade stone in the park.()
Woolverstone Hall, Ipswich, The house, now a school, is built of Woolpit brick, with Coade stone ornamentation. ()
Park Crescent, Worthing, A triumphal arch. The main archway, designed for carriages, contains the busts of four bearded men as atlantes. The two side arches, designed for pedestrians, each contain the busts of four young ladies as caryatids. The Coade stone busts were supplied by William Croggan, successor to Eleanor Coade.()
Birkbeck Image library
In 2020, the library of Birkbeck, University of London, launched the Coade Stone image collection online, consisting of digitised slides of examples of Coade stone bequeathed by Alison Kelly, whose book Coade Stone was described by Caroline Stanford as "the most authoritative treatment on the subject".
Gallery
Modern replication claims
The recipe and techniques for producing Coade stone are claimed to have been rediscovered by Coade Ltd at its workshops in Wilton, Wiltshire. In 2000, Coade Ltd started producing statues, sculptures and architectural ornaments.
See also
Anthropic rock
Baluster
Cast stone
Pulhamite
Notes
References
Works cited
External links
In 2021 Historic England launched a crowd-sourced Enrich the List map of Coade stone in England.
Google - My Maps
Historic England. Eleanor Coade and Interactive map of Coade stone sites
Anna Keay of the Landmark Trust discussing Mrs Coade and Coade stone
Birkbeck College Collections - Coade Stone
Gallery of images.
Plate 48: A view of Westminster Bridge, 1791. shows King's Arms Stairs in the foreground (possibly) with a sign advertising Coade's factory.
Image of Coade's factory, circa 1800
Plate 38a: Coade's Artificial Stone Manufactory 1801
Plate 39a: The entrance to Coade and Sealy's Gallery of Sculpture, Westminster Bridge, 1802
Coade stone factory, Narrow Wall, Lambeth, London, c1800.
Coade and Sealey's Artificial Stone Factory, by Thomas Hosmer Shepherd
Thomason Cudworth, restorers of Coade stone.
Coade Ltd, current makers and restorers of Coade stone.
Artificial stone
Stoneware
Ceramic materials | Coade stone | [
"Engineering"
] | 7,664 | [
"Ceramic engineering",
"Ceramic materials"
] |
566,231 | https://en.wikipedia.org/wiki/Microexpression | A microexpression is a facial expression that only lasts for a short moment. It is the innate result of a voluntary and an involuntary emotional response occurring simultaneously and conflicting with one another, and occurs when the amygdala responds appropriately to the stimuli that the individual experiences and the individual wishes to conceal this specific emotion. This results in the individual very briefly displaying their true emotions followed by a false emotional reaction.
Human emotions are an unconscious biopsychosocial reaction deriving from the amygdala and typically last 0.5–4.0 seconds, whereas a microexpression will typically last less than half a second. Unlike regular facial expressions, microexpressions are very difficult or virtually impossible to hide. They cannot be controlled because they happen in a fraction of a second, but it is possible to capture someone's expressions with a high-speed camera and replay them at much slower speeds. Microexpressions express the seven universal emotions: disgust, anger, fear, sadness, happiness, contempt, and surprise. Nevertheless, in the 1990s, Paul Ekman expanded his list of emotions, including a range of positive and negative emotions not all of which are encoded in facial muscles. These emotions are amusement, embarrassment, anxiety, guilt, pride, relief, contentment, pleasure, and shame.
History
Microexpressions were first discovered by Haggard and Isaacs. In their 1966 study, Haggard and Isaacs outlined how they discovered these "micromomentary" expressions while "scanning motion picture films of psychotherapy for hours, searching for indications of non-verbal communication between therapist and patient". Through a series of studies, Paul Ekman found a high agreement across members of diverse Western and Eastern literate cultures on selecting emotional labels that fit facial expressions. Expressions he found to be universal included those indicating anger, disgust, fear, happiness, sadness, and surprise. Findings on contempt are less clear, though there is at least some preliminary evidence that this emotion and its expression are universally recognized. Working with his long-time friend Wallace V. Friesen, Ekman demonstrated that the findings extended to preliterate Fore tribesmen in Papua New Guinea, whose members could not have learned the meaning of expressions from exposure to media depictions of emotion. Ekman and Friesen then demonstrated that certain emotions were exhibited with very specific display rules, culture-specific prescriptions about who can show which emotions to whom and when. These display rules could explain how cultural differences may conceal the universal effect of expression.
In the 1960s, William S. Condon pioneered the study of interactions at the fraction-of-a-second level. In his famous research project, he scrutinized a four-and-a-half-second film segment frame by frame, where each frame represented 1/25th second. After studying this film segment for a year and a half, he discerned interactional micromovements, such as the wife moving her shoulder exactly as the husband's hands came up, which combined yielded rhythms at the micro level.
Years after Condon's study, American psychologist John Gottman began video-recording living relationships to study how couples interact. By studying participants' facial expressions, Gottman was able to correlate expressions with which relationships would last and which would not. Gottman's 2002 paper makes no claims to accuracy in terms of binary classification, and is instead a regression analysis of a two factor model where skin conductance levels and oral history narratives encodings are the only two statistically significant variables. Facial expressions using Ekman's encoding scheme were not statistically significant. In Malcolm Gladwell's book Blink, Gottman states that there are four major emotional reactions that are destructive to a marriage: defensiveness which is described as a reaction toward a stimulus as if you were being attacked, stonewalling which is the behavior where a person refuses to communicate or cooperate with another, criticism which is the practice of judging the merits and faults of a person, and contempt which is a general attitude that is a mixture of the primary emotions disgust and anger. Among these four, Gottman considers contempt the most important of them all.
A 2024 study investigated the impact of then-President Donald Trump's televised COVID-19 address on viewers' emotions, with a focus on his micro-expressions. Researchers found that a specific micro-expression of fear in Trump's address was perceived by his supporters as an authentic emotional cue, enhancing their connection to the speech, indicating that subtle nonverbal cues can influence emotional responses based on political alignment.
Types
Microexpressions are typically classified based on how an expression is modified. They exist in three groups:
Simulated expressions: when a microexpression is not accompanied by a genuine emotion. This is the most commonly studied form of microexpression because of its nature. It occurs when there is a brief flash of an expression, and then returns to a neutral state.
Neutralized expressions: when a genuine expression is suppressed and the face remains neutral. This type of micro-expression is not observable due to the successful suppression of it by a person.
Masked expressions: when a genuine expression is completely masked by a falsified expression. Masked expressions are microexpressions that are intended to be hidden, either subconsciously or consciously.
In photographs and films
Microexpressions can be difficult to recognize, but still images and video can make them easier to perceive. In order to learn how to recognize the way that various emotions register across parts of the face, Ekman and Friesen recommend the study of what they call "facial blueprint photographs", photographic studies of "the same person showing all the emotions" under consistent photographic conditions. However, because of their extremely short duration, by definition, microexpressions can happen too quickly to capture with traditional photography. Both Condon and Gottman compiled their seminal research by intensively reviewing film footage. Frame rate manipulation also allows the viewer to distinguish distinct emotions, as well as their stages and progressions, which would otherwise be too subtle to identify. This technique is demonstrated in the short film Thought Moments by Michael Simon Toon, and in the 2016 Malayalam film Pretham. Paul Ekman also has materials he has created on his website that teach people how to identify microexpressions using various photographs, including photos he took during his research period in New Guinea.
Moods vs emotions
Moods differ from emotions in that the feelings involved last over a longer period. For example, a feeling of anger lasting for just a few minutes, or even for an hour, is called an emotion. But if the person remains angry all day, or becomes angry a dozen times during that day, or is angry for days, then it is a mood. Many people describe this as a person being irritable, or that the person is in an angry mood. As Paul Ekman described, it is possible but unlikely for a person in this mood to show a complete anger facial expression. More often just a trace of that angry facial expression may be held over a considerable period: a tightened jaw or tensed lower eyelid, or lip pressed against lip, or brows drawn down and together.
Emotions are defined as a complex pattern of changes, including physiological arousal, feelings, cognitive processes, and behavioral reactions, made in response to a situation perceived to be personally significant.
Controlled microexpressions
Facial expressions are not just uncontrolled instances. Some may in fact be voluntary and others involuntary, and thus some may be truthful and others false or misleading. Facial expression may be controlled or uncontrolled. Some people are born able to control their expressions (such as pathological liars), while others are trained, for example actors. "Natural liars" may be aware of their ability to control microexpressions, and so may those who know them well; they may have been "getting away" with things since childhood due to greater ease in fooling their parents, teachers, and friends. People can simulate emotion expressions, attempting to create the impression that they feel an emotion when they are not experiencing it at all. A person may show an expression that looks like fear when in fact they feel nothing, or perhaps some other emotion. Facial expressions of emotion are controlled for various reasons, whether cultural or by social conventions. For example, in the United States many little boys learn the cultural display rule, "little men do not cry or look afraid". There are also more personal display rules, not learned by most people within a culture, but the product of the idiosyncrasies of a particular family. A child may be taught never to look angrily at his father, or never to show sadness when disappointed. These display rules, whether cultural ones shared by most people or personal, individual ones, are usually so well-learned, and learned so early, that the control of the facial expression they dictate is done automatically without thinking or awareness.
Emotional intelligence
Involuntary facial expressions can be hard to pick up and understand explicitly; recognizing them is more of an implicit competence of the unconscious mind. Daniel Goleman described the capacity of an individual to recognize their own emotions, as well as those of others, and to discriminate among emotions based on introspection of those feelings. This is part of Goleman's concept of emotional intelligence. Within emotional intelligence, attunement is an unconscious synchrony that guides empathy, and it relies heavily on nonverbal communication. Looping is where facial expressions can elicit involuntary behavior: research on motor mimicry shows that neurons which pick up on facial expressions communicate with the motor neurons responsible for the muscles of the face, reproducing the same expression. Thus displaying a smile may elicit a microexpression of a smile in someone who is trying to remain neutral in their expression.
Through fMRI, the area where these mirror neurons are located can be seen to light up when the subject is shown an image of a face expressing an emotion. The prefrontal cortex (the "executive mind") is where cognitive thinking is experienced, while the amygdala, part of the limbic system, is responsible for involuntary functions, habits, and emotions. The amygdala can hijack the prefrontal cortex in a sympathetic response. In his book Emotional Intelligence, Goleman uses the case of Jason Haffizulla (who assaulted his high school physics teacher because of a grade he received on a test) as an example of an emotional hijacking in which rationality and better judgement can be impaired. This is one example of how the lower brain can interpret sensory memory and execute involuntary behavior. This is the purpose of microexpressions in attunement, and how one can interpret an emotion that is shown for only a fraction of a second. The microexpression of a concealed emotion displayed to an individual will elicit the same emotion in them to a degree; this process is referred to as emotional contagion.
MFETT and SFETT
Micro facial expression training tools (MFETT) and subtle facial expression training tools (SFETT) are software designed to develop a person's skill in recognizing emotion. The software consists of a set of videos that are watched after the user has been educated on the facial expressions. After watching a short clip, there is a test of the user's analysis of the video, with immediate feedback. The tool is intended to be used daily to produce improvements. Individuals exposed to the test for the first time usually perform poorly when trying to guess which expression was presented, but the idea is that through reinforcement from the feedback they unconsciously generate the correct expectations for that expression. These tools are used to develop rounder social skills and a better capacity for empathy. They are also quite useful for the development of social skills in people on the autism spectrum. Lie detection is an important skill not only in social situations and in the workplace, but also for law enforcement and other occupations that deal with frequent acts of deception. Microexpression and subtle expression recognition are valuable assets for these occupations as they increase the chance of detecting deception. In recent years it was found that the average person has a 54% accuracy rate in detecting whether a person is lying or being truthful. However, Ekman carried out a research experiment and discovered that Secret Service agents have a 64% accuracy rate. In later years, Ekman found groups of people who were intrigued by this form of detecting deception and had accuracy rates ranging from 68% to 73%. His conclusion was that people with the same training on microexpression and subtle expression recognition will vary depending on their level of emotional intelligence.
Lies and leakage
The sympathetic nervous system is one of the two divisions of the autonomic nervous system; it functions involuntarily, and one aspect of the system deals with emotional arousal in response to situations. Therefore, if an individual decides to deceive someone, they will experience an internal stress response because of the possible consequences if caught. A person using deception will typically exhibit nonverbal cues in the form of bodily movements. These bodily movements occur because of the need to release the buildup of cortisol, which is produced at a higher rate when something is at stake. The purpose of these involuntary nonverbal cues is to ease oneself in a stressful situation. In the midst of deceiving an individual, leakage can occur, which is when nonverbal cues are exhibited that contradict what the individual is conveying. Despite this useful tactic for detecting deception, microexpressions do not show what intentions or thoughts the deceiver is trying to conceal. They only indicate that there was emotional arousal in the context of the situation. If an individual displays fear or surprise in the form of a microexpression, it does not mean that the individual is concealing information relevant to an investigation. This is similar to how polygraphs fail to some degree: an innocent person can show a sympathetic response out of fear of being disbelieved. The same goes for microexpressions: when there is a concealed emotion, no information is revealed about why that emotion was felt. They do not determine a lie, but are a form of detecting concealed information. David Matsumoto is a well-known American psychologist who explains that one must not conclude that someone is lying if a microexpression is detected, only that there is more to the story than is being told. Matsumoto was also the first to publish scientific evidence that microexpressions may be a key to detecting deception.
Despite the prevailing belief among law enforcement and the public that micro-expressions are able to reveal whether a person is being deceitful, there is a lack of empirical evidence to support this claim. Research has shown that there is often a disconnect between displayed emotions and felt emotions; in short, deception does not necessarily produce negative emotions and negative emotions do not necessarily signal deception. In addition, microexpressions do not occur often enough to be useful. In one of the few studies of microexpressions, researchers found that only 2% of emotional expressions coded could be considered microexpressions and they appeared equally for truth-tellers and liars. Other studies have found that liars and truth-tellers exhibit different responses than expected: in a concealed information test, Pentland and colleagues found that liars showed less contempt and more intense smiles than truthful people. This counters the fundamental idea behind microexpressions, which suggests that it is impossible for a liar to conceal their true nature, as evidence of their guilt "leaks" out through these expressions. Taken together, their findings suggest that micro-expressions do not occur frequently enough to be detectable, neither are they consistent enough to distinguish liars from truth-tellers.
Universality
A significant amount of research has been done on whether basic facial expressions are universal or culturally distinct. After Charles Darwin wrote The Expression of the Emotions in Man and Animals, it was widely accepted that facial expressions of emotion are universal and biologically determined. Many writers have disagreed with this statement. David Matsumoto, however, supported it in his study of sighted and blind Olympians. Using thousands of photographs captured at the 2004 Olympic and Paralympic Games, Matsumoto compared the facial expressions of sighted and blind judo athletes, including individuals who were born blind, and found that both groups displayed the same expressions in response to winning and losing. These results suggest that our ability to modify our faces to fit the social setting is not learned visually. Matsumoto also has training tools he has created on his website that teach people how to identify micro and subtle facial expressions of emotion.
In popular culture
Microexpressions and associated science are the central premise for the 2009 television series Lie to Me, based on discoveries of Paul Ekman. The main character uses his acute awareness of microexpressions and other body language clues to determine when someone is lying or hiding something.
They also play a central role in Robert Ludlum's posthumously published The Ambler Warning, in which the central character, Harrison Ambler, is an intelligence agent able to recognize them. Similarly, one of the main characters in Alastair Reynolds' science fiction novel, Absolution Gap, Aura, can easily read microexpressions.
In The Mentalist, the main character, Patrick Jane, can often tell when people are being dishonest. However, specific reference to microexpressions is only made once in the 7th and final season.
In the 2015 science fiction thriller Ex Machina, Ava, an artificially intelligent humanoid, surprises the protagonist, Caleb, in their first meeting, when she tells him "Your microexpressions are telegraphing discomfort."
Controversy
Though the study of microexpressions has gained popularity through popular media, studies show it lacks internal consistency in its conceptual formation.
Maria Hartwig, professor of Psychology at John Jay College of Criminal Justice, argues that it has led to wrongful imprisonment of suspects who were aggressively interrogated due to perceived micro expressions.
A 2016 article in Nature explains that it is possible to mask involuntary expressions with fake expressions, and that in real world situations, over 40% of the time we can not tell the difference.
Judee K. Burgoon argues in a 2018 Frontiers in Psychology opinion that microexpression theory presumes that people feel detectable emotions always connected to the same thoughts or motivations. But what if, for example, people feel happy rather than guilty about deceiving others? Burgoon also cites studies showing that micro-expressions are rare: "In one of the very few investigations of microexpression frequency, Porter and ten Brinke (2008) coded 700 high-stakes genuine and falsified emotional expressions and found only 2% were microexpressions." She also notes that they seldom result in arrests when implemented at places like airports: "testimony to the U.S. Congress revealed that only 0.6% out of 61,000 passenger referrals to law enforcement in 2011 and 2012 resulted in arrests (U.S. Government Accountability Office, 2013)"
See also
Interpersonal deception theory
Nonverbal communication
Microaggression
Facecrime
Silent Talker Lie Detector
Tell (poker)
References
Further reading
External links
Lying Is Exposed By Microexpressions We Can't Control, Science Daily, May 2006
The Naked Face
Facial Expressions Test based on "The Micro Expression Training Tool"
"A Look Tells All" in Scientific American Mind October 2006
Microexpressions Complicate Face Reading, by Medical News Today August 2007
Deception Detection, American Psychological Association
Facial expressions
Emotion
Nonverbal communication | Microexpression | [
"Biology"
] | 4,021 | [
"Emotion",
"Behavior",
"Human behavior"
] |
566,387 | https://en.wikipedia.org/wiki/Piconet | A piconet is an ad hoc network that links a wireless user group of devices using Bluetooth technology protocols. A piconet consists of two or more devices occupying the same physical channel (synchronized to a common clock and hopping sequence). It allows one master device to interconnect with up to seven active slave devices. Up to 255 further slave devices can be inactive, or parked, which the master device can bring into active status at any time, but an active station must go into parked first.
Some examples of piconets include a cell phone connected to a computer, a laptop and a Bluetooth-enabled digital camera, or several PDAs that are connected to each other.
Overview
A group of devices is connected via Bluetooth technology in an ad hoc fashion. A piconet starts with two connected devices and may grow to eight connected devices. Bluetooth communication always designates one of the Bluetooth devices as the main controlling unit, or master unit; the other devices that follow the master are slave units. This allows the Bluetooth system to be non-contention based (no collisions): after a Bluetooth device has been added to the piconet, each device is assigned a specific time period in which to transmit, so its transmissions do not collide or overlap with those of other units operating within the same piconet.
Piconet range varies according to the class of the Bluetooth device. Data transfer rates vary between about 200 and 2100 kilobits per second.
Because the Bluetooth system hops over 79 channels, the probability of interfering with another Bluetooth system is less than 1.5%. This allows several Bluetooth piconets to operate in the same area at the same time with minimal interference.
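As a rough, illustrative check of that figure (a simplifying sketch, not part of the Bluetooth specification), one can treat each piconet as choosing one of the 79 channels uniformly and independently in a given time slot, so two independent piconets collide in a slot with probability of roughly 1/79:

public class PiconetCollisionSketch {
    public static void main(String[] args) {
        // Simplifying assumption: each piconet picks one of 79 channels
        // uniformly and independently in a given time slot.
        int channels = 79;
        double perSlotCollision = 1.0 / channels;
        // Prints roughly 1.27%, consistent with the "less than 1.5%" figure above.
        System.out.printf("Per-slot collision probability: %.2f%%%n", perSlotCollision * 100.0);
    }
}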
See also
Personal area network (PAN)
Scatternet
IEEE 802.15
Further reading
Bluetooth | Piconet | [
"Technology"
] | 373 | [
"Computing stubs",
"Wireless networking",
"Bluetooth",
"Computer network stubs"
] |
566,411 | https://en.wikipedia.org/wiki/Scatternet | A scatternet is a type of ad hoc computer network consisting of two or more piconets. The terms "scatternet" and "piconet" are typically applied to Bluetooth wireless technology.
Description
A piconet is the type of connection that is formed between two or more Bluetooth-enabled devices such as modern cell phones. Bluetooth-enabled devices are "peer units" in that they are able to act as either master or slave. However, when a piconet is formed between two or more devices, one device takes the role of the 'master', and all other devices assume a 'slave' role for synchronization reasons. Piconets have a seven-member address space (3 bits, with zero reserved for broadcast), which limits the maximum size of a piconet to 8 devices, i.e. 1 master and 7 slaves.
A scatternet is a number of interconnected piconets that supports communication between more than 8 devices. Scatternets can be formed when a member of one piconet (either the master or one of the slaves) elects to participate as a slave in a second, separate piconet. The device participating in both piconets can relay data between members of both ad hoc networks. However, the basic Bluetooth protocol does not support this relaying - the host software of each device would need to manage it. Using this approach, it is possible to join together numerous piconets into a large scatternet, and to expand the physical size of the network beyond Bluetooth's limited range.
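As a purely hypothetical illustration of such host-level relaying (the class and stream names below are invented for the example and are not part of any Bluetooth API), a bridge node that participates in two piconets could simply copy application bytes from a connection in one piconet onto a connection in the other:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical sketch: forwards application data between two piconets.
// The streams would come from connections opened by the host software
// (for example RFCOMM links); error handling and framing are omitted.
public class ScatternetBridge {
    private final InputStream fromPiconetA;
    private final OutputStream toPiconetB;

    public ScatternetBridge(InputStream fromPiconetA, OutputStream toPiconetB) {
        this.fromPiconetA = fromPiconetA;
        this.toPiconetB = toPiconetB;
    }

    // Copies bytes received in piconet A onto a link in piconet B until EOF.
    public void relay() throws IOException {
        byte[] buffer = new byte[512];
        int read;
        while ((read = fromPiconetA.read(buffer)) != -1) {
            toPiconetB.write(buffer, 0, read);
            toPiconetB.flush();
        }
    }
}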
Currently there are very few actual implementations of scatternets due to limitations of Bluetooth and the MAC address protocol. However, there is a growing body of research being conducted with the goal of developing algorithms to efficiently form scatternets.
Future applications
Scatternets have the potential to bring the interconnectivity of the Internet to the physical world through wireless devices. A number of companies have attempted to launch social networking and dating services that leverage early scatternet implementations (see Bluedating). Scatternets can also be used to enable ad hoc communication and interaction between autonomous robots and other devices.
Research
Several papers exist that propose algorithms for scatternet formation, and many different approaches have been simulated in both academic and corporate R&D environments. Some early experiments with large scatternets can be found at ETH Zurich in the BTnode project.
In 2008, a student at University College Cork, Ireland, developed a scatternet-based application in the Java programming language, using the JSR-82 library. This application's main purpose is to facilitate parallel computations over Bluetooth scatternets, using an MPI-style message passing paradigm. Although it only runs on the emulation environment provided by Sun's Wireless Toolkit, it is capable of creating a scatternet of up to 15 devices and routing a message through the network.
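For readers unfamiliar with JSR-82, a minimal device-inquiry sketch using the standard javax.bluetooth classes looks roughly like the following. This is only an illustration of the public API, not the application described above, and assumes a JSR-82 capable runtime such as the emulator mentioned; discovery is asynchronous, so results arrive via the DiscoveryListener callbacks.

import javax.bluetooth.DeviceClass;
import javax.bluetooth.DiscoveryAgent;
import javax.bluetooth.DiscoveryListener;
import javax.bluetooth.LocalDevice;
import javax.bluetooth.RemoteDevice;
import javax.bluetooth.ServiceRecord;

public class InquirySketch implements DiscoveryListener {
    public void deviceDiscovered(RemoteDevice device, DeviceClass cod) {
        System.out.println("Found device: " + device.getBluetoothAddress());
    }
    public void inquiryCompleted(int discType) {
        System.out.println("Inquiry completed");
    }
    public void servicesDiscovered(int transID, ServiceRecord[] records) { }
    public void serviceSearchCompleted(int transID, int respCode) { }

    public static void main(String[] args) throws Exception {
        LocalDevice local = LocalDevice.getLocalDevice();
        DiscoveryAgent agent = local.getDiscoveryAgent();
        // GIAC = General/Unlimited Inquiry Access Code; the call returns immediately.
        agent.startInquiry(DiscoveryAgent.GIAC, new InquirySketch());
        Thread.sleep(15000); // crude wait for the asynchronous callbacks to arrive
    }
}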
In 2006, a student at the University of Technology, Iraq, developed an on-demand peer-to-peer scatternet routing algorithm and protocol, with a Java ME application based on the JSR-82 library. This application was tested successfully on several real-life Java-enabled mobile phones and is capable of building large scatternets, but it is only practical when routes are fewer than 3 nodes long, due to Bluetooth's limited speed.
See also
Personal area network
IEEE 802.15
Mesh networking
References
Bluetooth | Scatternet | [
"Technology"
] | 704 | [
"Wireless networking",
"Bluetooth"
] |
566,520 | https://en.wikipedia.org/wiki/Catalog%20of%20Nearby%20Habitable%20Systems | The Catalog of Nearby Habitable Systems (HabCat) is a catalogue of star systems which conceivably have habitable planets. The list was developed by scientists Jill Tarter and Margaret Turnbull under the auspices of Project Phoenix, a part of SETI.
The list was based upon the Hipparcos Catalogue (which has 118,218 stars) by filtering on a wide range of star system features. The current list contains 17,129 "HabStars".
External links
Target Selection for SETI: 1. A Catalog of Nearby Habitable Stellar Systems, Turnbull, Tarter, submitted 31 Oct 2002 (last accessed 19 Jan 2010)
Target selection for SETI. II. Tycho-2 dwarfs, old open clusters, and the nearest 100 stars , by Turnbull and Tarter, (last accessed 19 Jan 2010)
HabStars - an article on the NASA website
Astronomical catalogues of stars
Search for extraterrestrial intelligence
Exoplanet catalogues | Catalog of Nearby Habitable Systems | [
"Astronomy"
] | 199 | [
"Astronomical catalogue stubs",
"Astronomy stubs",
"Astrobiology stubs"
] |
566,614 | https://en.wikipedia.org/wiki/Sleep%20cycle | The sleep cycle is an oscillation between the slow-wave and REM (paradoxical) phases of sleep. It is sometimes called the ultradian sleep cycle, sleep–dream cycle, or REM-NREM cycle, to distinguish it from the circadian alternation between sleep and wakefulness. In humans, this cycle takes 70 to 110 minutes (90 ± 20 minutes). Within the sleep of adults and infants there are cyclic fluctuations between quiet and active sleep. These fluctuations may persist during wakefulness as rest-activity cycles but are less easily discerned.
Characteristics
Electroencephalography shows the timing of sleep cycles by virtue of the marked distinction in brainwaves manifested during REM and non-REM sleep. Delta wave activity, correlating with slow-wave (deep) sleep, in particular shows regular oscillations throughout a good night's sleep. Secretions of various hormones, including renin, growth hormone, and prolactin, correlate positively with delta-wave activity, while secretion of thyroid-stimulating hormone correlates inversely. Heart rate variability, well known to increase during REM, predictably also correlates inversely with delta-wave oscillations over the ~90-minute cycle.
In order to determine in which stage of sleep the asleep subject is, electroencephalography is combined with other devices used for this differentiation. EMG (electromyography) is a crucial method to distinguish between sleep phases: for example, a decrease of muscle tone is in general a characteristic of the transition from wake to sleep, and during REM sleep, there is a state of muscle atonia (paralysis), resulting in an absence of signals in the EMG.
EOG (electrooculography), the measure of the eyes’ movement, is the third method used in the sleep architecture measurement; for example, REM sleep, as the name indicates, is characterized by a rapid eye movement pattern, visible thanks to the EOG.
Moreover, methods based on cardiorespiratory parameters are also effective in the analysis of sleep architecture, provided they are combined with the other aforementioned measurements (electroencephalography, electrooculography and electromyography).
Homeostatic functions, especially thermoregulation, occur normally during non-REM sleep, but not during REM sleep. Thus, during REM sleep, body temperature tends to drift away from its mean level, and during non-REM sleep, to return to normal. Alternation between the stages therefore maintains body temperature within an acceptable range.
In humans, the transition between non-REM and REM is abrupt; in other animals, it is less so.
Researchers have proposed different models to elucidate the undoubtedly complex rhythm of electrochemical processes that result in the regular alternation of REM and NREM sleep. Monoamines are active during NREMS, but not REMS, whereas acetylcholine is more active during REMS. The reciprocal interaction model proposed in the 1970s suggested a cyclic give-and-take between these two systems. More recent theories such as the "flip-flop" model, proposed in the 2000s, include the regulatory role of an inhibitory neurotransmitter gamma-aminobutyric acid (GABA).
Length
The standard figure given for the average length of the sleep cycle in an adult man is 90 minutes. N1 (NREM stage 1) is the transition from drowsiness or wakefulness to falling asleep; brain waves and muscle activity start to decrease at this stage. N2 is when the person experiences light sleep: eye movement has stopped by this time, brain-wave frequency and muscle tonus are decreased, and the heart rate and body temperature also go down. N3 (or even N4) is the stage from which it is most difficult to be awakened; every part of the body is now relaxed, and breathing, blood pressure and body temperature are reduced. The National Sleep Foundation discusses the different stages of NREM sleep and their importance. It describes REM sleep as "A unique state, in which dreams usually occur. The brain is awake and body paralyzed." This unique stage usually occurs when the person is dreaming. The figure of 90 minutes for the average length of a sleep cycle was popularized by Nathaniel Kleitman around 1963. Other sources give 90–110 minutes or 80–120 minutes.
In infants, the sleep cycle lasts about 50–60 minutes; the average length increases as the human grows into adulthood. In cats, the sleep cycle lasts about 30 minutes, though it is about 12 minutes in rats and up to 120 minutes in elephants. (In this regard, the ontogeny of the sleep cycle appears proportionate with metabolic processes, which vary in proportion with organism size; however, shorter sleep cycles detected in some elephants complicate this theory.)
The cycle can be defined as lasting from the end of one REM period to the end of the next, or from the beginning of REM, or from the beginning of non-REM stage 2 (the decision of how to mark the periods makes a difference for research purposes, because of the unavoidable inclusion or exclusion of the night's first NREM or its final REM phase if directly preceding awakening).
A 7–8-hour sleep probably includes five cycles, the middle two of which tend to be longer than the first and the fourth. REM takes up more of the cycle as the night goes on.
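As an informal arithmetic check (not taken from the cited sources), five cycles of the commonly quoted 90-minute average span 450 minutes, i.e. 7.5 hours, which is why a 7–8-hour night maps onto roughly five cycles:

public class SleepCycleArithmetic {
    public static void main(String[] args) {
        int cycleMinutes = 90;   // commonly quoted average cycle length
        int cycles = 5;
        double hours = cycles * cycleMinutes / 60.0;
        // Prints: 5 cycles x 90 min = 7.5 hours
        System.out.println(cycles + " cycles x " + cycleMinutes + " min = " + hours + " hours");
    }
}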
Awakening
Unprovoked awakening occurs most commonly during or after a period of REM sleep, as body temperature is rising.
Continuation during wakefulness
Ernest Hartmann discovered in 1968 that humans seem to continue a roughly 90-minute ultradian rhythm throughout a 24-hour day, whether they are asleep or awake. According to this hypothesis, during the period of this cycle corresponding with REM, people tend to daydream more and show less muscle tone. Kleitman and others following have referred to this rhythm as the basic rest–activity cycle, of which the "sleep cycle" would be a manifestation. A difficulty for this theory is the fact that a long non-REM phase almost always precedes REM, regardless of when in the cycle a person falls asleep.
Alteration
The sleep cycle has proven resistant to systematic alteration by drugs. Although some drugs shorten REM periods, they do not abolish the underlying rhythm. Deliberate REM deprivation shortens the cycle temporarily, as the brain moves into REM sleep more readily (the "REM rebound") in an apparent correction for the deprivation. There are various methods to control the alterations of sleep cycles:
Switching off all artificial lights: Since the natural production of melatonin can be suppressed by bright light, exposure to light–even after sunset–may prevent the body from feeling sleepy (and hence entering the sleep phase).
Meditation and relaxation techniques
Staying away from caffeine before bedtime: This ensures that the body is not under the stimulant effects of caffeine while trying to sleep.
See also
Biphasic and polyphasic sleep
Circadian rhythm
References
Bibliography
Mallick, B. N.; S. R. Pandi-Perumal; Robert W. McCarley; and Adrian R. Morrison (2011). Rapid Eye Movement Sleep: Regulation and Function. Cambridge University Press.
Nir, Y., and Tononi, G. "Dreaming and the Brain: from Phenomenology to Neurophysiology." Trends in Cognitive Sciences, vol. 14, no. 2, 2010, pp. 88–100.
Varela, F., Engel, J., Wallace, B., & Thupten Jinpa. (1997). Sleeping, dreaming, and dying: An exploration of consciousness with the Dalai Lama.
Sleep physiology
Chronobiology | Sleep cycle | [
"Biology"
] | 1,603 | [
"Behavior",
"Sleep physiology",
"Sleep",
"Chronobiology"
] |
566,643 | https://en.wikipedia.org/wiki/Gyromitrin | Gyromitrin is a toxin and carcinogen present in several members of the fungal genus Gyromitra, like G. esculenta. Its formula is C4H8N2O. It is unstable and is easily hydrolyzed to the toxic compound monomethylhydrazine (MMH). Monomethylhydrazine acts on the central nervous system and interferes with the normal use and function of vitamin B6. Poisoning results in nausea, stomach cramps, and diarrhea, while severe poisoning can result in convulsions, jaundice, or even coma or death. Exposure to monomethylhydrazine has been shown to be carcinogenic in small mammals.
History
Poisonings related to consumption of the false morel Gyromitra esculenta, a highly regarded fungus eaten mainly in Finland and by some in parts of Europe and North America, had been reported for at least a hundred years. Experts speculated the reaction was more of an allergic one specific to the consumer, or a misidentification, rather than innate toxicity of the fungus, due to the wide range in effects seen. Some would suffer severely or perish while others exhibited no symptoms after eating similar amounts of mushrooms from the same dish. Yet others would be poisoned after previously eating the fungus for many years without ill-effects. In 1885, Böhm and Külz described helvellic acid, an oily substance they believed to be responsible for the toxicity of the fungus. The identity of the toxic constituents of Gyromitra species eluded researchers until 1968, when N-methyl-N-formylhydrazone was isolated by German scientists List and Luft and named gyromitrin. Each kilogram of fresh false morel had between 1.2 and 1.6 grams of the compound.
Mechanism of toxicity
Gyromitrin is a volatile, water-soluble hydrazine compound that can be hydrolyzed in the body into monomethylhydrazine (MMH) through the intermediate N-methyl-N-formylhydrazine.
Other N-methyl-N-formylhydrazone derivatives have been isolated in subsequent research, although they are present in smaller amounts. These other compounds would also produce monomethylhydrazine when hydrolyzed, although it remains unclear how much each contributes to the false morel's toxicity.
The toxins react with pyridoxal 5-phosphate—the activated form of pyridoxine—and form a hydrazone. This reduces production of the neurotransmitter GABA via decreased activity of glutamic acid decarboxylase, which gives rise to the neurological symptoms. MMH also causes oxidative stress leading to methemoglobinemia. Additionally during the metabolism of MMH, N-methyl-N-formylhydrazine is produced; this then undergoes cytochrome P450 regulated oxidative metabolism which via reactive nitrosamide intermediates leads to formation of methyl radicals which lead to liver necrosis. Inhibition of diamine oxidase (histaminase) elevates histamine levels, resulting in headaches, nausea, vomiting, and abdominal pain. Giving pyridoxine to rats poisoned with gyromitrin inhibited seizures, but did not prevent liver damage.
The toxicity of gyromitrin varies greatly according to the animal species being tested. Tests of administering gyromitrin to mice to observe the correlation between the formation of MMH and stomach pH have been performed. Higher levels of formed MMH were observed in the stomachs of the mice than were observed in control tests under less acidic conditions. The conclusions drawn were that the formation of MMH in a stomach is likely a result of acid hydrolysis of gyromitrin rather than enzymatic metabolism. Based on this animal experimentation, it is reasonable to infer that a more acidic stomach environment would transform more gyromitrin into MMH, independent of the species in which the reaction is occurring.
The median lethal dose (LD50) is 244 mg/kg in mice, 50–70 mg/kg in rabbits, and 30–50 mg/kg in humans. The toxicity is largely due to the MMH that is created; about 35% of ingested gyromitrin is transformed to MMH. Based on this conversion, the LD50 of MMH in humans has been estimated to be 1.6–4.8 mg/kg in children, and 4.8–8 mg/kg in adults.
Occurrence and removal
Several Gyromitra species are traditionally considered very good edibles and several steps are available to remove gyromitrin from these mushrooms and allow their consumption. For North America, the toxin has been reliably reported from the species G. esculenta, G. gigas, and G. fastigiata. Species in which gyromitrin's presence is suspected, but not proven, include G. californica, G. caroliniana, G. korfii, and G. sphaerospora, in addition to Disciotis venosa and Sarcosphaera coronaria. The possible presence of the toxin renders these species "suspected, dangerous, or not recommended" for consumption.
Gyromitrin content can differ greatly in different populations of the same species. For example, G. esculenta collected from Europe is "almost uniformly toxic", compared to rarer reports of toxicity from specimens collected from the US west of the Rocky Mountains. A 1985 study reported that the stems of G. esculenta contained twice as much gyromitrin as the cap, and that mushrooms collected at higher altitudes contained less of the toxin than those collected at lower altitudes.
The gyromitrin content in false morels has been reported to be in the range of 40–732 milligrams of gyromitrin per kilogram of mushrooms (wet weight). Gyromitrin is volatile and water soluble, and can be mostly removed from the mushrooms by cutting them to small pieces and repeatedly boiling them in copious amounts of water under good ventilation. Prolonged periods of air drying also reduces levels of the toxin. In the US, there are typically between 30 and 100 cases of gyromitrin poisoning requiring medical attention. The mortality rate for cases worldwide is about 10%.
Detection
The early methods developed for the determination of gyromitrin concentration in mushroom tissue were based on thin-layer chromatography and spectrofluorometry, or the electrochemical oxidation of hydrazine. These methods require large amounts of sample, are labor-intensive and unspecific. A 2006 study reported an analytical method based on gas chromatography-mass spectrometry with detection levels at the parts per billion level. The method, which involves acid hydrolysis of gyromitrin followed by derivatization with pentafluorobenzoyl chloride, has a minimum detectable concentration equivalent to 0.3 microgram of gyromitrin per gram of dry matter.
Identification
When foraging for mushrooms in the wild, it is important to be cautious of ones that may not be safe to eat. Morel mushrooms are highly sought after; however, they can easily be confused with Gyromitra esculenta, also known as “false morels”. There are a few differing characteristics between the two species that can be used to avoid accidental poisoning. The cap of a real morel mushroom attaches directly to the stem, while the cap of a false morel grows around the stem. Real morel mushrooms are also hollow from top to bottom when cut in half, which varies from the filled nature of false morels. Finally, based on outward appearance, real morels are rather uniformly shaped and covered in pits that seem to fall inwards, whereas false morels are often considered more irregularly shaped with wavy ridges that seem to form outwards.
Poisoning
Symptoms
The symptoms of poisoning are typically gastrointestinal and neurological. Symptoms occur within 6–12 hours of consumption, although cases of more severe poisoning may present sooner—as little as 2 hours after ingestion. Initial symptoms are gastrointestinal, with sudden onset of nausea, vomiting, and watery diarrhea which may be bloodstained. Dehydration may develop if the vomiting or diarrhea is severe. Dizziness, lethargy, vertigo, tremor, ataxia, nystagmus, and headaches develop soon after; fever often occurs, a distinctive feature which does not develop after poisoning by other types of mushrooms. In most cases of poisoning, symptoms do not progress from these initial symptoms, and patients recover after 2–6 days of illness.
In some cases there may be an asymptomatic phase following the initial symptoms which is then followed by more significant toxicity including kidney damage, liver damage, and neurological dysfunction including seizures and coma. These signs usually develop within 1–3 days in serious cases. The patient develops jaundice and the liver and spleen become enlarged, in some cases blood sugar levels will rise (hyperglycemia) and then fall (hypoglycemia) and liver toxicity is seen. Additionally, intravascular hemolysis causes destruction of red blood cells resulting in increases in free hemoglobin and hemoglobinuria, which can lead to kidney toxicity or kidney failure. Methemoglobinemia may also occur in some cases. This is where higher than normal levels of methemoglobin—a form of hemoglobin that can not carry oxygen—are found in the blood. It causes the patient to become short of breath and cyanotic. Cases of severe poisoning may progress to a terminal neurological phase, with delirium, muscle fasciculations and seizures, and mydriasis progressing to coma, circulatory collapse, and respiratory arrest. Death may occur from five to seven days after consumption.
Toxic effects from gyromitrin may also be accumulated from sub-acute and chronic exposure due to "professional handling"; symptoms include pharyngitis, bronchitis, and keratitis.
Treatment
Treatment is mainly supportive; gastric decontamination with activated charcoal may be beneficial if medical attention is sought within a few hours of consumption. However, symptoms often take longer than this to develop, and patients do not usually present for treatment until many hours after ingestion, thus limiting its effectiveness. Patients with severe vomiting or diarrhea can be rehydrated with intravenous fluids. Monitoring of biochemical parameters such as methemoglobin levels, electrolytes, liver and kidney function, urinalysis, and complete blood count is undertaken and any abnormalities are corrected. Dialysis can be used if kidney function is impaired or the kidneys are failing. Hemolysis may require a blood transfusion to replace the lost red blood cells, while methemoglobinemia is treated with intravenous methylene blue.
Pyridoxine, also known as vitamin B6, can be used to counteract the inhibition by MMH on the pyridoxine-dependent step in the synthesis of the neurotransmitter GABA. Thus GABA synthesis can continue and symptoms are relieved. Pyridoxine, which is only useful for the neurological symptoms and does not decrease hepatic toxicity, is given at a dose of 25 mg/kg; this can be repeated up to a maximum total of 15 to 30 g daily if symptoms do not improve. Benzodiazepines are given to control seizures; as they also modulate GABA receptors they may potentially increase the effect of pyridoxine. Additionally MMH inhibits the chemical transformation of folic acid into its active form, folinic acid, this can be treated by folinic acid given at 20–200 mg daily.
Toxicity controversy
Due to variances seen in the effects of consumption of the Gyromitra esculenta, there is some controversy surrounding its toxicity. Historically, there was some confusion over what was causing the symptoms to form after consuming the mushrooms. Over time, there were poisonings across Europe due to the consumption of Gyromitra mushrooms; however, the toxin causing the poisonings was unknown at that time. In 1793, mushroom poisonings that occurred in France were attributed to Morchella pleopus, and in 1885, the poisonings were said to be caused by “helvellic acid”. The identity of the toxin found in Gyromitra was not known until List and Luft of Germany were able to isolate and identify the structure of gyromitrin from these mushrooms in 1968.
Gyromitrin may not be considered especially toxic, which may lead people to underestimate its poisonous qualities. In Poland, from 1953 to 1962, there were 138 documented poisonings, only two of which were fatal. Of 706 calls to the Swedish poison center regarding Gyromitra mushrooms between 1994 and 2002, there were no fatalities. In the United States from 2001 to 2011, 448 calls to poison centers involved gyromitrin. The North American Mycological Association (NAMA) reported on 27 cases over 30 years, none of which were fatal. Although poisonings due to gyromitrin are not often fatal, it is still highly toxic to the liver. Of those 27 analyzed cases, nine developed liver injury; there were also three instances of acute kidney injury. As gyromitrin is not especially stable, most poisonings apparently occur from the consumption of the raw or insufficiently cooked "false morel" mushrooms.
There are also possibly several strains of Gyromitra esculenta that vary from region to region and have differing levels of the toxin. For example, there is a less toxic variety that grows west of the Rockies in North America. The toxin may also diminish as the seasons change, as most exposures occur in the Spring. This may help explain some conflicting reports on whether the fungus is edible or not.
Carcinogenicity
Monomethylhydrazine, as well as its precursors methylformylhydrazine and gyromitrin and raw Gyromitra esculenta, have been shown to be carcinogenic in experimental animals. Although Gyromitra esculenta has not been observed to cause cancer in humans, it is possible there is a carcinogenic risk for people who ingest these types of mushrooms. Even small amounts may have a carcinogenic effect. At least 11 different hydrazines have been isolated from Gyromitra esculenta, and it is not known if the potential carcinogens can be completely removed by parboiling.
References
Books cited
External links
"Gyromitra esculenta, one of the false morels"
Official Finnish instructions for the processing of false morels
Mycotoxins
Hydrazides
Hydrazones
Formamides | Gyromitrin | [
"Chemistry"
] | 3,092 | [
"Hydrazones",
"Functional groups"
] |
566,680 | https://en.wikipedia.org/wiki/Artificial%20Intelligence%3A%20A%20Modern%20Approach | Artificial Intelligence: A Modern Approach (AIMA) is a university textbook on artificial intelligence (AI), written by Stuart J. Russell and Peter Norvig. It was first published in 1995, and the fourth edition of the book was released on 28 April 2020.
AIMA has been called "the most popular artificial intelligence textbook in the world", and is considered the standard text in the field of AI. As of 2023, it was being used at over 1500 universities worldwide, and it has over 59,000 citations on Google Scholar.
AIMA is intended for an undergraduate audience but can also be used for graduate-level studies with the suggestion of adding some of the primary sources listed in the extensive bibliography.
Content
AIMA gives detailed information about the working of algorithms in AI. The book's chapters span from classical AI topics like searching algorithms and first-order logic, propositional logic and probabilistic reasoning to advanced topics such as multi-agent systems, constraint satisfaction problems, optimization problems, artificial neural networks, deep learning, reinforcement learning, and computer vision.
Code
The authors provide a GitHub repository with implementations of various exercises and algorithms from the book in different programming languages. Programs in the book are presented in pseudo code with implementations in Java, Python, Lisp, JavaScript, and Scala available online.
Editions
The first and last editions of AIMA were published in 1995 and 2020, respectively, with four editions published in total (1995, 2003, 2009, 2020).
The following is a list of the US print editions. For other editions, the publishing date and the colors of the cover can vary.
1st edition: published in 1995 with red cover
2nd edition: published in 2003 with green cover
3rd edition: published in 2009 with blue cover
4th edition: published in 2020 with purple cover
Various editions have been translated from the original English into several languages, including at least Chinese, French, German, Hungarian, Italian, Romanian, Russian, and Serbian. However, the latest (4th) edition is available only in English, French, and Croatian.
References
External links
1995 non-fiction books
2003 non-fiction books
2009 non-fiction books
2020 non-fiction books
Artificial intelligence textbooks
Cognitive science literature
English-language non-fiction books
Robotics books
Prentice Hall books | Artificial Intelligence: A Modern Approach | [
"Technology"
] | 457 | [
"Computing stubs",
"Computer book stubs"
] |
566,866 | https://en.wikipedia.org/wiki/880%20%28number%29 | 880 (eight hundred [and] eighty) is the natural number following 879 and preceding 881.
Characteristics
In mathematics and sound
It is the number of essentially different (up to rotations and reflections) 4-by-4 magic squares.
It is also the triple factorial of 11: 11!!! = 11 × 8 × 5 × 2 = 880.
880 is the frequency in hertz of the musical note A5.
Other
880 is also:
The code for international direct dialing phone calls to Bangladesh
The year 880 BC or AD 880.
Interstate 880, several Interstate highways in the United States.
Dodge Custom 880, an automobile manufactured from 1962 to 1965.
References
Integers | 880 (number) | [
"Mathematics"
] | 117 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
566,869 | https://en.wikipedia.org/wiki/Quasi-arithmetic%20mean | In mathematics and statistics, the quasi-arithmetic mean or generalised f-mean or Kolmogorov-Nagumo-de Finetti mean is one generalisation of the more familiar means such as the arithmetic mean and the geometric mean, using a function . It is also called Kolmogorov mean after Soviet mathematician Andrey Kolmogorov. It is a broader generalization than the regular generalized mean.
Definition
If f is a function which maps an interval I of the real line to the real numbers, and is both continuous and injective, the f-mean of n numbers x1, ..., xn in I
is defined as Mf(x1, ..., xn) = f^(−1)((f(x1) + ... + f(xn)) / n); that is, f is applied to each number, the results are averaged arithmetically, and the inverse function f^(−1) is then applied to that average.
We require f to be injective in order for the inverse function f^(−1) to exist. Since f is defined over an interval, (f(x1) + ... + f(xn)) / n lies within the domain of f^(−1).
Since f is injective and continuous, it follows that f is a strictly monotonic function, and therefore that the f-mean is neither larger than the largest number of the tuple x1, ..., xn nor smaller than the smallest number in it.
Examples
If I is the real line and f(x) = x (or indeed any linear function x → a·x + b with a not equal to 0), then the f-mean corresponds to the arithmetic mean.
If I is the positive real numbers and f(x) = log(x), then the f-mean corresponds to the geometric mean. According to the f-mean properties, the result does not depend on the base of the logarithm as long as it is positive and not 1.
If I is the positive real numbers and f(x) = 1/x, then the f-mean corresponds to the harmonic mean.
If I is the positive real numbers and f(x) = x^p (with p a nonzero real number), then the f-mean corresponds to the power mean with exponent p.
If I is the real line and f(x) = exp(x), then the f-mean is the mean in the log semiring, which is a constant shifted version of the LogSumExp (LSE) function (which is the logarithmic sum): Mf(x1, ..., xn) = LSE(x1, ..., xn) − log(n). The −log(n) corresponds to dividing by n, since logarithmic division is linear subtraction. The LogSumExp function is a smooth maximum: a smooth approximation to the maximum function.
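To make the definition concrete, here is a short Python sketch (an illustration added to this section, not part of the original article; the function name f_mean is made up) that computes the f-mean for a supplied f and its inverse, reproducing the means listed above.

import math

def f_mean(xs, f, f_inv):
    # Quasi-arithmetic mean: apply f to each value, average arithmetically, then invert.
    return f_inv(sum(f(x) for x in xs) / len(xs))

xs = [1.0, 4.0, 16.0]
print(f_mean(xs, lambda x: x, lambda y: y))              # arithmetic mean: 7.0
print(f_mean(xs, math.log, math.exp))                    # geometric mean: 4.0
print(f_mean(xs, lambda x: 1.0 / x, lambda y: 1.0 / y))  # harmonic mean: about 2.286
print(f_mean(xs, lambda x: x ** 2, math.sqrt))           # power mean with exponent 2

Any continuous, injective f can be supplied; choosing f(x) = exp(x) with inverse log gives the LogSumExp-style mean mentioned above.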
Properties
The following properties hold for Mf for any single function f:
Symmetry: The value of Mf is unchanged if its arguments are permuted.
Idempotency: for all x, Mf(x, ..., x) = x.
Monotonicity: Mf is monotonic in each of its arguments (since f is monotonic).
Continuity: Mf is continuous in each of its arguments (since f is continuous).
Replacement: Subsets of elements can be averaged a priori, without altering the mean, given that the multiplicity of elements is maintained. With m = Mf(x1, ..., xk) it holds: Mf(x1, ..., xk, xk+1, ..., xn) = Mf(m, ..., m, xk+1, ..., xn), where m appears k times.
Partitioning: The computation of the mean can be split into computations of equal sized sub-blocks: Mf(x1, ..., xnk) = Mf(Mf(x1, ..., xk), Mf(xk+1, ..., x2k), ..., Mf(x(n−1)k+1, ..., xnk)).
Self-distributivity: For any quasi-arithmetic mean M of two variables: M(x, M(y, z)) = M(M(x, y), M(x, z)).
Mediality: For any quasi-arithmetic mean M of two variables: M(M(x, y), M(z, w)) = M(M(x, z), M(y, w)).
Balancing: For any quasi-arithmetic mean M of two variables: M(M(x, M(x, y)), M(y, M(x, y))) = M(x, y).
Central limit theorem: Under regularity conditions, for a sufficiently large sample, the sample quasi-arithmetic mean Mf(X1, ..., Xn) is approximately normally distributed.
A similar result is available for Bajraktarević means and deviation means, which are generalizations of quasi-arithmetic means.
Scale-invariance: The quasi-arithmetic mean is invariant with respect to offsets and scaling of f: for all a and all b ≠ 0, the function a + b·f yields the same mean as f.
Characterization
There are several different sets of properties that characterize the quasi-arithmetic mean (i.e., each function that satisfies these properties is an f-mean for some function f).
Mediality is essentially sufficient to characterize quasi-arithmetic means.
Self-distributivity is essentially sufficient to characterize quasi-arithmetic means.
Replacement: Kolmogorov proved that the five properties of symmetry, fixed-point, monotonicity, continuity, and replacement fully characterize the quasi-arithmetic means.
Continuity is superfluous in the characterization of two variables quasi-arithmetic means. See [10] for the details.
Balancing: An interesting problem is whether this condition (together with symmetry, fixed-point, monotonicity and continuity properties) implies that the mean is quasi-arithmetic. Georg Aumann showed in the 1930s that the answer is no in general, but that if one additionally assumes the mean to be an analytic function then the answer is positive.
Homogeneity
Means are usually homogeneous, but for most functions f, the f-mean is not.
Indeed, the only homogeneous quasi-arithmetic means are the power means (including the geometric mean); see Hardy–Littlewood–Pólya, page 68.
The homogeneity property can be achieved by normalizing the input values by some (homogeneous) mean C.
However this modification may violate monotonicity and the partitioning property of the mean.
Generalizations
Consider a Legendre-type strictly convex function F. Then the gradient map ∇F is globally invertible and the weighted multivariate quasi-arithmetic mean is defined by
M∇F(θ1, ..., θn; w) = (∇F)^(−1)(w1 ∇F(θ1) + ... + wn ∇F(θn)), where w is a normalized weight vector (wi = 1/n by default for a balanced average). From the convex duality, we get a dual quasi-arithmetic mean associated to the quasi-arithmetic mean M∇F.
For example, take F(X) = −log det(X) for X a symmetric positive-definite matrix.
The pair of matrix quasi-arithmetic means yields the matrix harmonic mean: M(θ1, θ2) = 2 (θ1^(−1) + θ2^(−1))^(−1).
See also
Generalized mean
Jensen's inequality
References
Andrey Kolmogorov (1930) "On the Notion of Mean", in "Mathematics and Mechanics" (Kluwer 1991) — pp. 144–146.
Andrey Kolmogorov (1930) Sur la notion de la moyenne. Atti Accad. Naz. Lincei 12, pp. 388–391.
John Bibby (1974) "Axiomatisations of the average and a further generalisation of monotonic sequences," Glasgow Mathematical Journal, vol. 15, pp. 63–65.
Hardy, G. H.; Littlewood, J. E.; Pólya, G. (1952) Inequalities. 2nd ed. Cambridge Univ. Press, Cambridge, 1952.
B. De Finetti, "Sul concetto di media", vol. 3, pp. 369–96, 1931, Istituto Italiano degli Attuari.
Means
[10] Burai, P.; Kiss, G.; Szokol, P. (2021). "Characterization of quasi-arithmetic means without regularity condition." Acta Math. Hungar. 165, no. 2, 474–485. MR4355191.
[11] Burai, Pál; Kiss, Gergely; Szokol, Patricia (2023). "A dichotomy result for strictly increasing bisymmetric maps." J. Math. Anal. Appl. 526, no. 2, Paper No. 127269, 9 pp. MR4574540.
"Physics",
"Mathematics"
] | 1,359 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
566,959 | https://en.wikipedia.org/wiki/Standard%20electrode%20potential | In electrochemistry, standard electrode potential , or , is a measure of the reducing power of any element or compound. The IUPAC "Gold Book" defines it as; "the value of the standard emf (electromotive force) of a cell in which molecular hydrogen under standard pressure is oxidized to solvated protons at the left-hand electrode".
Background
The basis for an electrochemical cell, such as the galvanic cell, is always a redox reaction which can be broken down into two half-reactions: oxidation at anode (loss of electron) and reduction at cathode (gain of electron). Electricity is produced due to the difference of electric potential between the individual potentials of the two metal electrodes with respect to the electrolyte.
Although the overall potential of a cell can be measured, there is no simple way to accurately measure the electrode/electrolyte potentials in isolation. The electric potential also varies with temperature, concentration and pressure. Since the oxidation potential of a half-reaction is the negative of the reduction potential in a redox reaction, it is sufficient to calculate either one of the potentials. Therefore, standard electrode potential is commonly written as standard reduction potential.
Calculation
The galvanic cell potential results from the voltage difference of a pair of electrodes. It is not possible to measure an absolute value for each electrode separately. However, the potential of a reference electrode, the standard hydrogen electrode (SHE), is defined to be 0.00 V. An electrode with unknown electrode potential can be paired with either the standard hydrogen electrode, or another electrode whose potential has already been measured, to determine its "absolute" potential.
Since the electrode potentials are conventionally defined as reduction potentials, the sign of the potential for the metal electrode being oxidized must be reversed when calculating the overall cell potential. The electrode potentials are independent of the number of electrons transferred —they are expressed in volts, which measure energy per electron transferred—and so the two electrode potentials can be simply combined to give the overall cell potential even if different numbers of electrons are involved in the two electrode reactions.
For practical measurements, the electrode in question is connected to the positive terminal of the electrometer, while the standard hydrogen electrode is connected to the negative terminal.
Reversible electrode
A reversible electrode is an electrode that owes its potential to changes of a reversible nature. A first condition to be fulfilled is that the system is close to the chemical equilibrium. A second set of conditions is that the system is submitted to very small solicitations spread on a sufficient period of time so, that the chemical equilibrium conditions nearly always prevail. In theory, it is very difficult to experimentally achieve reversible conditions because any perturbation imposed to a system near equilibrium in a finite time forces it out of equilibrium. However, if the solicitations exerted on the system are sufficiently small and applied slowly, one can consider an electrode to be reversible. By nature, electrode reversibility depends on the experimental conditions and the way the electrode is operated. For example, electrodes used in electroplating are operated with a high over-potential to force the reduction of a given metal cation to be deposited onto a metallic surface to be protected. Such a system is far from equilibrium and continuously submitted to important and constant changes in a short period of time
Standard reduction potential table
The larger the value of the standard reduction potential, the easier it is for the element to be reduced (gain electrons); in other words, they are better oxidizing agents.
For example, F2 has a standard reduction potential of +2.87 V and Li+ has −3.05 V:
F2(g) + 2 e− → 2 F−  E° = +2.87 V
Li+ + e− → Li(s)  E° = −3.05 V
The highly positive standard reduction potential of F2 means it is reduced easily and is therefore a good oxidizing agent. In contrast, the greatly negative standard reduction potential of Li+ indicates that it is not easily reduced. Instead, Li(s) would rather undergo oxidation (hence it is a good reducing agent).
Zn2+ has a standard reduction potential of −0.76 V and thus can be oxidized by any other electrode whose standard reduction potential is greater than −0.76 V (e.g., H+ (0 V), Cu2+ (0.34 V), F2 (2.87 V)) and can be reduced by any electrode with standard reduction potential less than −0.76 V (e.g. H2 (−2.23 V), Na+ (−2.71 V), Li+ (−3.05 V)).
In a galvanic cell, where a spontaneous redox reaction drives the cell to produce an electric potential, Gibbs free energy must be negative, in accordance with the following equation:
ΔG° = −nFE°cell (unit: joule = coulomb × volt)
where n is the number of moles of electrons per mole of products and F is the Faraday constant, approximately 96,485 C/mol.
As such, the following rules apply:
If E°cell > 0, then the process is spontaneous (galvanic cell): ΔG° < 0, and energy is liberated.
If E°cell < 0, then the process is non-spontaneous (electrolytic cell): ΔG° > 0, and energy is consumed.
Thus, in order to have a spontaneous reaction (ΔG° < 0), E°cell must be positive, where:
E°cell = E°cathode − E°anode
where E°cathode is the standard potential at the cathode (the standard cathodic potential, a standard reduction potential) and E°anode is the standard potential at the anode (the standard anodic potential, a standard oxidation potential), as given in the table of standard electrode potentials.
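As a numerical illustration (added here, not part of the original article), the relations above can be evaluated in a few lines of Python for the zinc–copper cell, using the standard reduction potentials quoted earlier (−0.76 V for Zn2+ and +0.34 V for Cu2+); the function names are invented for this sketch.

F = 96485.0  # Faraday constant in coulombs per mole of electrons

def cell_potential(e_cathode, e_anode):
    # Both arguments are standard reduction potentials; E°cell = E°cathode − E°anode.
    return e_cathode - e_anode

def standard_gibbs_energy(n, e_cell):
    # ΔG° = −nFE°cell, in joules per mole of reaction.
    return -n * F * e_cell

e_cell = cell_potential(0.34, -0.76)            # Cu2+/Cu cathode, Zn2+/Zn anode
print(e_cell)                                   # 1.10 V
print(standard_gibbs_energy(2, e_cell) / 1000)  # about -212 kJ/mol: negative, so spontaneous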
See also
Nernst equation
Pourbaix diagram
Solvated electron
Standard electrode potential (data page)
Standard hydrogen electrode (SHE)
Biochemically relevant redox potentials (data page)
References
Further reading
Zumdahl, Steven S., Zumdahl, Susan A (2000) Chemistry (5th ed.), Houghton Mifflin Company.
Atkins, Peter, Jones, Loretta (2005) Chemical Principles (3rd ed.), W.H. Freeman and Company.
Zu, Y, Couture, MM, Kolling, DR, Crofts, AR, Eltis, LD, Fee, JA, Hirst, J (2003) Biochemistry, 42, 12400-12408
External links
Standard Electrode Potentials table
Redox Equilibria
Chemistry of Batteries
Electrochemical Cells
STEP in Non-aqueous solvent
Electrochemical concepts
Electrochemical potentials | Standard electrode potential | [
"Chemistry"
] | 1,373 | [
"Electrochemistry",
"Electrochemical concepts",
"Electrochemical potentials"
] |
566,990 | https://en.wikipedia.org/wiki/Royal%20Society%20Prizes%20for%20Science%20Books | The Royal Society Science Books Prize is an annual £25,000 prize awarded by the Royal Society to celebrate outstanding popular science books from around the world. It is open to authors of science books written for a non-specialist audience, and since it was established in 1988 has championed writers such as Stephen Hawking, Jared Diamond, Stephen Jay Gould and Bill Bryson. In 2015 The Guardian described the prize as "the most prestigious science book prize in Britain".
History
The Royal Society established the Science Books Prize in 1988 with the aim of encouraging the writing, publishing and reading of good and accessible popular science books. Its name has varied according to sponsorship agreements.
Judging process
A panel of judges decides the shortlist and the winner of the Prize each year. The panel is chaired by a fellow of the Royal Society and includes authors, scientists and media personalities. The judges for the 2016 prize included author Bill Bryson, theoretical physicist Dr Clare Burrage, science fiction author Alastair Reynolds, ornithologist and science blogger GrrlScientist, and author and director of external affairs at the Science Museum Group, Roger Highfield. In 2019, the jury consisted of Sir Nigel Shadbolt, Shukry James Habib, Dorothy Koomson, Stephen McGann, and Gwyneth Williams.
All books entered for the prize must be published in English for the first time between September and October the preceding year. The winner is announced at an award ceremony and receives £25,000. Each of the other shortlisted authors receives £2,500.
Shortlisted books
Before 2000
2000s
2010s
2020s
References
External links
The Royal Society Trivedi Science Book Prize
Royal Society Prize at lovethebook
British children's literary awards
Sanofi
Royal Society
Science writing awards
British science and technology awards
Awards established in 1988
1988 establishments in the United Kingdom
Annual events in the United Kingdom
British non-fiction literary awards
Awards of the Royal Society | Royal Society Prizes for Science Books | [
"Technology"
] | 385 | [
"Science and technology awards",
"Science writing awards"
] |
567,131 | https://en.wikipedia.org/wiki/Biodiversity%20hotspot | A biodiversity hotspot is a biogeographic region with significant levels of biodiversity that is threatened by human habitation. Norman Myers wrote about the concept in two articles in The Environmentalist in 1988 and 1990, after which the concept was revised following thorough analysis by Myers and others into "Hotspots: Earth's Biologically Richest and Most Endangered Terrestrial Ecoregions" and a paper published in the journal Nature, both in 2000.
To qualify as a biodiversity hotspot on Myers' 2000 edition of the hotspot map, a region must meet two strict criteria: it must contain at least 1,500 species of vascular plants (more than 0.5% of the world's total) as endemics, and it has to have lost at least 70% of its primary vegetation. Globally, 36 zones qualify under this definition. These sites support nearly 60% of the world's plant, bird, mammal, reptile, and amphibian species, with a high share of those species as endemics. Some of these hotspots support up to 15,000 endemic plant species, and some have lost up to 95% of their natural habitat.
Biodiversity hotspots host their diverse ecosystems on just 2.4% of the planet's surface. Ten hotspots were originally identified by Myers; the current 36 used to cover more than 15.7% of all the land but have lost around 85% of their area. This loss of habitat is why approximately 60% of the world's terrestrial life lives on only 2.4% of the land surface area. Caribbean Islands like Haiti and Jamaica are facing serious pressures on the populations of endemic plants and vertebrates as a result of rapid deforestation. Other areas include the Tropical Andes, Philippines, Mesoamerica, and Sundaland, which, under the current levels at which deforestation is occurring, will likely lose most of their plant and vertebrate species.
Hotspot conservation initiatives
Only a small percentage of the total land area within biodiversity hotspots is now protected. Several international organizations are working to conserve biodiversity hotspots.
Critical Ecosystem Partnership Fund (CEPF) is a global program that provides funding and technical assistance to nongovernmental organizations in order to protect the Earth's richest regions of plant and animal diversity, including biodiversity hotspots, high-biodiversity wilderness areas and important marine regions.
The World Wide Fund for Nature has devised a system called the "Global 200 Ecoregions", the aim of which is to select priority ecoregions for conservation from fourteen terrestrial, three freshwater, and four marine habitat types. They are chosen for species richness, endemism, taxonomic uniqueness, unusual ecological or evolutionary phenomena, and global rarity. All biodiversity hotspots contain at least one Global 200 Ecoregion.
Birdlife International has identified 218 "Endemic Bird Areas" (EBAs) each of which holds two or more bird species found nowhere else. Birdlife International has identified more than 11,000 Important Bird Areas all over the world.
Plantlife International coordinates programs aiming to identify and manage Important Plant Areas.
Alliance for Zero Extinction is an initiative of scientific organizations and conservation groups who co-operate to focus on the most threatened endemic species of the world. They have identified 595 sites, including many Birdlife's Important Bird Areas.
The National Geographic Society has prepared a world map of the hotspots and ArcView shapefile and metadata for the Biodiversity Hotspots including details of the individual endangered fauna in each hotspot, which is available from Conservation International.
The Compensatory Afforestation Management and Planning Authority (CAMPA) seeks to control the destruction of forests in India.
Distribution by region
Most biodiversity exists within the tropics; likewise, most hotspots are tropical. Of the 36 biodiversity hotspots, 15 are classified as old, climatically-buffered, infertile landscapes (OCBILs). These areas have been historically isolated from interactions with other climate zones, but recent human interaction and encroachment have put these historically safe hotspots at risk. OCBILs have mainly been threatened by the relocation of indigenous groups and military actions, as the infertile ground has previously dissuaded human populations. The conservation of OCBILs within biodiversity hotspots has started to garner attention because current theories believe these sites provide not only high levels of biodiversity, but they have relatively stable lineages and the potential for high levels of speciation in the future. Because these sites are relatively stable, they can be classified as refugia.
North and Central America
California Floristic Province (8)
Madrean pine–oak woodlands (26)
Mesoamerica (2)
North American Coastal Plain (36)
The Caribbean
Caribbean Islands (3)
South America
Atlantic Forest (4)
Cerrado (6)
Chilean Winter Rainfall-Valdivian Forests (7)
Tumbes–Chocó–Magdalena (5)
Tropical Andes (1)
Europe
Mediterranean Basin (14)
Africa
Cape Floristic Region (12)
Coastal Forests of Eastern Africa (10)
Eastern Afromontane (28)
Guinean Forests of West Africa (11)
Horn of Africa (29)
Madagascar and the Indian Ocean Islands (9)
Maputaland-Pondoland-Albany (27)
Succulent Karoo (13)
Central Asia
Mountains of Central Asia (31)
South Asia
Eastern Himalaya (32)
Indo-Burma, Bangladesh, India and Myanmar (19)
Western Ghats and Sri Lanka (21)
Southeast Asia and Asia-Pacific
East Melanesian Islands (34)
New Caledonia (23)
New Zealand (24)
Philippines (18)
Polynesia-Micronesia (25)
Eastern Australian temperate forests (35)
Southwest Australia (22)
Sundaland, Indonesia and Nicobar islands of India (16)
Wallacea of Indonesia (17)
East Asia
Japan (33)
Mountains of Southwest China (20)
West Asia
Caucasus (15)
Irano-Anatolian (30)
Criticism
The high profile of the biodiversity hotspots approach has resulted in some criticism. Papers such as Kareiva & Marvier (2003) have pointed out that biodiversity hotspots (and many other priority region sets) do not address the concept of cost, and do not consider phylogenetic diversity.
See also
Biodiversity hotspots in the open sea
References
Further reading
Dedicated issue of Philosophical Transactions B on Biodiversity Hotspots. Some articles are freely available.
Spyros Sfenthourakis, Anastasios Legakis: Hotspots of endemic terrestrial invertebrates in Southern Greece. Kluwer Academic Publishers, 2001
External links
A-Z of Areas of Biodiversity Importance: Biodiversity Hotspots
Conservation International's Biodiversity Hotspots project
African Wild Dog Conservancy's Biodiversity Hotspots Project
Biodiversity hotspots in India
New biodiversity maps color-coded to show hotspots
Shapefile of the Biodiversity Hotspots (v2016.1)
Biodiversity
Environmental conservation
International environmental organizations | Biodiversity hotspot | [
"Biology"
] | 1,435 | [
"Biodiversity"
] |
567,292 | https://en.wikipedia.org/wiki/Mental%20calculation | Mental calculation consists of arithmetical calculations using only the human brain, with no help from any supplies (such as pencil and paper) or devices such as a calculator. People may use mental calculation when computing tools are not available, when it is faster than other means of calculation (such as conventional educational institution methods), or even in a competitive context. Mental calculation often involves the use of specific techniques devised for specific types of problems. People with unusually high ability to perform mental calculations are called mental calculators or lightning calculators.
Many of these techniques take advantage of or rely on the decimal numeral system.
Methods and techniques
Casting out nines
After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result:
Sum the digits of the first operand; any 9s (or sets of digits that add to 9) can be counted as 0.
If the resulting sum has two or more digits, sum those digits as in step one; repeat this step until the resulting sum has only one digit.
Repeat steps one and two with the second operand. At this point there are two single-digit numbers, the first derived from the first operand and the second derived from the second operand.
Apply the originally specified operation to the two condensed operands, and then apply the summing-of-digits procedure to the result of the operation.
Sum the digits of the result that were originally obtained for the original calculation.
If the result of step 4 does not equal the result of step 5, then the original answer is wrong. If the two results match, then the original answer may be right, though it is not guaranteed to be.
Example
Assume the calculation 6,338 × 79, manually done, yielded a result of 500,702:
Sum the digits of 6,338: (6 + 3 = 9, so count that as 0) + 3 + 8 = 11
Iterate as needed: 1 + 1 = 2
Sum the digits of 79: 7 + (9 counted as 0) = 7
Perform the original operation on the condensed operands, and sum digits: 2 × 7 = 14; 1 + 4 = 5
Sum the digits of 500702: 5 + 0 + 0 + (7 + 0 + 2 = 9, which counts as 0) = 5
5 = 5, so there is a good chance that the prediction that 6,338 × 79 equals 500,702 is right.
The same procedure can be used with multiple operations, repeating steps 1 and 2 for each operation.
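The check is straightforward to automate. Below is a short Python sketch (an illustration added here; the function names are invented) that applies casting out nines to a proposed product.

def digit_root(n):
    # Repeatedly sum the decimal digits; a 9 counts as 0 (equivalent to n mod 9).
    n = abs(n)
    while n > 9:
        n = sum(int(d) for d in str(n))
    return 0 if n == 9 else n

def passes_casting_out_nines(a, b, claimed_product):
    # False means the claimed product is certainly wrong;
    # True means it passes the check but could still be wrong.
    return digit_root(digit_root(a) * digit_root(b)) == digit_root(claimed_product)

print(passes_casting_out_nines(6338, 79, 500702))  # True
print(passes_casting_out_nines(6338, 79, 500712))  # False (this error is caught)

Because the test compares remainders modulo 9, it cannot catch errors that leave the digit sum unchanged, such as transposing two digits.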
Factors
When multiplying, a useful thing to remember is that the factors of the operands still remain. For example, to say that 14 × 15 was 201 would be unreasonable. Since 15 is a multiple of 5, the product should be as well. Likewise, 14 is a multiple of 2, so the product should be even. Furthermore, any number which is a multiple of both 5 and 2 is necessarily a multiple of 10, and in the decimal system would end with a 0. The correct answer is 210. It is a multiple of 10, 7 (the other prime factor of 14) and 3 (the other prime factor of 15).
Calculating differences: a − b
Direct calculation
When the digits of b are all smaller than the corresponding digits of a, the calculation can be done digit by digit. For example, evaluate 872 − 41 simply by subtracting 1 from 2 in the units place, and 4 from 7 in the tens place: 831.
Indirect calculation
When the above situation does not apply, there is another method known as indirect calculation.
Look-ahead borrow method
This method can be used to subtract numbers left to right, and if all that is required is to read the result aloud, it requires little of the user's memory even to subtract numbers of arbitrary size.
One place at a time is handled, left to right.
Example:
4075
− 1844
------
Thousands: 4 − 1 = 3, look to right, 075 < 844, need to borrow.
3 − 1 = 2, say "Two thousand".
One is performing 3 - 1 rather than 4 - 1 because the column to the right is
going to borrow from the thousands place.
Hundreds: 0 − 8 = negative numbers not allowed here.
One is going to increase this place by using the number one borrowed from the
column to the left. Therefore:
10 − 8 = 2. It is 10 rather than 0, because one borrowed from the Thousands
place. 75 > 44 so no need to borrow,
say "two hundred"
Tens: 7 − 4 = 3, 5 > 4, so 5 - 4 = 1
Hence, the result is 2231.
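The same left-to-right procedure can be expressed in code. The Python sketch below (added for illustration; it assumes a ≥ b ≥ 0) emits each digit of the difference as soon as it is known, exactly as one would read the answer aloud.

def subtract_left_to_right(a, b):
    da = str(a)
    db = str(b).rjust(len(da), "0")   # pad the subtrahend with leading zeros
    out = []
    for i in range(len(da)):
        d = int(da[i]) - int(db[i])
        # Look ahead: if the remaining digits of b exceed those of a, this place lends a borrow.
        if da[i + 1:] and int(da[i + 1:]) < int(db[i + 1:]):
            d -= 1
        if d < 0:
            d += 10   # this place itself received a borrow from the left
        out.append(str(d))
    return int("".join(out))

print(subtract_left_to_right(4075, 1844))  # 2231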
Cheprasov Algorithm
The Cheprasov method of multiplication can be applied for any n digit number multiplied by any other n digit number. The main uniqueness of this algorithm lies in its ability to:
1. Utilize negative numbers, even when multiplying only positive numbers, to reach the correct positive result.
2. Attach, or link, numbers to one another in order to avoid excessive addition.
3. As the n digit multiplication problem grows to ever larger numbers, the number of possible combinations one can use to reach the same answer grows as well; meaning the user can pick and choose the easiest and fastest route to reach the answer to each multiplication problem according to their own specific needs at the time. For example, using the Cheprasov algorithm, a 3 digit • 4 digit multiplication problem yields a total of 6 different combinations one can use to reach the correct answer using the same exact algorithm but through different permutations.
The basic algorithm, as best exemplified in a 2 • 2 multiplication problem, is as follows (where t represents a tens digit, u represents a units digit, and T symbolizes the tens digit multiplied by its respective power):
Step 1 = (t 1 - u 1) • u 2 = A
Step 2 = u 1 • t 2 = B
Step 3 = (u 1 • u 2) + t (u 1 • u 2) = C
Step 3 first multiplies both units digits together to reach an initial answer (u 1 • u 2). That initial answer’s tens digit (t (u 1 • u 2)) is then added back to step 3’s initial answer.
A+B+C are then added together to reach a sum (D).
Step 4 is where the units digit to step 3’s initial answer to: (u 1 • u 2) is attached (symbolized by: @) to the end of the sum of steps 1-3.
Step 4 = D @ u (u 1 • u 2) = E
Finally this number is taken and the following is added to it:
Step 5 = E + (T 1 • T 2) = Final Answer
For example, in the following problem: 79 • 26, by assigning subscripts of 1 to 79 and subscripts of 2 to 26, we would reach the answer as follows:
Step 1 = (7 - 9) • 6 = -12
Step 2 = 9 • 2 = 18
Step 3 = (9 • 6) = 54 + 5 = 59
Steps 1-3 are added together to reach a sum of 65.
Step 4 = 65@4 = 654
Step 5 = 70 • 20 = 1400 + 654 = 2054
The reverse, where subscripts of 1 are assigned to 26 and subscripts of 2 are assigned to 79 would yield the same answer but with different results to each intermediary step.
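The five steps can be checked mechanically. The following Python sketch (an illustration added here; the function name is invented) applies them to any pair of two-digit numbers and compares the result with ordinary multiplication.

def cheprasov_2x2(x, y):
    t1, u1 = divmod(x, 10)        # tens and units of the first number
    t2, u2 = divmod(y, 10)        # tens and units of the second number
    a = (t1 - u1) * u2            # Step 1 (may be negative)
    b = u1 * t2                   # Step 2
    uu = u1 * u2
    c = uu + uu // 10             # Step 3: units product plus its tens digit
    d = a + b + c                 # sum of steps 1-3
    e = d * 10 + uu % 10          # Step 4: attach the units digit of u1*u2
    return e + (10 * t1) * (10 * t2)   # Step 5: add T1 * T2

print(cheprasov_2x2(79, 26), 79 * 26)  # both 2054
print(cheprasov_2x2(26, 79), 26 * 79)  # same answer with the subscripts reversed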
The "Ends of Five" Formula
For any 2-digit by 2-digit multiplication problem, if both numbers end in five, the following algorithm can be used to quickly multiply them together:
As a preliminary step simply round the smaller number down and the larger up to the nearest multiple of ten. In this case, for 75 × 35, the 35 rounds down to 30 and the 75 rounds up to 80.
The algorithm reads as follows: multiply the two rounded numbers together, then add 50 × (t1 − t2) + 25,
where t1 is the tens digit of the original larger number (75) and t2 is the tens digit of the original smaller number (35). Here, 80 × 30 + 50 × (7 − 3) + 25 = 2400 + 200 + 25 = 2625 = 75 × 35.
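Because both factors end in 5, the shortcut is equivalent to the identity (10a + 5) × (10b + 5) = 100ab + 50(a + b) + 25. A short Python check of that identity (added for illustration) is shown below.

def ends_in_five_product(x, y):
    a, b = x // 10, y // 10            # tens digits; both x and y must end in 5
    return 100 * a * b + 50 * (a + b) + 25

print(ends_in_five_product(75, 35), 75 * 35)  # both 2625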
Calculating products: a × b
Many of these methods work because of the distributive property.
Multiplying any 2-digit numbers
To easily multiply any 2-digit numbers together a simple algorithm is as follows (where a is the tens digit of the first number, b is the ones digit of the first number, c is the tens digit of the second number and d is the ones digit of the second number): (10a + b) × (10c + d) = 100(a × c) + 10(a × d) + 10(b × c) + (b × d).
For example, 23 × 47:
800
+120
+140
+ 21
-----
1081
Note that this is the same thing as the conventional sum of partial products, just restated with brevity. To minimize the number of elements being retained in one's memory, it may be convenient to perform the sum of the "cross" multiplication product first, and then add the other two elements:
[of which only the tens digit will interfere with the first term]
i.e., in this example
(12 + 14) = 26, 26 × 10 = 260,
to which it is easy to add 21 (giving 281) and then 800: 1081
An easy mnemonic to remember for this would be FOIL. F meaning first, O meaning outer, I meaning inner and L meaning last. For example:
75 × 23 = (70 + 5) × (20 + 3) = 100 × (7 × 2) + 10 × (7 × 3 + 5 × 2) + (5 × 3) = 1400 + 310 + 15 = 1725,
where 7 is a, 5 is b, 2 is c and 3 is d.
Consider the expression 100(a × c) + 10(a × d + b × c) + (b × d);
this expression is analogous to any number in base 10 with a hundreds, tens and ones place. FOIL can also be looked at as a number with F being the hundreds, OI being the tens and L being the ones.
a × c is the product of the first digit of each of the two numbers; F.
a × d + b × c is the addition of the product of the outer digits and the inner digits; OI.
b × d is the product of the last digit of each of the two numbers; L.
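The FOIL decomposition translates directly into code. Here is a brief Python sketch (illustrative only) that assembles the product of two 2-digit numbers from its First, Outer, Inner and Last parts.

def foil_2x2(x, y):
    a, b = divmod(x, 10)              # tens and ones of the first number
    c, d = divmod(y, 10)              # tens and ones of the second number
    first = a * c                     # F: contributes hundreds
    outer_plus_inner = a * d + b * c  # O + I: contributes tens
    last = b * d                      # L: contributes ones
    return 100 * first + 10 * outer_plus_inner + last

print(foil_2x2(23, 47), 23 * 47)  # both 1081
print(foil_2x2(75, 23), 75 * 23)  # both 1725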
Multiplying by 9
Since 9 = 10 − 1, to multiply a number by nine, multiply it by 10 and then subtract the original number from the result. For example, 9 × 27 = 270 − 27 = 243.
This method can be adjusted to multiply by eight instead of nine, by doubling the number being subtracted; 8 × 27 = 270 − (2×27) = 270 − 54 = 216.
Similarly, by adding instead of subtracting, the same methods can be used to multiply by 11 and 12, respectively (although simpler methods to multiply by 11 exist).
Multiplying by 11
For single digit numbers simply duplicate the number into the tens digit, for example: 1 × 11 = 11, 2 × 11 = 22, up to 9 × 11 = 99.
The product for any larger non-zero integer can be found by a series of additions to each of its digits from right to left, two at a time.
First take the ones digit and copy that to the temporary result. Next, starting with the ones digit of the multiplier, add each digit to the digit to its left. Each sum is then added to the left of the result, in front of all others. If a number sums to 10 or higher take the tens digit, which will always be 1, and carry it over to the next addition. Finally copy the multipliers left-most (highest valued) digit to the front of the result, adding in the carried 1 if necessary, to get the final product.
In the case of a negative 11, a negative multiplier, or both, apply the sign to the final product as per normal multiplication of the two numbers.
A step-by-step example of 759 × 11:
The ones digit of the multiplier, 9, is copied to the temporary result.
result: 9
Add 5 + 9 = 14 so 4 is placed on the left side of the result and carry the 1.
result: 49
Similarly add 7 + 5 = 12, then add the carried 1 to get 13. Place 3 to the result and carry the 1.
result: 349
Add the carried 1 to the highest valued digit in the multiplier, 7 + 1 = 8, and copy to the result to finish.
Final product of 759 × 11: 8349
Further examples:
−54 × −11 = 5 5+4(9) 4 = 594
999 × 11 = 9+1(10) 9+9+1(9) 9+9(8) 9 = 10989
Note the handling of 9+1 as the highest valued digit.
−3478 × 11 = 3 3+4+1(8) 4+7+1(2) 7+8(5) 8 = −38258
62473 × 11 = 6 6+2(8) 2+4+1(7) 4+7+1(2) 7+3(0) 3 = 687203
Another method is to simply multiply the number by 10, and add the original number to the result.
For example:
17 × 11
17 × 10 = 170
170 + 17 = 187
17 × 11 = 187
One last easy way:
If one has a two-digit number, take its two digits, add them together, and put that sum in the middle to get the answer.
For example: 24 x 11 = 264 because 2 + 4 = 6 and the 6 is placed in between the 2 and the 4.
Second example: 87 x 11 = 957 because 8 + 7 = 15 so the 5 goes in between the 8 and the 7 and the 1 is carried to the 8. So it is basically 857 + 100 = 957.
Or, for 43 x 11: first 4 + 3 = 7 (for the tens digit); then 4 is the hundreds digit and 3 is the ones digit, and the answer is 473.
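The right-to-left pairwise addition generalizes to numbers of any length. The Python sketch below (an illustration added here) multiplies a non-negative integer by 11 using only adjacent-digit sums and carries, and can be checked against ordinary multiplication.

def times_eleven(n):
    digits = [int(d) for d in str(n)]
    padded = [0] + digits + [0]       # implicit zeros left of the first digit and right of the last
    out = []
    carry = 0
    for i in range(len(padded) - 1, 0, -1):
        s = padded[i] + padded[i - 1] + carry   # add each digit to the digit on its left
        out.append(str(s % 10))
        carry = s // 10
    if carry:
        out.append(str(carry))
    return int("".join(reversed(out)))

print(times_eleven(759), 759 * 11)      # both 8349
print(times_eleven(62473), 62473 * 11)  # both 687203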
Multiplying two numbers close to and below 100
This technique allows easy multiplication of numbers close to and below 100 (90–99). The variables will be the two numbers one multiplies.
The product of two variables ranging from 90-99 will result in a 4-digit number. The first step is to find the ones-digit and the tens digit.
Subtract both variables from 100, which will result in two one-digit numbers. The product of the two one-digit numbers will be the last two digits of one's final product.
Next, subtract one of the two variables from 100. Then subtract the difference from the other variable. That difference will be the first two digits of the final product, and the resulting 4 digit number will be the final product.
Example:
95
x 97
----
Last two digits: 100-95=5 (subtract first number from 100)
100-97=3 (subtract second number from 100)
5*3=15 (multiply the two differences)
Final Product- yx15
First two digits: 100-95=5 (Subtract the first number of the equation from 100)
97-5=92 (Subtract that answer from the second number of the equation)
Now, the difference will be the first two digits
Final Product- 9215
Alternate for first two digits
5+3=8 (Add the two single digits derived when calculating "Last two digits" in previous step)
100-8=92 (Subtract that answer from 100)
Now, the difference will be the first two digits
Final Product- 9215
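The steps above can be verified programmatically. The following Python sketch (illustrative only; the function name is invented) implements the method for two numbers in the 90s.

def near_hundred_product(x, y):
    dx, dy = 100 - x, 100 - y   # differences from 100
    last_two = dx * dy          # last two digits of the answer
    first_two = x - dy          # equivalently y - dx
    # If dx * dy reaches 100 the "last two digits" carry into the first two,
    # but the arithmetic below still gives the correct total.
    return first_two * 100 + last_two

print(near_hundred_product(95, 97), 95 * 97)  # both 9215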
Using square numbers
The products of small numbers may be calculated by using the squares of integers; for example, to calculate 13 × 17, one can remark 15 is the mean of the two factors, and think of it as (15 − 2) × (15 + 2), i.e. 152 − 22. Knowing that 152 is 225 and 22 is 4, simple subtraction shows that 225 − 4 = 221, which is the desired product.
This method requires knowing by heart a certain number of squares:
Squaring numbers
It may be useful to be aware that the difference between two successive square numbers is the sum of their respective square roots. Hence, if one knows that 12 × 12 = 144 and wish to know 13 × 13, calculate 144 + 12 + 13 = 169.
This is because (x + 1)2 − x2 = x2 + 2x + 1 − x2 = x + (x + 1)
x2 = (x − 1)2 + (2x − 1)
Squaring any number
Take a given number, and add and subtract a certain value to it that will make it easier to multiply. For example:
4922
492 is close to 500, which is easy to multiply by. Add and subtract 8 (the difference between 500 and 492) to get
492 -> 484, 500
Multiply these numbers together to get 242,000 (This can be done efficiently by dividing 484 by 2 = 242 and multiplying by 1000). Finally, add the difference (8) squared (82 = 64) to the result:
4922 = 242,064
The proof follows: (x + d)(x − d) + d2 = x2 − d2 + d2 = x2, where d is the amount added and subtracted (8 in this example).
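This is the identity x2 = (x + d)(x − d) + d2 put to work. The short Python sketch below (added for illustration) picks a convenient round number and squares via that identity.

def square_via_round(x, round_to=100):
    # Choose d so that x + d (or x - d) lands on a multiple of round_to.
    d = round(x / round_to) * round_to - x
    return (x + d) * (x - d) + d * d

print(square_via_round(492), 492 * 492)  # both 242064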
Squaring any 2-digit integer
This method requires memorization of the squares of the one-digit numbers 1 to 9.
The square of mn, mn being a two-digit integer, can be calculated as
10 × m(mn + n) + n2
Meaning the square of mn can be found by adding n to mn, multiplying by m, appending a 0 to the end and finally adding the square of n.
For example, 232:
232
= 10 × 2(23 + 3) + 32
= 10 × 2(26) + 9
= 520 + 9
= 529
So 232 = 529.
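Written as code, the rule for a two-digit number mn is 10 × m × (mn + n) + n2. A brief Python check (illustrative only) follows.

def square_two_digit(x):
    m, n = divmod(x, 10)           # tens digit and ones digit
    return 10 * m * (x + n) + n * n

print(square_two_digit(23), 23 * 23)  # both 529
print(square_two_digit(87), 87 * 87)  # both 7569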
Squaring a number ending in 5
Take the digit(s) that precede the five: abc5, where a, b, and c are digits
Multiply this number by itself plus one: abc(abc + 1)
Take above result and attach 25 to the end
Example: 85 × 85
8
8 × 9 = 72
So, 852 = 7,225
Example: 1252
12
12 × 13 = 156
So, 1252 = 15,625
Mathematical explanation
If the number is written as 10k + 5 (so k = abc), then (10k + 5)2 = 100k2 + 100k + 25 = 100 × k × (k + 1) + 25; the result is k(k + 1) followed by the digits 25.
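The rule is short enough to state in code: drop the final 5, multiply the remaining digits by one more than themselves, and append 25. A Python sketch (added for illustration) is below.

def square_ending_in_five(x):
    # x must end in 5; abc is everything before the final 5.
    abc = x // 10
    return abc * (abc + 1) * 100 + 25

print(square_ending_in_five(85), 85 * 85)      # both 7225
print(square_ending_in_five(125), 125 * 125)   # both 15625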
Squaring numbers very close to 50
Suppose one needs to square a number n near 50.
The number may be expressed as n = 50 − a so its square is (50−a)2 = 502 − 100a + a2. One knows that 502 is 2500. So one subtracts 100a from 2500, and then add a2.
For example, say one wants to square 48, which is 50 − 2. One subtracts 200 from 2500 and adds 4, getting n2 = 2304. For numbers larger than 50 (n = 50 + a), add 100×a instead of subtracting it.
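In code, the near-50 rule is n2 = 2500 + 100a + a2 with a = n − 50 (negative below 50). A two-line Python check (illustrative) follows.

def square_near_fifty(n):
    a = n - 50                      # negative for n below 50, positive above
    return 2500 + 100 * a + a * a

print(square_near_fifty(48), 48 * 48)  # both 2304
print(square_near_fifty(53), 53 * 53)  # both 2809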
Squaring an integer from 26 to 74
This method requires the memorization of squares from 1 to 24.
The square of n (most easily calculated when n is between 26 and 74 inclusive) is
(50 − n)2 + 100(n − 25)
In other words, the square of a number is the square of its difference from fifty added to one hundred times the difference of the number and twenty five. For example, to square 62:
(−12)2 + [(62-25) × 100]
= 144 + 3,700
= 3,844
Squaring an integer near 100 (e.g., from 76 to 124)
This method requires the memorization of squares from 1 to a where a is the absolute difference between n and 100. For example, students who have memorized their squares from 1 to 24 can apply this method to any integer from 76 to 124.
The square of n (i.e., 100 ± a) is
100(100 ± 2a) + a2
In other words, the square of a number is the square of its difference from 100 added to the product of one hundred and the difference of one hundred and the product of two and the difference of one hundred and the number. For example, to square 93:
100(100 − 2(7)) + 72
= 100 × 86 + 49
= 8,600 + 49
= 8,649
Another way to look at it would be like this:
932 = ? (is −7 from 100)
93 − 7 = 86 (this gives the first two digits)
(−7)2 = 49 (these are the second two digits)
932 = 8649
Another example:
82² = ? (is −18 from 100)
82 − 18 = 64 (subtract. First digits.)
(−18)² = 324 (second pair of digits. One will need to carry the 3.)
82² = 6,724
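A minimal Python sketch of the near-100 rule (the name is invented for illustration):

def square_near_100(n):
    # n = 100 + a (a may be negative); n^2 = 100 * (n + a) + a^2
    a = n - 100
    return 100 * (n + a) + a * a

print(square_near_100(93))  # 8649
print(square_near_100(82))  # 6724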
Squaring any integer near 10ⁿ (e.g., 976 to 1024, 9976 to 10024, etc.)
This method is a straightforward extension of the explanation given above for squaring an integer near 100.
1012² = ? (1012 is +12 from 1000)
(+12)² = 144 (n trailing digits)
1012 + 12 = 1024 (leading digits)
1012² = 1,024,144
9997² = ? (9997 is −3 from 10000)
(−3)² = 0009 (n trailing digits)
9997 − 3 = 9994 (leading digits)
9997² = 99,940,009
Squaring any integer near m × 10ⁿ (e.g., 276 to 324, 4976 to 5024, 79976 to 80024)
This method is a straightforward extension of the explanation given above for integers near 10ⁿ.
407² = ? (407 is +7 from 400)
(+7)² = 49 (n trailing digits)
407 + 7 = 414
414 × 4 = 1656 (leading digits; note this multiplication by m was not needed for integers from 76 to 124 because their m = 1)
407² = 165,649
79991² = ? (79991 is −9 from 80000)
(−9)² = 0081 (n trailing digits)
79991 − 9 = 79982
79982 × 8 = 639856 (leading digits)
79991² = 6,398,560,081
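A minimal Python sketch covering this case and the previous one (for the previous section, m = 1); the function name and argument convention are invented for illustration, and a² is assumed to fit within the n trailing digits:

def square_near_m_times_power(x, m, n):
    # x is close to m * 10**n; x^2 = (x + a) * m * 10**n + a^2, where a = x - m * 10**n
    a = x - m * 10 ** n
    return (x + a) * m * 10 ** n + a * a

print(square_near_m_times_power(1012, 1, 3))   # 1024144
print(square_near_m_times_power(407, 4, 2))    # 165649
print(square_near_m_times_power(79991, 8, 4))  # 6398560081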
Finding roots
Approximating square roots
An easy way to approximate the square root of a number x is to use the following equation:
√x ≈ n + (x − n²) / (2n), where n² is a known perfect square near x
The closer the known square is to the unknown, the more accurate the approximation. For instance, to estimate the square root of 15, one could start with the knowledge that the nearest perfect square is 16 (4²).
So the estimated square root of 15 is 3.875. The actual square root of 15 is 3.872983... One thing to note is that, no matter what the original guess was, the estimated answer will always be larger than the actual answer due to the inequality of arithmetic and geometric means. Thus, one should try rounding the estimated answer down.
Note that if n² is the closest perfect square to the desired square x and d = x − n² is their difference, it is more convenient to express this approximation as the mixed number n + d/(2n). Thus, in the previous example, the square root of 15 is 4 − 1/8 = 3.875. As another example, the square root of 41 is 6 + 5/12 ≈ 6.417, while the actual value is 6.4031...
It may simplify mental calculation to notice that this method is equivalent to taking the mean of the known square and the unknown square and dividing by the known square root:
√x ≈ (n² + x) / (2n)
Derivation
By definition, if r is the square root of x, then
r² = x
One then redefines the root as
r = a + b
where a is a known root (4 from the above example) and b is the difference between the known root and the answer one seeks.
Expanding yields
x = (a + b)² = a² + 2ab + b²
If a is close to the target, b will be a small enough number to render the b² element of the equation negligible. Thus, one can drop b² and rearrange the equation to
b ≈ (x − a²) / (2a)
and therefore
r ≈ a + (x − a²) / (2a)
that can be reduced to
r ≈ (a² + x) / (2a)
Alternatively, this approach to square root approximation can be viewed as a single step of Newton's method.
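A minimal Python sketch of the approximation; here the known root is obtained with math.sqrt purely for convenience, whereas a mental calculator would recall it from memorized squares:

import math

def approx_sqrt(x):
    n = round(math.sqrt(x))       # the memorized "known" root
    return (n * n + x) / (2 * n)  # equivalently n + (x - n^2) / (2n)

print(approx_sqrt(15))  # 3.875          (actual ~ 3.8730)
print(approx_sqrt(41))  # about 6.4167   (actual ~ 6.4031)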
Extracting roots of perfect powers
Extracting roots of perfect powers is often practiced. The difficulty of the task does not depend on the number of digits of the perfect power but on the precision, i.e. the number of digits of the root. In addition, it also depends on the order of the root: finding roots of perfect powers whose order is coprime to 10 is somewhat easier, because the last digit of the power then determines the last digit of the root in a consistent way, as in the next section.
Extracting cube roots
An easy task for the beginner is extracting cube roots from the cubes of 2-digit numbers. For example, given 74,088, determine what two-digit number, when multiplied by itself once and then multiplied by the number again, yields 74,088. One who knows the method will quickly know the answer is 42, as 42³ = 74,088.
Before learning the procedure, it is required that the performer memorize the cubes of the numbers 1-10:
Observe that there is a pattern in the rightmost digit: adding and subtracting with 1 or 3. Starting from zero:
0³ = 0
1³ = 1 up 1
2³ = 8 down 3
3³ = 27 down 1
4³ = 64 down 3
5³ = 125 up 1
6³ = 216 up 1
7³ = 343 down 3
8³ = 512 down 1
9³ = 729 down 3
10³ = 1000 up 1
There are two steps to extracting the cube root from the cube of a two-digit number; take 29,791 as an example. First, determine the one's place (units digit) of the two-digit root. Since the cube ends in 1, as seen above, the root must end in 1.
If the perfect cube ends in 0, the cube root of it must end in 0.
If the perfect cube ends in 1, the cube root of it must end in 1.
If the perfect cube ends in 2, the cube root of it must end in 8.
If the perfect cube ends in 3, the cube root of it must end in 7.
If the perfect cube ends in 4, the cube root of it must end in 4.
If the perfect cube ends in 5, the cube root of it must end in 5.
If the perfect cube ends in 6, the cube root of it must end in 6.
If the perfect cube ends in 7, the cube root of it must end in 3.
If the perfect cube ends in 8, the cube root of it must end in 2.
If the perfect cube ends in 9, the cube root of it must end in 9.
Note that every digit corresponds to itself except for 2, 3, 7 and 8, which are just subtracted from ten to obtain the corresponding digit.
The second step is to determine the first digit of the two-digit cube root by looking at the magnitude of the given cube. To do this, remove the last three digits of the given cube (29,791 → 29) and find the largest number whose cube does not exceed the remaining value (this is where knowing the cubes of the numbers 1-10 is needed). Here, 29 is greater than 1 cubed, greater than 2 cubed, and greater than 3 cubed, but not greater than 4 cubed, so the first digit of the two-digit cube root must be 3.
Therefore, the cube root of 29791 is 31.
Another example:
Find the cube root of 456533.
The cube root ends in 7.
After the last three digits are taken away, 456 remains.
456 is greater than all the cubes up to 7 cubed.
The first digit of the cube root is 7.
The cube root of 456533 is 77.
This process can be extended to find cube roots that are 3 digits long, by using arithmetic modulo 11.
These types of tricks can be used for any root whose order is coprime to 10; they fail for square roots, since the order 2 divides 10, but work for cube roots, since 3 is coprime to 10.
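A compact Python sketch of the two-step procedure for cubes of two-digit numbers (the function name is invented for illustration):

def cube_root_two_digit(c):
    # Assumes c is the cube of a two-digit integer.
    units_of_root = {x ** 3 % 10: x for x in range(10)}  # last-digit correspondence
    units = units_of_root[c % 10]
    head = c // 1000                                      # drop the last three digits
    tens = max(t for t in range(1, 10) if t ** 3 <= head)
    return 10 * tens + units

print(cube_root_two_digit(29791))   # 31
print(cube_root_two_digit(456533))  # 77
print(cube_root_two_digit(74088))   # 42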
Approximating common logarithms (log base 10)
To approximate a common logarithm (to at least one decimal point accuracy), a few logarithm rules, and the memorization of a few logarithms is required. One must know:
log(a × b) = log(a) + log(b)
log(a / b) = log(a) - log(b)
log(0) does not exist
log(1) = 0
log(2) ~ .30
log(3) ~ .48
log(7) ~ .85
From this information, one can find the logarithm of any number 1-9.
log(1) = 0
log(2) ~ .30
log(3) ~ .48
log(4) = log(2 × 2) = log(2) + log(2) ~ .60
log(5) = log(10 / 2) = log(10) − log(2) ~ .70
log(6) = log(2 × 3) = log(2) + log(3) ~ .78
log(7) ~ .85
log(8) = log(2 × 2 × 2) = log(2) + log(2) + log(2) ~ .90
log(9) = log(3 × 3) = log(3) + log(3) ~ .96
log(10) = 1 + log(1) = 1
The first step in approximating the common logarithm is to put the number given in scientific notation. For example, the number 45 in scientific notation is 4.5 × 10¹, but one will call it a × 10ᵇ. Next, find the logarithm of a, which is between 1 and 10. Start by finding the logarithm of 4, which is .60, and then the logarithm of 5, which is .70, because 4.5 is between these two. Next, and skill at this comes with practice, place 4.5 on a logarithmic scale between .6 and .7, somewhere around .653. (Note: the interpolated value will always be greater than if it were placed on a linear scale; i.e., one would expect it to go at .650 because 4.5 is halfway, but instead it will be a little larger, in this case .653.) Once one has obtained the logarithm of a, simply add b to it to get the approximation of the common logarithm. In this case, log(a) + b = 0.653 + 1 = 1.653. The actual value of log(45) ≈ 1.65321.
The same process applies for numbers between 0 and 1. For example, 0.045 would be written as 4.5 × 10⁻². The only difference is that b is now negative, so when adding one is really subtracting. This would yield the result 0.653 − 2 = −1.347.
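A rough Python sketch of the procedure, using the rounded one-digit logarithms above and simple linear interpolation between them (the true placement is slightly higher, as noted; math.log10 is used only to count the decimal shift that a mental calculator would do by inspection):

import math

LOGS = {1: 0.0, 2: 0.30, 3: 0.48, 4: 0.60, 5: 0.70,
        6: 0.78, 7: 0.85, 8: 0.90, 9: 0.96, 10: 1.0}

def approx_log10(x):
    b = math.floor(math.log10(x))   # exponent in scientific notation
    a = x / 10 ** b                 # mantissa, 1 <= a < 10
    lo = min(int(a), 9)
    hi = lo + 1
    frac = (a - lo) / (hi - lo)     # position of a between the two anchors
    return b + LOGS[lo] + frac * (LOGS[hi] - LOGS[lo])

print(approx_log10(45))     # about 1.65  (actual ~ 1.6532)
print(approx_log10(0.045))  # about -1.35 (actual ~ -1.3468)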
Mental arithmetic as a psychological skill
Physical exertion of the proper level can lead to an increase in performance of a mental task, like doing mental calculations, performed afterward. It has been shown that during high levels of physical activity there is a negative effect on mental task performance. This means that too much physical work can decrease accuracy and output of mental math calculations. Physiological measures, specifically EEG, have been shown to be useful in indicating mental workload. Using an EEG as a measure of mental workload after different levels of physical activity can help determine the level of physical exertion that will be the most beneficial to mental performance. Previous work done at Michigan Technological University by Ranjana Mehta includes a recent study that involved participants engaging in concurrent mental and physical tasks. This study investigated the effects of mental demands on physical performance at different levels of physical exertion and ultimately found a decrease in physical performance when mental tasks were completed concurrently, with a more significant effect at the higher level of physical workload. The Brown–Peterson procedure is a widely known task using mental arithmetic. This procedure, mostly used in cognitive experiments, suggests mental subtraction is useful in testing the effects maintenance rehearsal can have on how long short-term memory lasts.
Mental Calculations World Championship
The first Mental Calculations World Championship took place in 1998. This event repeats every year and now occurs online. It consists of a range of different tasks such as addition, subtraction, multiplication, division, irrational and exact square roots, cube roots and deeper roots, factorizations, fractions, and calendar dates.
Mental Calculation World Cup
The first Mental Calculation World Cup took place in 2004. It is an in-person competition held every other year in Germany. It consists of four standard tasks (addition of ten ten-digit numbers, multiplication of two eight-digit numbers, calculation of square roots, and calculation of weekdays for given dates) in addition to a variety of "surprise" tasks.
Memoriad – World Memory, Mental Calculation & Speed Reading Olympics
The first international Memoriad was held in Istanbul, Turkey, in 2008.
The second Memoriad took place in Antalya, Turkey, on 24–25 November 2012; 89 competitors from 20 countries participated. Awards and money prizes were given in 10 categories in total, of which five concerned mental calculation (mental addition, mental multiplication, mental square roots (non-integer), mental calendar date calculation, and Flash Anzan). The third Memoriad was held in Las Vegas, USA, from November 8 to November 10, 2016.
See also
Doomsday rule for calculating the day of the week
Mental abacus
Mental calculator
Soroban
Notes
References
External links
Mental Calculation World Cup
Memoriad - World Mental Olympics
Games of mental skill | Mental calculation | [ "Mathematics" ] | 7,003 | [ "Mental calculation", "Arithmetic" ] |