id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
11,825,534 | https://en.wikipedia.org/wiki/Gliese%20412 | Gliese 412 is a pair of stars that share a common proper motion through space and are thought to form a binary star system. The pair have an angular separation of 31.4″ at a position angle of 126.1°. They are located 15.8 light-years distant from the Sun in the constellation Ursa Major. Both components are relatively dim red dwarf stars.
The two stellar components of this system have a projected separation of about 152 AU, and an estimated orbital semimajor axis of 190 AU. The primary has about 48% of the Sun's mass, while the secondary is only 10%. The primary has a projected rotation velocity at the equator of less than 3 km/s; the secondary has a rotation velocity of km/s.
The primary star was monitored for radial velocity (RV) variations caused by a Jupiter-mass companion in a short-period orbit. It displayed no significant excess of RV variation that could be attributed to a planet. A search of the system using near-infrared speckle interferometry also failed to detect a companion orbiting at distances of 1–10 AU. Nor has a brown dwarf been detected orbiting within this system.
The space velocity components of this system are U = 141, V = –7 and W = 7. They are members of the halo population of the Milky Way galaxy.
X-ray source
The secondary is a flare star that is referred to as WX Ursae Majoris. It is characterized as a UV Ceti-type variable star that displays infrequent increases in luminosity. This star was observed to flare as early as 1939 by the Dutch astronomer Adriaan van Maanen.
Component B (WX Ursae Majoris) has been identified as an X-ray source, while no significant X-ray emission was detected from component A. This system had not been studied in X-rays prior to ROSAT. The Gaia DR2 release gives a parallax of 204.059 ±0.169 mas for B, indicating a distance of around 16 light-years.
References
See also
List of nearest stars
X-ray astronomy
Binary stars
054211
0412
BD+44 2051
Local Bubble
Ursa Major
M-type main-sequence stars
Flare stars
Astronomical X-ray sources
Ursae Majoris, WX | Gliese 412 | [
"Astronomy"
] | 479 | [
"Astronomical X-ray sources",
"Ursa Major",
"Astronomical objects",
"Constellations"
] |
11,825,602 | https://en.wikipedia.org/wiki/LP%20944-20 | LP 944-20 is a dim brown dwarf of spectral class M9 located 21 light-years from the Solar System in the constellation of Fornax. With a visual apparent magnitude of 18.69, it has one of the dimmest visual magnitudes listed on the RECONS page. It is nevertheless one of the brightest brown dwarfs, if not the brightest, in the MKO J band.
Discovery
LP 944-20 was discovered in the Luyten-Palomar Survey. It appears as a star with R=17.5 mag and a proper motion of 334 mas/yr in a catalog from 1979. It was, however, first published in 1975 by Luyten & Kowal. It was re-discovered in the APM survey, a quasar survey, in which its red color was noticed. The first spectrum was published in 1997 by Kirkpatrick, Henry & Irwin. A spectral type of M9 or later was assigned in this work, and a distance of around 5 parsecs was established once the parallax was measured. In 1998 Tinney discovered that this M-dwarf shows the 6708 Å lithium absorption line and the H-alpha emission line, which helped to constrain the age to around 500 million years and established it as a brown dwarf with a mass of around 60 Jupiter masses.
Physical characteristics
Shortly after LP 944-20 was established as a brown dwarf, its fast rotation was detected in 1998. A later work in 1999 claimed to have detected variability in LP 944-20. A search for dust around LP 944-20 has shown that it has no disk.
Due to its short rotational period, this young brown dwarf displays strong and frequent X-ray flares, and possesses a strong magnetic field reaching 135 G at the photosphere level. On 15 December 1999, an X-ray flare was detected. On 27 July 2000, radio emission (in flare and quiescence) was detected from this brown dwarf by a team of students at the Very Large Array.
Observations published in 2007 showed that the atmosphere of LP 944-20 contains much lithium and that it has dusty clouds. A search for planets was carried out in 2006 using the radial velocity method. No planets were found, but variability with an amplitude of 3.5 km/s was detected. This variability is likely due to weather effects and the rotation of the brown dwarf.
In 2015 high resolution Doppler images were taken of LP 944-20 and GJ 791.2A. The time series spectra show line profile distortions, which were interpreted as starspots. These starspots were reconstructed and found to be concentrated at high latitudes. The modelling produces a better fit of ΔT= between starspots (Tspot=) and photosphere (Tphot=).
In a large program in 2016 the spectral type was established to be M9β in the optical and L0β in the infrared. The beta stands for a surface gravity intermediate between normal and low. The mass was calculated to be .
Observations with TESS found that LP 944-20 is variable with a period of around 3.8 hours and an amplitude of . This is in agreement with previous estimates of a period of less than 4.5 hours.
References
External links
"The 100 nearest star systems", Research Consortium on Nearby Stars
M-type brown dwarfs
Fornax
J03393521-3525440 | LP 944-20 | [
"Astronomy"
] | 696 | [
"Fornax",
"Constellations"
] |
11,825,732 | https://en.wikipedia.org/wiki/DENIS%200255%E2%88%924700 | DENIS 0255−4700 is an extremely faint brown dwarf near the Solar System in the southern constellation of Eridanus. It is the closest known isolated L-type brown dwarf (no undiscovered L-dwarfs are expected to be closer), second overall only to the binary Luhman 16. It is also the faintest brown dwarf with a measured visible magnitude, having an absolute magnitude of MV = 24.44. A number of nearer T and Y-type dwarfs are known, specifically WISE 0855−0714, Epsilon Indi B and C, SCR 1845-6357 B, and UGPS 0722−05.
History of observations
DENIS 0255−4700 was identified for the first time as a probable nearby object in 1999. Its proximity to the Solar System was established by the RECONS group in 2006, when its trigonometric parallax was measured. DENIS 0255−4700 has a relatively small tangential velocity.
Properties
The photospheric temperature of DENIS 0255−4700 is estimated at 1300 K. In addition to hydrogen and helium, its atmosphere contains water vapor, methane and possibly ammonia. The mass of DENIS 0255−4700 lies in the range from 25 to 65 Jupiter masses, corresponding to an age range from 0.3 to 10 billion years. The brown dwarf is rotating rapidly, with a period of 1.7 hours, and its rotational axis is inclined 40 degrees from the line of sight.
See also
List of nearest stars and brown dwarfs
List of brown dwarfs
Research Consortium On Nearby Stars
References
External links
RECONS List of the 100 nearest stars
Eridanus (constellation)
DENIS objects
J02550357-4700509
Local Bubble
L-type brown dwarfs
Astronomical objects discovered in 1999 | DENIS 0255−4700 | [
"Astronomy"
] | 358 | [
"Eridanus (constellation)",
"Constellations"
] |
11,825,946 | https://en.wikipedia.org/wiki/American%20Sleep%20Apnea%20Association | The American Sleep Apnea Association (ASAA) is a non-profit organization founded in 1990 by persons with sleep apnea, health care providers and researchers. The association offers education and advocacy services to improve the lives of sleep apnea patients.
In March 2016, the organization partnered with IBM Watson to launch a ResearchKit study app called SleepHealth, to study the connection between sleep habits and health outcomes.
References
External links
Sleep disorders
Medical and health organizations based in Washington, D.C. | American Sleep Apnea Association | [
"Biology"
] | 101 | [
"Behavior",
"Sleep",
"Sleep disorders"
] |
11,826,062 | https://en.wikipedia.org/wiki/Auxiliary%20function | In mathematics, auxiliary functions are an important construction in transcendental number theory. They are functions that appear in most proofs in this area of mathematics and that have specific, desirable properties, such as taking the value zero for many arguments, or having a zero of high order at some point.
Definition
Auxiliary functions are not a rigorously defined kind of function, rather they are functions which are either explicitly constructed or at least shown to exist and which provide a contradiction to some assumed hypothesis, or otherwise prove the result in question. Creating a function during the course of a proof in order to prove the result is not a technique exclusive to transcendence theory, but the term "auxiliary function" usually refers to the functions created in this area.
Explicit functions
Liouville's transcendence criterion
Because of the naming convention mentioned above, auxiliary functions can be dated back to their source simply by looking at the earliest results in transcendence theory. One of these first results was Liouville's proof that transcendental numbers exist when he showed that the so-called Liouville numbers were transcendental. He did this by discovering a transcendence criterion which these numbers satisfied. To derive this criterion he started with a general algebraic number α and found some property that this number would necessarily satisfy. The auxiliary function he used in the course of proving this criterion was simply the minimal polynomial of α, which is the irreducible polynomial f with integer coefficients such that f(α) = 0. This function can be used to estimate how well the algebraic number α can be estimated by rational numbers p/q. Specifically if α has degree d at least two then he showed that
|f(p/q)| ≥ 1/q^d,
and also, using the mean value theorem, that there is some constant depending on α, say c(α), such that
|f(p/q)| ≤ c(α)·|α − p/q|.
Combining these results gives |α − p/q| ≥ 1/(c(α)·q^d), a property that the algebraic number must satisfy; therefore any number not satisfying this criterion must be transcendental.
The auxiliary function in Liouville's work is very simple, merely a polynomial that vanishes at a given algebraic number. This kind of property is usually the one that auxiliary functions satisfy. They either vanish or become very small at particular points, which is usually combined with the assumption that they do not vanish or can't be too small to derive a result.
Fourier's proof of the irrationality of e
Another simple, early occurrence is in Fourier's proof of the irrationality of e, though the notation used usually disguises this fact. Fourier's proof used the power series of the exponential function:
e^x = 1 + x + x^2/2! + x^3/3! + ⋯
By truncating this power series after, say, N + 1 terms we get a polynomial with rational coefficients of degree N which is in some sense "close" to the function e^x. Specifically if we look at the auxiliary function defined by the remainder:
R(x) = e^x − (1 + x + x^2/2! + ⋯ + x^N/N!)
then this function, an exponential polynomial, should take small values for x close to zero. If e is a rational number then by letting x = 1 in the above formula we see that R(1) is also a rational number. However, Fourier proved that R(1) could not be rational by eliminating every possible denominator. Thus e cannot be rational.
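In modern notation the denominator-elimination step runs as follows (a standard reconstruction of the argument, not a quotation from this text):

```latex
% Multiply R(1) by N!: every truncated term N!/n! with n <= N is an integer, while
0 < N!\, R(1) = \sum_{n=N+1}^{\infty} \frac{N!}{n!}
  < \sum_{m=1}^{\infty} \frac{1}{(N+1)^{m}} = \frac{1}{N} .
% If e = p/q and N >= q, then N!\,e is also an integer, forcing N!\,R(1)
% to be an integer strictly between 0 and 1, a contradiction.
```

The same idea of clearing denominators and then showing the remainder is too small to survive the clearing recurs in Hermite's arguments below.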
Hermite's proof of the irrationality of e^r
Hermite extended the work of Fourier by approximating the function e^x not with a polynomial but with a rational function, that is a quotient of two polynomials. In particular he chose polynomials A(x) and B(x) such that the auxiliary function R defined by
R(x) = B(x)·e^x − A(x)
could be made as small as he wanted around x = 0. But if e^r were rational then R(r) would have to be rational with a particular denominator, yet Hermite could make R(r) too small to have such a denominator, hence a contradiction.
Hermite's proof of the transcendence of e
To prove that e was in fact transcendental, Hermite took his work one step further by approximating not just the function e^x, but also the functions e^kx for integers k = 1, ..., m, where he assumed e was algebraic with degree m. By approximating e^kx by rational functions with integer coefficients and with the same denominator, say A_k(x)/B(x), he could define auxiliary functions R_k(x) by
R_k(x) = B(x)·e^kx − A_k(x).
For his contradiction Hermite supposed that e satisfied the polynomial equation with integer coefficients a_0 + a_1·e + ... + a_m·e^m = 0. Multiplying this expression through by B(1), and using B(1)·e^k = A_k(1) + R_k(1), he noticed that it implied
R := a_1·R_1(1) + ... + a_m·R_m(1) = −(a_0·B(1) + a_1·A_1(1) + ... + a_m·A_m(1)).
The right hand side is an integer and so, by estimating the auxiliary functions and proving that 0 < |R| < 1, he derived the necessary contradiction.
Auxiliary functions from the pigeonhole principle
The auxiliary functions sketched above can all be explicitly calculated and worked with. A breakthrough by Axel Thue and Carl Ludwig Siegel in the twentieth century was the realisation that these functions don't necessarily need to be explicitly known – it can be enough to know they exist and have certain properties. Using the Pigeonhole Principle Thue, and later Siegel, managed to prove the existence of auxiliary functions which, for example, took the value zero at many different points, or took high order zeros at a smaller collection of points. Moreover they proved it was possible to construct such functions without making the functions too large. Their auxiliary functions were not explicit functions, then, but by knowing that a certain function with certain properties existed, they used its properties to simplify the transcendence proofs of the nineteenth century and give several new results.
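The combinatorial engine behind these existence proofs is Siegel's lemma; one standard formulation (constants vary slightly between texts) is the following:

```latex
% Siegel's lemma: if A is an M x N integer matrix with N > M and all
% entries bounded in absolute value by B, then the homogeneous system
% Ax = 0 has a nonzero integer solution of controlled height:
\exists\, x \in \mathbb{Z}^{N} \setminus \{0\} : \quad A x = 0,
\qquad \max_{j} \lvert x_{j} \rvert \le (N B)^{M/(N-M)} .
```

The proof is exactly the pigeonhole counting argument alluded to above: one compares the number of integer tuples of bounded height with the number of possible values of Ax.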
This method was picked up on and used by several other mathematicians, including Alexander Gelfond and Theodor Schneider who used it independently to prove the Gelfond–Schneider theorem. Alan Baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately Baker's theorem. Another example of the use of this method from the 1960s is outlined below.
Auxiliary polynomial theorem
Let β equal the cube root of b/a in the equation ax^3 + by^3 = c and assume m is an integer that satisfies m + 1 > 2n/3 ≥ m ≥ 3, where n is a positive integer.
Then there exists
such that
The auxiliary polynomial theorem states
A theorem of Lang
In the 1960s Serge Lang proved a result using this non-explicit form of auxiliary functions. The theorem implies both the Hermite–Lindemann and Gelfond–Schneider theorems. The theorem deals with a number field K and meromorphic functions f_1, ..., f_N of order at most ρ, at least two of which are algebraically independent, and such that if we differentiate any of these functions then the result is a polynomial in all of the functions. Under these hypotheses the theorem states that if there are m distinct complex numbers ω_1, ..., ω_m such that f_i(ω_j) is in K for all combinations of i and j, then m is bounded by
m ≤ 20·ρ·[K : Q].
To prove the result Lang took two algebraically independent functions from f_1, ..., f_N, say f and g, and then created an auxiliary function which was simply a polynomial F in f and g. This auxiliary function could not be explicitly stated since f and g are not explicitly known. But using Siegel's lemma Lang showed how to make F in such a way that it vanished to a high order at the m complex numbers ω_1, ..., ω_m. Because of this high-order vanishing it can be shown that a high-order derivative of F takes a value of small size at one of the ω_i, "size" here referring to an algebraic property of a number. Using the maximum modulus principle Lang also found a separate way to estimate the absolute values of derivatives of F, and using standard results comparing the size of a number and its absolute value he showed that these estimates were contradicted unless the claimed bound on m holds.
Interpolation determinants
After the myriad of successes gleaned from using existent but not explicit auxiliary functions, in the 1990s Michel Laurent introduced the idea of interpolation determinants. These are alternants – determinants of matrices of the form
Δ = det( φ_i(ζ_j) ),
where the φ_i are a set of functions interpolated at a set of points ζ_j. Since a determinant is just a polynomial in the entries of a matrix, these auxiliary functions succumb to study by analytic means. A problem with the method was the need to choose a basis before the matrix could be worked with. A development by Jean-Benoît Bost removed this problem with the use of Arakelov theory, and research in this area is ongoing. The example below gives an idea of the flavour of this approach.
A proof of the Hermite–Lindemann theorem
One of the simpler applications of this method is a proof of the real version of the Hermite–Lindemann theorem. That is, if α is a non-zero, real algebraic number, then e^α is transcendental. First we let k be some natural number and n be a large multiple of k. The interpolation determinant considered is the determinant Δ of an n^4 × n^4 matrix whose rows are indexed by 1 ≤ i_1 ≤ n^4/k and 1 ≤ i_2 ≤ k, and whose columns are indexed by 1 ≤ j_1 ≤ n^3 and 1 ≤ j_2 ≤ n. So the functions in our matrix are monomials in x and e^x and their derivatives, and we are interpolating at the k points 0, α, 2α, ..., (k − 1)α. Assuming that e^α is algebraic we can form the number field Q(α, e^α) of degree m over Q, and then multiply Δ by a suitable denominator as well as all its images under the embeddings of the field Q(α, e^α) into C. For algebraic reasons this product is necessarily an integer, and using arguments relating to Wronskians it can be shown that it is non-zero, so its absolute value is an integer Ω ≥ 1.
Using a version of the mean value theorem for matrices it is possible to get an analytic upper bound on Ω as well, one which shrinks as the number of interpolation points grows. The number m is fixed by the degree of the field Q(α, e^α), but k is the number of points we are interpolating at, and so we can increase it at will. And once k > 2(m + 1)/3 the analytic bound forces Ω → 0, eventually contradicting the established condition Ω ≥ 1. Thus e^α cannot be algebraic after all.
Notes
References
Number theory
Diophantine approximation | Auxiliary function | [
"Mathematics"
] | 2,187 | [
"Discrete mathematics",
"Mathematical relations",
"Diophantine approximation",
"Approximations",
"Number theory"
] |
11,827,482 | https://en.wikipedia.org/wiki/Wellcome%20Genome%20Campus | The Wellcome Genome Campus is a scientific research campus built in the grounds of Hinxton Hall, Hinxton in Cambridgeshire, England.
Campus
The Campus is home to several institutes and organisations working in genomics and computational biology. The Campus is part of the Wellcome Trust, a global charitable foundation that exists to improve health, and houses the Wellcome Sanger Institute, the European Bioinformatics Institute (EBI), the bioinformatics outstation of the European Molecular Biology Laboratory (EMBL), and a number of biotech companies whose UK offices are located in the BioData Innovation Centre, which acts as an incubator for businesses of all sizes.
In 2020, the South Cambridgeshire District Council granted outline planning permission for an expansion of the Campus. The expansion will increase the overall Campus grounds from 125 acres to 440 acres. The first buildings are expected to be completed in 2026.
Activities
At the Campus, genome and biodata research takes place. The Campus provides bioinformatics services and delivers training in genomics and biodata to scientists and clinicians.
History
Opening of the Campus in 1994
At the time of its official opening by the Princess Royal in 1994, the Wellcome Genome Campus was already home to the Wellcome Sanger Institute (then called the Sanger Centre), the Medical Research Council's Human Genome Mapping Project Resource Centre, and the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI).
Wellcome funded the establishment of the Sanger Centre in 1993 and chose Hinxton as the home for its new genome research institute. Shortly after, EMBL-EBI located on the same site, and the two institutes formed a natural fit, consolidating expertise, facilities and knowledge in one place and enabling both to play a major role in the Human Genome Project – a global collaboration to sequence the first 'reference' human genome.
One third of the human genome was sequenced for the first time at the Wellcome Trust Sanger Institute, and the data was stored and shared through EMBL-EBI. This was the largest single contribution of any centre to the Human Genome Project, making the Campus and its collaborations uniquely important in the history of genomics.
Since the announcement of the completion of the draft human genome in 2000, and its final completion in 2003, rapid progress in sequencing technology has enabled new areas of science to be opened up for exploration. At its opening in 1994, the Campus housed approximately 400 employees. This has grown to over 2,600 people employed at the Wellcome Genome Campus today, making the Campus a densely concentrated and globally significant cluster for biodata and genomics expertise.
Before 1993
The first recorded owner of the estate, in 1506, was the college of Michaelhouse in Cambridge but it wasn’t until the early eighteenth century that the first building – a modest hunting and fishing lodge – was erected by Captain Joseph Richardson of Horseheath. It became a gentleman’s retreat with well-stocked trout ponds and fields full of partridge.
The current Hall was built by John Bromwell Jones in 1748 and remains today as the central three-storey block on the Campus. Opposite the house were stables, a kitchen garden and an orchard, all of which still exist, albeit in altered form.
By 1800 ownership of the Hall and estate had passed to the Green family, who remained until 1920, when the Hall was sold to the Robinsons. During the Second World War, the Hall was used for billeting American soldiers, stationed at the local airbase at Duxford.
In 1953 the Hall and grounds were sold to Tube Investments Plc for use as research laboratories, which closed in the late 1980s. The site remained under their ownership until it was sold to Genome Research Limited in 1992.
Sanger Institute's History
The Wellcome Trust established the Sanger Centre in 1992 to undertake the most ambitious project ever attempted in biology, sequencing the human genome. The new facility developed laboratory infrastructure, robotics, team working and computational approaches on a scale unprecedented in life sciences.
In 2000, the first draft of the human genome was announced with the Sanger Centre championing open access to the data and making the largest contribution to the global collaborative endeavour. Genomes began to convert biology into big data science. The subsequently renamed Wellcome Trust Sanger Institute established long term research programmes to explore and apply genome sequences.
References
1993 establishments in England
Biotechnology in the United Kingdom
DNA sequencing
Genomics organizations
Hinxton
Research institutes established in 1993
Research institutes in Cambridgeshire
Science parks in the United Kingdom
Buildings and structures in South Cambridgeshire District
Genome Campus | Wellcome Genome Campus | [
"Chemistry",
"Biology"
] | 942 | [
"Molecular biology techniques",
"DNA sequencing",
"Biotechnology in the United Kingdom",
"Biotechnology by country"
] |
11,827,553 | https://en.wikipedia.org/wiki/Reciprocal%20Fibonacci%20constant | The reciprocal Fibonacci constant ψ is the sum of the reciprocals of the Fibonacci numbers:
ψ = Σ_{k=1}^{∞} 1/F_k = 1/1 + 1/1 + 1/2 + 1/3 + 1/5 + 1/8 + ⋯
Because the ratio of successive terms tends to the reciprocal of the golden ratio, which is less than 1, the ratio test shows that the sum converges.
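Explicitly, successive Fibonacci numbers satisfy F_{k+1}/F_k → φ, so the ratio of successive terms of the series is:

```latex
\lim_{k \to \infty} \frac{1/F_{k+1}}{1/F_{k}}
  = \lim_{k \to \infty} \frac{F_{k}}{F_{k+1}}
  = \frac{1}{\varphi}
  = \frac{\sqrt{5} - 1}{2} \approx 0.618 < 1 .
```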
The value of ψ is approximately 3.359885666….
With k terms, the series gives O(k) digits of accuracy. Bill Gosper derived an accelerated series which provides O(k^2) digits.
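As an illustration (a minimal Python sketch, not part of the original article), the series can be summed with exact rational arithmetic; the geometric convergence means a hundred terms already exceed double precision:

```python
from fractions import Fraction

def reciprocal_fibonacci(terms: int) -> Fraction:
    """Sum the first `terms` reciprocals of the Fibonacci numbers exactly."""
    a, b = 1, 1          # F_1, F_2
    total = Fraction(0)
    for _ in range(terms):
        total += Fraction(1, a)
        a, b = b, a + b  # advance the Fibonacci sequence
    return total

print(float(reciprocal_fibonacci(100)))  # ≈ 3.359885666...
```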
ψ is irrational, as was conjectured by Paul Erdős, Ronald Graham, and Leonard Carlitz, and proved in 1989 by Richard André-Jeannin.
Its simple continued fraction representation begins:
ψ = [3; 2, 1, 3, …].
Generalization and related constants
In analogy to the Riemann zeta function, define the Fibonacci zeta function as
ζ_F(s) = Σ_{n=1}^{∞} 1/F_n^s
for complex numbers s with Re(s) > 0, and by its analytic continuation elsewhere. In particular the given function equals ψ when s = 1.
It was shown that:
The value of ζ_F(2s) is transcendental for any positive integer s, which is similar to the case of the even-index Riemann zeta-constants ζ(2s).
The constants ζ_F(2), ζ_F(4) and ζ_F(6) are algebraically independent.
Except for ζ_F(1) = ψ, which was proved to be irrational, the number-theoretic properties of ζ_F(2s + 1) (whenever s is a non-negative integer) are mostly unknown.
See also
List of sums of reciprocals
List of mathematical constants
References
External links
Mathematical constants
Fibonacci numbers
Irrational numbers | Reciprocal Fibonacci constant | [
"Mathematics"
] | 264 | [
"Recurrence relations",
"Irrational numbers",
"Fibonacci numbers",
"Mathematical objects",
"Golden ratio",
"Mathematical relations",
"nan",
"Mathematical constants",
"Numbers"
] |
11,827,758 | https://en.wikipedia.org/wiki/PEST%20sequence | A PEST sequence is a peptide sequence that is rich in proline (P), glutamic acid (E), serine (S) and threonine (T). It is associated with proteins that have a short intracellular half-life, so it might act as a signal peptide for protein degradation. This may be mediated via the proteasome or calpain.
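As a toy illustration of how PEST-rich regions can be flagged computationally (this is not the published PEST-FIND algorithm; the 12-residue window, the 75% threshold, and the example sequence are all invented for illustration):

```python
def pest_rich_windows(protein: str, window: int = 12, threshold: float = 0.75):
    """Yield (start, fragment) pairs for windows unusually rich in P, E, S, T."""
    pest = set("PEST")
    for start in range(len(protein) - window + 1):
        fragment = protein[start:start + window]
        share = sum(residue in pest for residue in fragment) / window
        if share >= threshold:
            yield start, fragment

# Hypothetical example sequence with a PEST-like stretch in the middle.
for start, fragment in pest_rich_windows("MKVLAPESTSPESTTEQLRKWA"):
    print(start, fragment)
```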
References
Peptide sequences
Proteins
Post-translational modification | PEST sequence | [
"Chemistry"
] | 91 | [
"Biomolecules by chemical classification",
"Molecular and cellular biology stubs",
"Gene expression",
"Biochemical reactions",
"Biochemistry stubs",
"Post-translational modification",
"Molecular biology",
"Proteins"
] |
11,830,303 | https://en.wikipedia.org/wiki/Dissipative%20soliton | Dissipative solitons (DSs) are stable solitary localized structures that arise in nonlinear spatially extended dissipative systems due to mechanisms of self-organization. They can be considered as an extension of the classical soliton concept in conservative systems. An alternative terminology includes autosolitons, spots and pulses.
Apart from aspects similar to the behavior of classical particles like the formation of bound states, DSs exhibit interesting behavior – e.g. scattering, creation and annihilation – all without the constraints of energy or momentum conservation. The excitation of internal degrees of freedom may result in a dynamically stabilized intrinsic speed, or periodic oscillations of the shape.
Historical development
Origin of the soliton concept
DSs have been experimentally observed for a long time. Helmholtz measured the propagation velocity of nerve pulses in 1850. In 1902, Lehmann found the formation of localized anode spots in long gas-discharge tubes. Nevertheless, the term "soliton" was originally developed in a different context. The starting point was the experimental detection of "solitary water waves" by Russell in 1834. These observations initiated the theoretical work of Rayleigh and Boussinesq around 1870, which finally led to the approximate description of such waves by Korteweg and de Vries in 1895; that description is known today as the (conservative) KdV equation.
On this background the term "soliton" was coined by Zabusky and Kruskal in 1965. These authors investigated certain well-localised solitary solutions of the KdV equation and named these objects solitons. Among other things they demonstrated that in 1-dimensional space solitons exist, e.g. in the form of two unidirectionally propagating pulses with different size and speed, exhibiting the remarkable property that number, shape and size are the same before and after collision.
Gardner et al. introduced the inverse scattering technique for solving the KdV equation and proved that this equation is completely integrable. In 1972 Zakharov and Shabat found another integrable equation, and finally it turned out that the inverse scattering technique can be applied successfully to a whole class of equations (e.g. the nonlinear Schrödinger and sine-Gordon equations). From 1965 up to about 1975, a common agreement was reached: to reserve the term soliton for pulse-like solitary solutions of conservative nonlinear partial differential equations that can be solved by using the inverse scattering technique.
Weakly and strongly dissipative systems
With increasing knowledge of classical solitons, possible technical applicability came into perspective, with the most promising one at present being the transmission of optical solitons via glass fibers for the purpose of data transmission. In contrast to conservative systems, solitons in fibers dissipate energy and this cannot be neglected on an intermediate and long time scale. Nevertheless, the concept of a classical soliton can still be used in the sense that on a short time scale dissipation of energy can be neglected. On an intermediate time scale one has to take small energy losses into account as a perturbation, and on a long scale the amplitude of the soliton will decay and finally vanish.
There are however various types of systems which are capable of producing solitary structures and in which dissipation plays an essential role for their formation and stabilization. Although research on certain types of these DSs has been carried out for a long time (for example, see the research on nerve pulses culminating in the work of Hodgkin and Huxley in 1952), since 1990 the amount of research has significantly increased. Possible reasons are improved experimental devices and analytical techniques, as well as the availability of more powerful computers for numerical computations. Nowadays, it is common to use the term dissipative solitons for solitary structures in strongly dissipative systems.
Experimental observations
Today, DSs can be found in many different experimental set-ups. Examples include:
Gas-discharge systems: plasmas confined in a discharge space which often has a lateral extension large compared to the main discharge length. DSs arise as current filaments between the electrodes and were found in DC systems with a high-ohmic barrier, AC systems with a dielectric barrier, and as anode spots, as well as in an obstructed discharge with metallic electrodes.
Semiconductor systems: these are similar to gas-discharges; however, instead of a gas, semiconductor material is sandwiched between two planar or spherical electrodes. Set-ups include Si and GaAs pin diodes, n-GaAs, and Si p+−n+−p−n−, and ZnS:Mn structures.
Nonlinear optical systems: a light beam of high intensity interacts with a nonlinear medium. Typically the medium reacts on rather slow time scales compared to the beam propagation time. Often, the output is fed back into the input system via single-mirror feedback or a feedback loop. DSs may arise as bright spots in a two-dimensional plane orthogonal to the beam propagation direction; one may, however, also exploit other effects like polarization. DSs have been observed for saturable absorbers, degenerate optical parametric oscillators (DOPOs), liquid crystal light valves (LCLVs), alkali vapor systems, photorefractive media, and semiconductor microresonators.
If the vectorial properties of DSs are considered, a vector dissipative soliton can also be observed in a fiber laser passively mode-locked through a saturable absorber. In addition, multiwavelength dissipative solitons have been obtained in an all-normal-dispersion fiber laser passively mode-locked with a SESAM. It is confirmed that, depending on the cavity birefringence, stable single-, dual- and triple-wavelength dissipative solitons can be formed in the laser. Their generation mechanism can be traced back to the nature of the dissipative soliton.
Chemical systems: realized either as one- and two-dimensional reactors or via catalytic surfaces, DSs appear as pulses (often as propagating pulses) of increased concentration or temperature. Typical reactions are the Belousov–Zhabotinsky reaction, the ferrocyanide-iodate-sulphite reaction as well as the oxidation of hydrogen, CO, or iron. Nerve pulses or migraine aura waves also belong to this class of systems.
Vibrated media: vertically shaken granular media, colloidal suspensions, and Newtonian fluids produce harmonically or sub-harmonically oscillating heaps of material, which are usually called oscillons.
Hydrodynamic systems: the most prominent realization of DSs are domains of convection rolls on a conducting background state in binary liquids. Another example is a film dragging in a rotating cylindric pipe filled with oil.
Electrical networks: large one- or two-dimensional arrays of coupled cells with a nonlinear current–voltage characteristic. DSs are characterized by a locally increased current through the cells.
Remarkably enough, phenomenologically the dynamics of the DSs in many of the above systems are similar in spite of the microscopic differences. Typical observations are (intrinsic) propagation, scattering, formation of bound states and clusters, drift in gradients, interpenetration, generation, and annihilation, as well as higher instabilities.
Theoretical description
Most systems showing DSs are described by nonlinear partial differential equations. Discrete difference equations and cellular automata are also used. Up to now, modeling from first principles followed by a quantitative comparison of experiment and theory has been performed only rarely and sometimes also poses severe problems because of large discrepancies between microscopic and macroscopic time and space scales. Often simplified prototype models are investigated which reflect the essential physical processes in a larger class of experimental systems. Among these are:
Reaction–diffusion systems, used for chemical systems, gas-discharges and semiconductors. The evolution of the state vector q(x, t) describing the concentration of the different reactants is determined by diffusion as well as local reactions:
∂_t q = D·Δq + R(q).
A frequently encountered example is the two-component Fitzhugh–Nagumo-type activator–inhibitor system
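One widely used explicit version of such a system (given here as a reconstruction, since coefficient names vary across the literature) is:

```latex
\partial_t u = d_u \,\Delta u + \lambda u - u^{3} - \kappa_3 v + \kappa_1 ,
\qquad
\tau\, \partial_t v = d_v \,\Delta v + u - v .
```

Here u plays the role of the activator and v that of the inhibitor; d_u and d_v are diffusion constants and τ sets the inhibitor's time scale.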
Stationary DSs are generated by production of material in the center of the DSs, diffusive transport into the tails and depletion of material in the tails. A propagating pulse arises from production in the leading and depletion in the trailing end. Among other effects, one finds periodic oscillations of DSs ("breathing"), bound states, and collisions, merging, generation and annihilation.
Ginzburg–Landau type systems for a complex scalar q(x, t) used to describe nonlinear optical systems, plasmas, Bose-Einstein condensation, liquid crystals and granular media. A frequently found example is the cubic-quintic subcritical Ginzburg–Landau equation
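In a frequently used convention (a hedged reconstruction of the standard form, with subscripts r and i denoting real and imaginary parts of the coefficients), the equation reads:

```latex
\partial_t q = (d_r + i d_i)\,\Delta q + (l_r + i l_i)\, q
             + (c_r + i c_i)\,\lvert q \rvert^{2} q
             + (q_r + i q_i)\,\lvert q \rvert^{4} q .
```

Subcritical behaviour typically corresponds to a destabilizing cubic term (c_r > 0) saturated by a stabilizing quintic term (q_r < 0).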
To understand the mechanisms leading to the formation of DSs, one may consider the energy ρ = |q|^2, for which one may derive a continuity equation of the form ∂_t ρ = ∇·J + S, with a flux J and a local source term S.
One can thereby show that energy is generally produced in the flanks of the DSs and transported to the center and potentially to the tails where it is depleted. Dynamical phenomena include propagating DSs in 1d, propagating clusters in 2d, bound states and vortex solitons, as well as "exploding DSs".
The Swift–Hohenberg equation is used in nonlinear optics and in the dynamics of granular media, flames and electroconvection. Swift–Hohenberg can be considered as an extension of the Ginzburg–Landau equation, obtained by adding a stabilizing higher-order spatial derivative term; a common form is shown below.
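For orientation, the real Swift–Hohenberg equation in its most common normalization (a reference form only, not necessarily the exact display lost from this text; in the complex variant the coefficients, including the dr discussed below, become complex numbers) is:

```latex
\partial_t u = r\,u - (1 + \Delta)^{2} u + b\,u^{2} - u^{3} .
```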
For dr > 0 one essentially has the same mechanisms as in the Ginzburg–Landau equation. For dr < 0, in the real Swift–Hohenberg equation one finds bistability between homogeneous states and Turing patterns. DSs are stationary localized Turing domains on the homogeneous background. This also holds for the complex Swift–Hohenberg equations; however, propagating DSs as well as interaction phenomena are also possible, and observations include merging and interpenetration.
Particle properties and universality
DSs in many different systems show universal particle-like properties. To understand and describe the latter, one may try to derive "particle equations" for slowly varying order parameters like position, velocity or amplitude of the DSs by adiabatically eliminating all fast variables in the field description. This technique is known from linear systems, however mathematical problems arise from the nonlinear models due to a coupling of fast and slow modes.
Similar to low-dimensional dynamic systems, for supercritical bifurcations of stationary DSs one finds characteristic normal forms essentially depending on the symmetries of the system. E.g., for a transition from a symmetric stationary to an intrinsically propagating DS one finds the pitchfork normal form
dv/dt = (σ − σ_0)·v − |v|^2·v
for the velocity v of the DS; here σ represents the bifurcation parameter and σ_0 the bifurcation point. For a bifurcation to a "breathing" DS, one finds the Hopf normal form
dA/dt = (σ − σ_0)·A − |A|^2·A
for the amplitude A of the oscillation. It is also possible to treat "weak interaction" as long as the overlap of the DSs is not too large. In this way, a comparison between experiment and theory is facilitated.
Note that the above problems do not arise for classical solitons, as inverse scattering theory yields complete analytical solutions.
See also
Clapotis
Compacton, a soliton with compact support
Fiber laser
Freak waves may be a related phenomenon
Graphene
Nonlinear Schrödinger equation
Nonlinear system
Oscillon
Peakon, a soliton with a non-differentiable peak
Q-ball, a non-topological soliton
Sine-Gordon equation
Solitary waves in discrete media
Soliton (optics)
Soliton (topological)
Soliton model of nerve impulse propagation
Topological quantum number
Vector soliton
References
Inline
Books and overview articles
N. Akhmediev and A. Ankiewicz, Dissipative Solitons, Lecture Notes in Physics, Springer, Berlin (2005)
N. Akhmediev and A. Ankiewicz, Dissipative Solitons: From Optics to Biology and Medicine, Lecture Notes in Physics, Springer, Berlin (2008)
H.-G. Purwins et al., Advances in Physics 59 (2010): 485
A. W. Liehr: Dissipative Solitons in Reaction Diffusion Systems. Mechanism, Dynamics, Interaction. Volume 70 of Springer Series in Synergetics, Springer, Berlin Heidelberg 2013.
Solitons
Self-organization
Systems theory | Dissipative soliton | [
"Mathematics"
] | 2,623 | [
"Self-organization",
"Dynamical systems"
] |
11,830,372 | https://en.wikipedia.org/wiki/Menger%20curvature | In mathematics, the Menger curvature of a triple of points in n-dimensional Euclidean space Rn is the reciprocal of the radius of the circle that passes through the three points. It is named after the Austrian-American mathematician Karl Menger.
Definition
Let x, y and z be three points in R^n; for simplicity, assume for the moment that all three points are distinct and do not lie on a single straight line. Let Π ⊆ R^n be the Euclidean plane spanned by x, y and z and let C ⊆ Π be the unique Euclidean circle in Π that passes through x, y and z (the circumcircle of x, y and z). Let R be the radius of C. Then the Menger curvature c(x, y, z) of x, y and z is defined by
c(x, y, z) = 1/R.
If the three points are collinear, R can be informally considered to be +∞, and it makes rigorous sense to define c(x, y, z) = 0. If any of the points x, y and z are coincident, again define c(x, y, z) = 0.
Using the well-known formula relating the side lengths of a triangle to its area, it follows that
c(x, y, z) = 4A / (|x − y|·|y − z|·|z − x|),
where A denotes the area of the triangle spanned by x, y and z.
Another way of computing Menger curvature is the identity
c(x, y, z) = 2·sin(∠xyz) / |x − z|,
where ∠xyz is the angle made at the y-corner of the triangle spanned by x, y, z.
Menger curvature may also be defined on a general metric space. If X is a metric space and x, y, and z are distinct points, let f be an isometry from {x, y, z} into R^2. Define the Menger curvature of these points to be
c_X(x, y, z) = c(f(x), f(y), f(z)).
Note that f need not be defined on all of X, just on {x, y, z}, and the value c_X(x, y, z) is independent of the choice of f.
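To make the formulas concrete, here is a minimal Python sketch (illustrative only, not from the article) that evaluates c(x, y, z) from the side lengths via Heron's formula; three points on the unit circle should give curvature 1:

```python
import math

def menger_curvature(x, y, z):
    """Reciprocal circumradius of the circle through three points in R^n."""
    a = math.dist(x, y)
    b = math.dist(y, z)
    c = math.dist(x, z)
    # Heron's formula for the area of the triangle spanned by the points.
    s = (a + b + c) / 2
    area_sq = s * (s - a) * (s - b) * (s - c)
    if area_sq <= 0:
        return 0.0  # collinear (or coincident) points have curvature 0
    return 4 * math.sqrt(area_sq) / (a * b * c)

# Three points on the unit circle: the circumcircle has radius 1.
print(menger_curvature((1, 0), (0, 1), (-1, 0)))  # 1.0
```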
Integral Curvature Rectifiability
Menger curvature can be used to give quantitative conditions for when sets in R^n may be rectifiable. For a Borel measure μ on a Euclidean space define
c^2(μ) = ∫∫∫ c(x, y, z)^2 dμ(x) dμ(y) dμ(z).
A Borel set E is rectifiable if c^2(H^1|_E) < ∞, where H^1|_E denotes one-dimensional Hausdorff measure restricted to the set E.
The basic intuition behind the result is that Menger curvature measures how straight a given triple of points is (the smaller c(x, y, z) is, the closer x, y, and z are to being collinear), and this integral quantity being finite is saying that the set E is flat on most small scales. In particular, if the power in the integral is larger, our set is smoother than just being rectifiable.
Let , be a homeomorphism and . Then if .
If where , and , then is rectifiable in the sense that there are countably many curves such that . The result is not true for , and for .
In the opposite direction, there is a result of Peter Jones:
If E is rectifiable with H^1(E) < ∞, then there is a positive Radon measure μ supported on E satisfying μ(B(x, r)) ≤ r for all x ∈ E and r > 0, and such that c^2(μ) < ∞ (in particular, this measure is the Frostman measure associated to E). Moreover, if H^1(B(x, r) ∩ E) ≤ C·r for some constant C and all x ∈ E and r > 0, then c^2(H^1|_E) < ∞. This last result follows from the Analyst's Traveling Salesman Theorem.
Analogous results hold in general metric spaces.
See also
Menger-Melnikov curvature of a measure
External links
References
Curvature (mathematics)
Multi-dimensional geometry | Menger curvature | [
"Physics"
] | 694 | [
"Geometric measurement",
"Physical quantities",
"Curvature (mathematics)"
] |
11,830,463 | https://en.wikipedia.org/wiki/LIGO%20Scientific%20Collaboration | The LIGO Scientific Collaboration (LSC) is a scientific collaboration of international physics institutes and research groups dedicated to the search for gravitational waves.
History
The LSC was established in 1997, under the leadership of Barry Barish. Its mission is to ensure equal scientific opportunity for individual participants and institutions by organizing research, publications, and all other scientific activities, and it includes scientists from both LIGO Laboratory and collaborating institutions. Barish appointed Rainer Weiss as the first spokesperson.
LSC members have access to the US-based Advanced LIGO detectors in Hanford, Washington and in Livingston, Louisiana, as well as the GEO 600 detector in Sarstedt, Germany. Under an agreement with the European Gravitational Observatory (EGO), LSC members also have access to data from the Virgo detector in Pisa, Italy. While the LSC and the Virgo Collaboration are separate organizations, they cooperate closely and are referred to collectively as "LVC". The KAGRA observatory's collaboration has joined the LIGO-Virgo collective, and the LIGO-Virgo-KAGRA collective is called "LVK".
The LSC Spokesperson is Patrick Brady of University of Wisconsin-Milwaukee. The Executive Director of the LIGO Laboratory is David Reitze from the University of Florida.
On 11 February 2016, the LIGO and Virgo collaborations announced that they succeeded in making the first direct gravitational wave observation on 14 September 2015.
In 2016, Barish received the Enrico Fermi Prize "for his fundamental contributions to the formation of the LIGO and LIGO-Virgo scientific collaborations and for his role in addressing challenging technological and scientific aspects whose solution led to the first detection of gravitational waves".
Collaboration members
As of November 2015, the membership of the LIGO Scientific Collaboration comprised international physics institutes and research groups.
Notes
References
External links
LIGO Magazine
Gravitational-wave astronomy
Astronomy in the United States
Organizations based in California
Organizations based in Massachusetts
Organizations established in 1997
Albert Einstein Medal recipients | LIGO Scientific Collaboration | [
"Physics",
"Astronomy"
] | 392 | [
"Astronomical sub-disciplines",
"Gravitational-wave astronomy",
"Astrophysics"
] |
11,830,755 | https://en.wikipedia.org/wiki/Grotesque%20%28architecture%29 | In architecture, a grotesque is a fantastic or mythical figure carved from stone and fixed to the walls or roof of a building. A chimera is a type of grotesque depicting a mythical combination of multiple animals (sometimes including humans). Grotesques are often called gargoyles, although the term gargoyle refers to figures carved specifically to drain water away from the sides of buildings. In the Middle Ages, the term babewyn was used to refer to both gargoyles and chimerae. This word is derived from the Italian word babbuino, which means "baboon".
Grotesques often depict whimsical, mythical creatures in dramatic or humorous ways. They have historically been a key element of architecture in many periods including the Renaissance and Medieval periods and have stylistically developed in conjunction with these times. Although grotesques typically depict a wide range of subjects, they are often hybrids of different mythical, human, and animalistic features.
Many scholars describe grotesques as being used to ward off evil and as reminders of the separation of the earth and the divine. Grotesques are predominantly carved into buildings of religious significance, in particular churches and cathedrals. Despite their presence in religious spaces, their anthropomorphic designs are largely not directly religious and instead are often more whimsical without religious connotations. They commonly exist on high ledges and rooftops and are frequently positioned out of view from common areas. Prominent examples of preserved grotesques exist on buildings such as the Florence Cathedral and Notre-Dame de Paris. Historically, grotesques have also had significant design influence from sculptural trends and often their architects were originally sculptors or artists. This meant that the widespread emergence of grotesques also often converged with popular art styles that existed at the time, especially the combined rise of the gothic style and the addition of grotesques in architecture. Key architects that often included grotesques as a feature in their designs included Brunelleschi and Gundulf of Rochester.
Bridaham, in his book Gargoyles, Chimeres, and the Grotesque in French Gothic Sculpture, pointed out that the sculptors of the gothic cathedrals in the twelfth and thirteenth centuries were tasked by the Pope to be "preacher[s] in stone" to the illiterates who populated Europe at the time. It fell to the sculptors not only to present the stories of the Bible but also to portray the animals and beings who populated the folklore of the times. Many of these showed up as grotesques.
Some critics, such as Frances Barasch, dismissed the use of the grotesque as an idle toy and not of any great use. They also argued that it perpetuated superstition instead of articulating what is real or the truth.
The meaning and use of the grotesque is also changing in architecture. Aside from the sculpture, for instance, the term has been used to describe the search for the abnormal or the representation of caricature. There are also scholars who use the architectural definition of grotesque as a term for disharmony. These include Peter Eisenman, a Jewish Deconstructivist architect who used this conceptualization in his work. In particular, he used the term in presenting a stylistic opposition to the form of aesthetics that is identified with the Kantian notion of the sublime in architecture.
History of grotesques in architecture
Grotesques in architecture can be traced back to their origins in medieval architecture; however, they rose to prominence in Renaissance building design, becoming more whimsical and elaborate during this time. Originally designed as spouts to drain water from buildings and gutters (such spouts are now called gargoyles), grotesques became a sculptural feature during the medieval period, and their often intricate designs developed alongside the gothic architecture period that took place in Europe from the 12th to the 16th century, establishing a basis for the common features of grotesque designs. The earliest examples of grotesques in architecture exist at historic sites such as Salisbury Cathedral. The earliest instances of grotesques in architecture were deeply intertwined with religious spaces. The architect of buildings such as Salisbury Cathedral was a monk, contributing to the rising interest in grotesques on religious buildings. Even after their establishment as a key feature of early medieval architecture, they continued to appear chiefly in religious contexts up until the Renaissance period almost 500 years later. Even in these early examples of grotesques in architecture there are clear mythological influences, and their whimsical style was established early on. Grotesques in architecture are most commonly found on religious buildings and in religious contexts. Historically, grotesques in architecture existed to amplify the traditionally dull waterspouts that existed on buildings throughout the medieval and Renaissance periods. Many practicing sculptors, such as Brunelleschi, would later venture into architecture and bring with them their knowledge and understanding of sculpture and design, contributing to the growing number of grotesques designed and executed in architecture.
Renaissance architecture
Grotesques were a key feature of architecture and landscape design in the Renaissance period. They rose to prominence in the 14th century as a popular architectural feature on churches and other buildings of religious importance, and they remained a staple of Renaissance architecture until the end of the period in the 17th century, expanding from an architectural feature into a key aspect of Renaissance landscape design.
Many examples of grotesques are preserved on Renaissance buildings such as the Florence Cathedral. Grotesques in the Renaissance period are largely influenced by Renaissance styles that were prominent at the time. These included design features such as the separation of the practical and the stylised. This allowed grotesques to flourish as a key design feature on many Renaissance buildings as they became an element of the Renaissance aesthetic which became more important than their usefulness as decorative waterspouts.
The grotesques on Renaissance buildings such as the Sistine Chapel are examples of the decorative interpretations of grotesques that existed in the Renaissance period. Luke Morgan in The Monster in the Garden described the integral position that grotesques had aesthetically in Renaissance design and architecture. He described the use of grotesques in this time as not just sculptural but also a wider depiction of the massive art movement of grotesque imagery that was then occurring. Grotesque imagery in art in the Renaissance period with depictions of “monstrous births, hybrid creatures and legendary beasts” created a basis for the emerging style that would become the style of grotesques in architecture.
This developing architectural style drew heavily from artistic influences, combining the rising public interest in myths and monsters into a sound architectural element in many Renaissance buildings. Similarly, architects in the Renaissance often started out as sculptors, which lent itself to the rise in grotesques created on buildings. This led to architects creating buildings that allowed for the addition of sculptural features such as the grotesques that sit atop them. This was the case with the architect Brunelleschi, who designed the Cathedral of Santa Maria del Fiore. As he was an artist before becoming an architect, the grotesques and other sculptures that exist within the cathedral are a clear choice by him as a result of his previous experience with sculpture.
Medieval architecture
Grotesques also were a key feature of medieval architecture. As the Middle Ages were often referred to as “the age of faith,” religious institutions were hugely important and heavily decorated. Grotesques played a key role in this adding often humorous and subtly subversive touches to these institutions of faith.
Thomas A. Fudgé describes the importance of the inclusion of grotesques in medieval architecture in Medieval Religion and Its Anxieties. He highlights the deep importance that religious institutions had in this period, often reflected in the architecture of the time as churches stood out and loomed over entire towns. As a result, their decorative grotesques served to watch over entire towns acting not just as protectors but as watchful eyes for any potential acts of blasphemy.
Medieval sculpture also often depicted its subjects with a striking “moral transparency”, a key element of the gothic art that was emerging at the time. This concurrent sculptural depiction of good and evil saw a similar pattern emerge in the sculpting of grotesques at the time. Medieval art was governed by religious influences, hence the often mythical and whimsical depictions within architectural grotesques at the time.
Key examples of grotesques in medieval architecture include the grotesques adorning St Vitus Cathedral and the Colegiata de San Pedro de Cervatos. The presence of grotesques in the medieval period was also marked by an increased interest in displaying personal character, which quickly developed into the anthropomorphic style that has become a staple of the stone carvings. The distinct style of medieval grotesques is considered by G.R. Redgrave to be "the strange mixture of the sacred and the profane."
Medieval grotesques were similarly influenced by prominent religious beliefs in Europe at the time and were featured heavily on churches and other religious buildings. Architects in the medieval period were heavily influenced by the rise of the Catholic Church, and the style of grotesques developed in tandem with this. Architects such as Gundulf of Rochester heavily influenced the rising style of grotesques on religious buildings. Previously a monk, Gundulf of Rochester went on to design some of the most prominent religious buildings of the medieval era, including Rochester Cathedral, and with this established the use of grotesques as a staple on religious buildings such as churches.
Architectural features
Often also referred to as chimeras, grotesques are the carvings around gargoyles, which are the spouts designed to drain water from buildings. They largely portray mythical creatures which were considered to protect the buildings they reside on from evil and encourage the viewer to reflect on the separation between themselves and the divine.
Due to the use of weighty stone to create the grotesques, they were carved in workshops and then lifted into the heights of buildings after they were completed. The main materials used to create grotesques included marble, sandstone, and limestone with the option of including metal rods to reinforce their structural integrity.
In most instances grotesques are open-mouthed, with their attached waterspout emerging from their mouths; however, there are a variety of ways for the waterspout to emerge. In many instances it emerges from the figure's body or from an object that the carving is holding instead. As grotesques were extensions of waterspouts, most sustained water damage where the water flowed out, making them difficult to repair without replacing the entire sculpture.
Due to their role in draining water from gutters, grotesques are commonly placed high on rooftops and on cornices of interior walls. This often leaves grotesques slightly hidden, allowing their subject matter to be more playful than architectural features placed at eye level, and allowing their architects to be more creative in the designs of their water-draining features while achieving aesthetic continuity within their buildings.
Religious importance
Despite adorning mostly religious spaces and buildings of importance, the bizarre thematic patterns of grotesques are unusual and often not necessarily aligned with the views of the institutions they occupy. Often meant to be humorous, such as the long-necked grotesques at the Bayeux Cathedral, their contradictory meanings and placement still raise many questions.
For example, grotesques on religious buildings sometimes included sexually explicit content. The juxtaposition of the subversive carvings in largely religious contexts remains contested. Scholars such as Marta Zajac interpret the use of crude humour as a tactic to ward away evil, while other scholars connect this crudeness to the rise of the gothic art style that began to emerge in the 12th century.
The combined history of religion and grotesques in architecture is also potentially a result of the stability of religion at the times when grotesques became prominent, in both the medieval and Renaissance periods, specifically in Europe. Gaurav Majumdar argues that consistency in religion has allowed for the stylistic development of churches architecturally separate from their teachings. As a result, the unique style of grotesques was allowed to develop and flourish to adorn churches and cathedrals while existing separately from them. This explains the number of grotesques that exist in Venice, Italy, where the church was well established, allowing the unique style of grotesques to develop separately from the church.
These bizarre forms also show a “capacity for transformation” which is consistent with common ideas in the church at the time. Although the significance of grotesques being included in religious spaces is contested, their commonality on these buildings of importance showcases their stylistic development that occurred in tandem with the rising influence of religion, in particular, with the influence of the Catholic Church in Europe in the time from the 12th to the 17th century.
Gallery
See also
Carranca
Chimera (mythology)
Chiwen
Darth Vader grotesque
Gargoyle
Grotesque
Mascaron (architecture)
Nightmares in the Sky
Onigawara
Shachihoko
Sheela na gig
References
External links
Grotesques
Visual motifs
Objects believed to protect from evil | Grotesque (architecture) | [
"Mathematics"
] | 2,590 | [
"Symbols",
"Visual motifs"
] |
11,830,903 | https://en.wikipedia.org/wiki/Cleveland%20Bridge%20%26%20Engineering%20Company | Cleveland Bridge & Engineering Company was a British bridge works and structural steel contractor based in Darlington. It was operational for 144 years.
From the founding of the company in 1877, it had a presence in Darlington. While initially focused on fabrication, the company became one of the major bridgebuilders in the world, having constructed structures across all five inhabited continents. It built numerous landmarks around the world, including the Victoria Falls Bridge in Zimbabwe, the Tees Transporter Bridge, the Forth Road and Humber suspension bridges in the UK, Hong Kong's Tsing Ma Bridge, and London's Wembley Stadium Arch. Cleveland Bridge's Dubai subsidiary, which was established in 1978, fabricated and erected steel structures for, amongst other projects, the Burj Al Arab and Emirates Towers.
During 1967, the company was acquired by The Cementation Company, which was itself bought by Trafalgar House soon thereafter. During 1990, it was merged with Redpath Dorman Long, another subsidiary owned by Trafalgar, to create Cleveland Structural Engineering. After a management buyout in 2000, the company operated as an independent concern, with considerable financial backing from Saudi Arabia's Al Rushaid Group. However, the company soon found itself in multiple legal disputes due to alleged quality issues and other concerns over its work on major projects such as The Shard and New Wembley Stadium; these proved to be not only costly in financial terms but also damaging to its reputation. During the early 2020s, the company's fiscal situation declined considerably and its backers proved unwilling to expend additional resources. Thus, in July 2021, the Darlington portion of the company went into administration, owing £21m. After unsuccessful efforts to attract a buyer, the company was closed in September 2021.
History
Cleveland Bridge & Engineering Company was founded in 1877 in Darlington with a capital of £10,000. Seven years later, the assets were sold to Charles Frederick Dixon, who registered the company on the stock exchange in 1893. By 1913, it had 600 employees.
During 1967, the company was acquired by The Cementation Company. Three years later, Trafalgar House purchased Cementation; it also acquired Redpath Dorman Long from Dorman Long Group in 1982, after which the two subsidiaries were merged in 1990 to create Cleveland Structural Engineering. That business was renamed Kvaerner Cleveland Bridge following acquisition of Trafalgar House by Kværner in 1996.
During 1999, it was reported that Kværner intended to sell the business amid a wider restructuring away from heavy manufacturing activities; at the time, the company employed roughly 600 staff following a series of job losses. Despite appeals for financial assistance being made to the British government, it refused to intervene in the matter. One year later, the company became independent through a management buyout that involved a payment of $12.3 million. In addition to the UK-based operations, the same management team also acquired the company's Dubai subsidiary, which had been established in 1978. Saudi Arabia's Al Rushaid Group provided finance to the firm, taking a stake that had risen to 88.5% by September 2002.
Throughout the 2000s and 2010s, the company's headcount varied considerably, often rising soon after the awarding of key contracts to the business. During this era, it undertook various activities, including its involvement in various road and railway-based schemes and several major construction projects, such as The Shard and Wembley Stadium.
Final years
In July 2021, Cleveland Bridge sought further funding from Al Rushaid Group and warned 220 staff of potential redundancies. That same month, the firm was reported to be on the brink of administration as a result of contract delays and negative economic consequences that were partially attributable to COVID-19.
Al Rushaid Group did not provide the requested resources; instead, FRP was appointed as the company's administrator and the business was put up for sale. Consequently, 51 workers were made redundant in August 2021. Around 25 staff continued to assist FRP, and 128 staff were furloughed under the Coronavirus Jobs Retention Scheme pending restart of production.
FRP was ultimately unable to secure a buyer for the business. Accordingly, on 10 September 2021, it announced the company would permanently close with the loss of a further 133 jobs. FRP stated £12m would be required to fund the business to the end of 2021. The company assets were sold off in November 2021.
Controversies
2016 death and HSE fine
In 2022, Cleveland Bridge & Engineering was fined £1.5 million by the Health and Safety Executive, with a further cost judgement of £29,000 against it. The fine related to four breaches of the Health and Safety at Work etc. Act 1974 that led to a fatal fall in 2016, when an inadequately secured crane access panel gave way. FRP Advisory stated it was unlikely the fine or costs could be paid.
The Shard
In 2013, Cleveland Bridge was ordered to pay Severfield-Rowen plc £824,478 compensation for delays to their subcontracted work on The Shard. The judge accepted there was a very high incidence of poor workmanship in the steelwork Cleveland Bridge delivered. Cleveland Bridge's own internal correspondence highlighted an extraordinary work overload in 2010, and Judge Akenhead concluded it had taken on more work than it had capacity.
Wembley Stadium
In 2002, the company won a £60 million steelwork contract for the bowl of New Wembley Stadium. Part way through construction, relationships between main contractor Multiplex and Cleveland Bridge broke down. Multiplex stripped Cleveland Bridge of their erection role, handing it to roof steelwork contractor Hollandia. Two hundred of Cleveland Bridge's on site erection staff and subcontractors transferred to Hollandia and were sacked after going on strike. The situation escalated when Cleveland Bridge unilaterally repudiated its remaining stadium fabrication contract.
Both sides blamed each other for extra costs; delays; poor workmanship; missing or incorrect steelwork; damaged, missing or incorrect paintwork; chaotic record-keeping; and the near site stock yards. Litigation ensued and Cleveland Bridge was ultimately ordered to pay Multiplex £6,154,246.79 in respect of net earlier overpayments; breach of contract, and interest. Cleveland Bridge was also ordered to pay 20% of Multiplex's legal costs. It was claimed, in evidence, that some Wembley steelwork had been fabricated in China for Cleveland Bridge and that it had been diverted to the Beijing National Stadium.
Mr Justice Jackson's 2008 judgement in the Technology and Construction Court was highly critical of both parties' unwillingness to settle earlier in such an expensive case, where the core evidence extended to over 500 lever arch files and photocopying costs alone were £1 million. He highlighted the large number of disputed items where the sums involved were substantially exceeded by the legal costs of resolving them.
Notable bridges
See also
References
External links
A to Z of bridges built by Cleveland Bridge
Bridge companies
Construction and civil engineering companies of England
Companies based in County Durham
Construction and civil engineering companies established in 1877
Manufacturing companies established in 1877
1877 establishments in England
Borough of Darlington
Structural steel
British companies established in 1877
2021 disestablishments in England
British companies disestablished in 2021 | Cleveland Bridge & Engineering Company | [
"Engineering"
] | 1,471 | [
"Structural engineering",
"Structural steel"
] |
11,831,905 | https://en.wikipedia.org/wiki/Service-oriented%20device%20architecture | The purpose of service-oriented device architecture (SODA) is to enable devices to be connected to a service-oriented architecture (SOA). Currently, developers connect enterprise services to an enterprise service bus (ESB) using the various web service standards that have evolved since the advent of XML in 1998. With SODA, developers are able to connect devices to the ESB and users can access devices in exactly the same manner that they would access any other web service.
External links
Service Oriented Device Architecture, IEEE Pervasive Computing September 2006
Presentation at EclipseCon 2007
Service-oriented (business computing) | Service-oriented device architecture | [
"Technology"
] | 121 | [
"Computing stubs"
] |
11,831,990 | https://en.wikipedia.org/wiki/Bloch%27s%20theorem%20%28complex%20analysis%29 | In complex analysis, a branch of mathematics, Bloch's theorem describes the behaviour of holomorphic functions defined on the unit disk. It gives a lower bound on the size of a disk in which an inverse to a holomorphic function exists. It is named after André Bloch.
Statement
Let f be a holomorphic function in the unit disk |z| ≤ 1 for which |f′(0)| = 1.
Bloch's theorem states that there is a disk S ⊂ D on which f is biholomorphic and f(S) contains a disk with radius 1/72.
Landau's theorem
If f is a holomorphic function in the unit disk with the property |f′(0)| = 1, then let Lf be the radius of the largest disk contained in the image of f.
Landau's theorem states that there is a constant L defined as the infimum of Lf over all such functions f, and that L is at least Bloch's constant: L ≥ B.
This theorem is named after Edmund Landau.
Valiron's theorem
Bloch's theorem was inspired by the following theorem of Georges Valiron:
Theorem. If f is a non-constant entire function then there exist disks D of arbitrarily large radius and analytic functions φ in D such that f(φ(z)) = z for z in D.
Bloch's theorem corresponds to Valiron's theorem via the so-called Bloch's principle.
Proof
Landau's theorem
We first prove the case when f(0) = 0, f′(0) = 1, and |f′(z)| ≤ 2 in the unit disk.
By Cauchy's integral formula, we have the bound
|f″(z)| = |(1/(2πi)) ∮_γ f′(w)/(w − z)² dw| ≤ (1/(2π)) · 2πr · 2/r² = 2/r,
where γ is the counterclockwise circle of radius r around z, and 0 < r < 1 − |z|; here |f′(w)| ≤ 2 on γ and |w − z| = r. Letting r tend to 1 − |z| gives |f″(z)| ≤ 2/(1 − |z|).
By Taylor's theorem, for each z in the unit disk, there exists 0 ≤ t ≤ 1 such that f(z) = z + z²f″(tz)/2.
Thus, if |z| = 1/3 and |w| < 1/6, we have
|(f(z) − w) − (z − w)| = |z|²|f″(tz)|/2 ≤ |z|²/(1 − |z|) = 1/6 < |z − w|.
By Rouché's theorem, the range of f contains the disk of radius 1/6 around 0.
Let D(z0, r) denote the open disk of radius r around z0. For an analytic function g : D(z0, r) → C such that g′(z0) ≠ 0 and |g′(z)| ≤ 2|g′(z0)| throughout, the case above applied to (g(z0 + rz) − g(z0)) / (rg′(z0)) implies that the range of g contains D(g(z0), |g′(z0)|r / 6).
For the general case, let f be an analytic function in the unit disk such that |f′(0)| = 1, and z0 = 0.
If |f′(z)| ≤ 2|f′(z0)| for |z − z0| < 1/4, then by the first case, the range of f contains a disk of radius |f′(z0)| / 24 = 1/24.
Otherwise, there exists z1 such that |z1 − z0| < 1/4 and |f′(z1)| > 2|f′(z0)|.
If |f′(z)| ≤ 2|f′(z1)| for |z − z1| < 1/8, then by the first case, the range of f contains a disk of radius |f′(z1)| / 48 > |f′(z0)| / 24 = 1/24.
Otherwise, there exists z2 such that |z2 − z1| < 1/8 and |f′(z2)| > 2|f′(z1)|.
Repeating this argument, we either find a disk of radius at least 1/24 in the range of f, proving the theorem, or find an infinite sequence (zn) such that |zn − zn−1| < 1/2^(n+1) and |f′(zn)| > 2|f′(zn−1)|.
In the latter case the sequence is in D(0, 1/2), so f′ is unbounded in D(0, 1/2), a contradiction.
Bloch's theorem
In the proof of Landau's Theorem above, Rouché's theorem implies that not only can we find a disk D of radius at least 1/24 in the range of f, but there is also a small disk D0 inside the unit disk such that for every w ∈ D there is a unique z ∈ D0 with f(z) = w. Thus, f is a bijective analytic function from D0 ∩ f⁻¹(D) to D, so its inverse φ is also analytic by the inverse function theorem.
Bloch's and Landau's constants
The number B is called Bloch's constant. The lower bound 1/72 in Bloch's theorem is not the best possible. Bloch's theorem tells us B ≥ 1/72, but the exact value of B is still unknown.
The best known bounds for B at present are
√3/4 + 2·10⁻⁴ ≤ B ≤ √((√3 − 1)/2) · Γ(1/3)Γ(11/12)/Γ(1/4) ≈ 0.4719,
where Γ is the Gamma function. The lower bound was proved by Chen and Gauthier, and the upper bound dates back to Ahlfors and Grunsky.
The similarly defined optimal constant L in Landau's theorem is called Landau's constant. Its exact value is also unknown, but it is known that
1/2 < L ≤ Γ(1/3)Γ(5/6)/Γ(1/6) ≈ 0.5433.
In their paper, Ahlfors and Grunsky conjectured that their upper bounds are actually the true values of B and L.
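As a quick numerical illustration (not part of the original article), the Gamma-function expressions above can be evaluated directly. The short script below is a minimal sketch using only Python's standard math module; it prints the bounds quoted for B and L.

```python
# Numerical check of the quoted bounds on Bloch's constant B and
# Landau's constant L, using the closed-form Gamma-function expressions.
from math import gamma, sqrt

B_lower = sqrt(3) / 4 + 2e-4  # lower bound (Chen and Gauthier)
B_upper = sqrt((sqrt(3) - 1) / 2) * gamma(1/3) * gamma(11/12) / gamma(1/4)  # Ahlfors-Grunsky
L_upper = gamma(1/3) * gamma(5/6) / gamma(1/6)  # conjectured exact value of L

print(f"{B_lower:.4f} <= B <= {B_upper:.4f}")  # approx 0.4332 <= B <= 0.4719
print(f"0.5    <  L <= {L_upper:.4f}")         # L <= approx 0.5433
```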
For injective holomorphic functions on the unit disk, a constant A can similarly be defined. It is known that
See also
Table of selected mathematical constants
References
External links
Unsolved problems in mathematics
Theorems in complex analysis | Bloch's theorem (complex analysis) | [
"Mathematics"
] | 1,251 | [
"Mathematical problems",
"Theorems in mathematical analysis",
"Unsolved problems in mathematics",
"Theorems in complex analysis"
] |
11,832,350 | https://en.wikipedia.org/wiki/Growth%20factor%20receptor | A growth factor receptor is a receptor that binds to a growth factor. Growth factor receptors are the first stop in cells where the signaling cascade for cell differentiation and proliferation begins. Growth factors, which are ligands that bind to the receptor are the initial step to activating the growth factor receptors and tells the cell to grow and/or divide.
These receptors may use the JAK/STAT, MAP kinase, and PI3 kinase pathways.
A majority of growth factor receptors are receptor tyrosine kinases (RTKs). Three receptor types dominate research: the epidermal growth factor receptor, the neurotrophin receptor, and the insulin receptor. All growth factor receptors are membrane bound and composed of three general protein domains: extracellular, transmembrane, and cytoplasmic. The extracellular domain is where a ligand may bind, usually with very high specificity. In RTKs, the binding of a ligand to the extracellular ligand-binding site leads to the autophosphorylation of tyrosine residues in the intracellular domain. These phosphorylations allow other intracellular proteins to bind via their phosphotyrosine-binding domains, which results in a series of physiological responses within the cell.
Medical relevance
Current research focuses on growth factor receptors as targets for cancer treatment. Epidermal growth factor receptors are heavily involved in oncogene activity. Once growth factors bind to their receptor, a signal transduction pathway proceeds within the cell to regulate its function. In cancerous cells, however, the pathway may fail to turn on or off. Furthermore, in certain cancers, receptors (such as RTKs) are often observed to be overexpressed, which corresponds to the uncontrolled proliferation and differentiation of cells. For this same reason, tyrosine kinase receptors are often a target for cancer therapy.
References
Receptors
Single-pass transmembrane proteins | Growth factor receptor | [
"Chemistry"
] | 406 | [
"Receptors",
"Signal transduction"
] |
11,832,700 | https://en.wikipedia.org/wiki/List%20of%20abbreviations%20in%20oil%20and%20gas%20exploration%20and%20production | The oil and gas industry uses many acronyms and abbreviations. This list is meant for indicative purposes only and should not be relied upon for anything but general information.
#
1C – Proved contingent resources
1oo2 – One out of two voting (instrumentation)
1P – Proven reserves
2C – Proved and probable contingent resources
2D – two-dimensional (geophysics)
2oo2 – Two out of two voting (instrumentation)
2oo2D – Two out of two voting with additional diagnostic detection capabilities (instrumentation)
2oo3 – Two out of three voting (instrumentation)
2ooN – two out of N voting (N ≥ 3) to reach a specified alarm limit (instrumentation)
2P – proved and probable reserves
3C – three components seismic acquisition (x, y, and z)
3C – Proved, probable and possible contingent resources
3D – three-dimensional (geophysics)
3P – proved, probable and possible reserves
4D – multiple 3Ds acquired over time (the 4th D) over the same area with the same parameters (geophysics)
8rd – eight round (describes the number of threads per inch of pipe thread)
Symbol
°API – degrees API (American Petroleum Institute) density of oil
A
A – Appraisal (well)
AADE – American Association of Drilling Engineers
AAPG – American Association of Petroleum Geologists
AAPL – American Association of Professional Landmen
AAODC – American Association of Oilwell Drilling Contractors (obsolete; superseded by IADC)
AAV – Annulus access valve
ABAN – Abandonment, (also as AB and ABD and ABND)
ABSA – Alberta Boilers Safety Association
ABT – Annulus bore test
ACC – Air-cooled condenser
ACHE – Air-cooled heat exchanger
ACOU – Acoustic
ACP – Alkali-cosolvent-polymer
ACQU – Acquisition log
ACV – Automatic control valve
ADE – Advanced decision-making environment
ADEP – Awaiting development with exploration potential, referring to an asset
ADROC – advanced rock properties report
ADT – Applied drilling technology, ADT log
ADM – Advanced diagnostics module (fieldbus)
AER – Auto excitation regulator
AEMO – Australian Energy Market Operator
AFE – Authorization for expenditure, a process of submitting a business proposal to investors
AFP – Active fire protection
AGA – American Gas Association
AGRU – acid gas removal unit
AGT – (1) agitator, used in drilling
AGT – (2) authorised gas tester (certified by OPITO)
AGT – (3) Azerbaijan – Georgia – Turkey (a region rich in oil related activity)
AHBDF – along hole (depth) below Derrick floor
AHD – along hole depth
AHU – air handling unit
AICD – autonomous inflow control device
AIChemE – American Institute of Chemical Engineers
AIM – asset integrity management
AIPSM – asset integrity and process safety management
AIR – assurance interface and risk
AIRG – airgun
AIRRE – airgun report
AISC – American Institute of Steel Construction
AISI – American Iron and Steel Institute
AIT – analyzer indicator transmitter
AIT – array induction tool
AL – appraisal license (United Kingdom), a type of onshore licence issued before 1996
ALAP – as low as possible (used along with density of mud)
ALARP – as low as reasonably practicable
ALC – vertical seismic profile acoustic log calibration report
ALLMS – anchor leg load monitoring system
ALQ – additional living quarters
ALR – acoustic log report
ALT – altered
AM – asset management
aMDEA – activated methyldiethanolamine
AMS – auxiliary measurement service log; auxiliary measurement sonde (temperature)
AMSL – above mean sea level
AMI – area of mutual interest
AMV – annulus master valve
ANACO – analysis of core logs report
ANARE – analysis report
AOF – absolute open flow
AOFP – absolute open-flow potential
AOI – area of interest
AOL – arrive on location
AOR – additional oil recovery
AP – alkali-polymer
APD – application for permit to drill
API – American Petroleum Institute: organization which sets unit standards in the oil and gas industry
°API – degrees API (gravity of oil)
APPRE – appraisal report
APS – active pipe support
APWD – annular pressure while drilling (tool)
ARACL – array acoustic log
ARESV – analysis of reservoir
ARI – azimuthal resistivity image
ARRC – array acoustic report
ART – actuator running tool
AS – array sonic processing log
ASD – acoustic sand detection
ASI – ASI log
ASME – American Society of Mechanical Engineers
ASOG – activity-specific operating guidelines
ASP – array sonic processing report
ASP – alkali-surfactant-polymer
ASTM – American Society for Testing and Materials
ASCSSV – annulus surface controlled sub-surface valve
ASV – anti-surge valve
ASV – annular safety valve
ASV – accommodation and support vessel
ATD – application to drill
ATU – auto top-up unit
AUV – autonomous underwater vehicle
AV – annular velocity or apparent viscosity
AVGMS – annulus vent gas monitoring system
AVO – amplitude versus offset (geophysics)
AWB/V – annulus wing block/valve (XT)
AWO – approval for well operation
ATM – at the moment
B
B or b – prefix denoting a number in billions
BA – bottom assembly (of a riser)
bbl – barrel
bbl/MMscf – barrels per million standard cubic feet
BBG – buy back gas
BBSM – behaviour-based safety management
BCPD – barrels condensate per day
Bcf – billion cubic feet (of natural gas)
Bcf/d – billion cubic feet per day (of natural gas)
Bcfe – billion cubic feet (of natural gas equivalent)
BD – bursting disc
BDF – below derrick floor
BDL – bit data log
BDV – blowdown valve
BGL – borehole geometry log
BGL – below ground level (used as a datum for depths in a well)
BGS – British Geological Survey
BGT – borehole geometry tool
BGWP – base of ground-water protection
BH – bloodhound
BHA – bottom hole assembly (toolstring on coiled tubing or drill pipe)
BHC – BHC gamma ray log
BHCA – BHC acoustic log
BHCS – BHC sonic log
BHCT – bottomhole circulating temperature
BHKA – bottomhole kickoff assembly
BHL – borehole log
BHP – bottom hole pressure
BHPRP – borehole pressure report
BHSRE – bottom hole sampling report
BHSS – borehole seismic survey
BHT – bottomhole temperature
BHTV – borehole television report
BINXQ – bond index quicklook log
BIOR – biostratigraphic range log
BIORE – biostratigraphy study report
BSLM – bend stiffener latch mechanism
BSW – base sediment and water
BIVDL – BI/DK/WF/casing collar locator/gamma ray log
BLD – bailed (refers to the practice of removing debris from the hole with a cylindrical container on a wireline)
BLI – bottom of logging interval
BLP – bridge-linked platform
BO – back-off log
BO – barrel of oil
boe – barrels of oil equivalent
boed – barrels of oil equivalent per day
BOEM – Bureau of Ocean Energy Management
boepd – barrels of oil equivalent per day
BOB – back on bottom
BOD – biological oxygen demand
BOL – bill of lading
BOM – bill of materials
BOP – blowout preventer
BOP – bottom of pipe
BOPD – barrels of oil per day
BOPE – blowout prevention equipment
BOREH – borehole seismic analysis
BOSIET – basic offshore safety induction and emergency training
BOTHL – bottom hole locator log
BOTTO – bottom hole pressure/temperature report
BP – bridge plug
BPD – barrels per day
BPH – barrels per hour
BPFL – borehole profile log
BPLUG – baker plug
BPM – barrels per minute
BPV – back pressure valve (goes on the end of coiled tubing and drill pipe tool strings to prevent fluid flow in the wrong direction)
BQL – B/QL log
BRPLG – bridge plug log
BRT – below rotary table (used as a datum for depths in a well)
BS – bend stiffener
BS – bumper sub
BS – booster station
BSEE – US: Bureau of Safety and Environmental Enforcement (formerly the MMS)
BSG – black start generator
BSR – blind shear rams (blowout preventer)
BSML – below sea mean level
BS&W – basic sediments and water
BT – buoyancy tank
BTEX – benzene, toluene, ethyl-benzene and xylene
BTHL – bottom hole log
BTO/C – break to open/close (valve torque)
BTU – British thermal units
BTU – Board of Trade Unit (1 kWh) (historical)
BU – bottom up
BUL – bottom-up lag
BUR – build-up rate
BVO – ball valve operator
bwd – barrels of water per day (often used in reference to oil production)
bwipd – barrels of water injected per day
bwpd – barrels of water per day
C
C&E – well completion and equipment cost
C&S – cased and suspended
C1 – methane
C2 – ethane
C3 – propane
C4 – butane
C6 – hexanes
C7+ – heavy hydrocarbon components
CA – core analysis log
CAAF – contract authorization approval form
CalGEM – California Geologic Energy Management Division (oil & gas regulatory body)
CALI – caliper log
CALOG – circumferential acoustic log
CALVE – calibrated velocity log data
CAODC – Canadian Association of Oilwell Drilling Contractors
CAPP – Canadian Association of Petroleum Producers
CAR – Company Appointed Representative
CART – cam-actuated running tool (housing running tool)
CART – cap replacement tool
CAS – casing log
CAT – connector actuating tool
CB – casing bowl
CB – core barrel
CBF – casing bowl flange
CBIL – CBIL log
CBL – cement bond log (measurement of casing cement integrity)
CBM – choke bridge module – XT choke
CBM – conventional buoy mooring
CBM – coal-bed methane
CCHT – core chart log
CCL – casing collar locator (in perforation or completion operations, the tool provides depths by correlation of the casing string's magnetic anomaly with known casing features)
CCLBD – construction / commissioning logic block diagram
CCLP – casing collar locator perforation
CCLTP – casing collar locator through tubing plug
CD – core description
CDATA – core data
CDIS – CDI synthetic seismic log
CDU – control distribution unit
CDU – crude distillation unit
CDP – common depth point (geophysics)
CDP – comprehensive drilling plan
CDRCL – compensated dual resistivity cal. log
CDF – core contaminated by drilling fluid
CDFT – critical device function test
CE – CE log
CEC – cation-exchange capacity
CECAN – CEC analysis
CEME – cement evaluation
CEOR – chemical-enhanced oil recovery
CER – central electrical/equipment room
CERE – cement remedial log
CET – cement evaluation tool
CF – completion fluid
CF – casing flange
CFD – computational fluid dynamics
CFGPD – cubic feet of gas per day
CFU – compact flotation unit
CGEL – CG EL log
CGL – core gamma log
CGPA – Canadian Gas Processors Association
CGPH – core graph log
CGR – condensate gas ratio
CGTL – compact gas to liquids (production equipment small enough to fit on a ship)
CHCNC – CHCNC gamma ray casing collar locator
CHDTP – calliper HDT playback log
CHECK – checkshot and acoustic calibration report
CHESM – contractor, health, environment and safety management
CHF – casing head flange
CHK – choke (a restriction in a flowline or a system, usually referring to a production choke during a test or the choke in the well control system)
CHKSR – checkshot survey report
CHKSS – checkshot survey log
CHOPS – cold heavy oil production with sand
CHP – casing hanger pressure (pressure in an annulus as measured at the casing hanger)
CHOTO – commissioning, handover and takeover
CHROM – chromatolog
CHRT – casing hanger running tool
CIBP – cast iron bridge plug
CICR – cast iron cement retainer
CIDL – chemical injection downhole lower
CIDU – chemical injection downhole upper
CIL – chemical injection line
CILD – conduction log
CIMV – chemical injection metering valve
CIRC – circulation
CITHP – closed-in tubing head pressure (tubing head pressure when the well is shut in)
CIV – chemical injection valve
CK – choke (a restriction in a flowline or a system, usually referring to a production choke during a test or the choke in the well control system)
CL – core log
CLG – core log and graph
CM – choke module
CMC – crown mounted compensators
CMC – critical micelle concentration
CMP – common midpoint (geophysics)
CMR – combinable magnetic resonance (NMR log tool)
CMT – cement
CNA – clay, no analysis
CND – compensated neutron density
CNFDP – CNFD true vertical-depth playback log
CNGR – compensated neutron gamma-ray log
CNL – compensated neutron log
CNLFD – CNL/FDC log
CNS – Central North Sea
CNCF – field-normalised compensated neutron porosity
CNR – Canadian natural resources
CO – change out (ex. from rod equipment to casing equipment)
COA – conditions of approval
COC – certificate of conformance
COD – chemical oxygen demand
COL – collar log
COMAN – compositional analysis
COML – compaction log
COMP – composite log
COMPR – completion program report
COMPU – computest report
COMRE – completion record log
COND – condensate production
CONDE – condensate analysis report
CONDR – continuous directional log
CORAN – core analysis report
CORE – core report
CORG – corgun log
CORIB – CORIBAND log
CORLG – correlation log
COROR – core orientation report
COW – Control of Work
COXY – carbon/oxygen log
CP – cathodic protection
CP – crown plug
cP – centipoise (viscosity unit of measurement)
CPI separator – corrugated plate interceptor
CPI – computer-processed interpretation
CPI – corrugated plate interceptor
CPICB – computer-processed interpretation coriband log
CPIRE – computer-processed interpretation report
CPP – central processing platform
CRA – corrosion-resistant alloy
CRET – cement retainer setting log
CRI – cuttings reinjection
CRINE – cost reduction in the new era
CRP – control riser platform
CRP – common/central reference point (subsea survey)
CRT – clamp replacement tool
CRT – casing running tool
CSE – confined space entry
CsF – caesium formate
CSC – car seal closed
CSG – coal seam gas
csg – casing
CSHN – cased-hole neutron log
CSI – combinable seismic imager (VSP) log (Schlumberger)
CSMT – core sampler tester log
CSO – complete seal-off
CSO – car seal open
CSPG – Canadian Society of Petroleum Geologists
CSR – corporate social responsibility
CST – chronological sample taker log (Schlumberger)
CSTAK – core sample taken log
CSTR – continuously-stirred tank reactor
CSTRE – CST report
CSU – commissioning and start-up
CSU – construction safety unit
CSUG – Canadian Society for Unconventional Gas
CT – coiled tubing
CTD – coiled tubing drilling
CTCO – coiled tubing clean-out
CTLF – coiled tubing lift frame
CTLF – compensated tension lift frame
CTOD – crack tip opening displacement
CTP – commissioning test procedure
CTR – Critical Transport Rate
CTRAC – cement tracer log
CUI – corrosion under insulation
CUL – cross-unit lateral
CUT – cutter log
CUTTD – cuttings description report
CWOP – complete well on paper
CWOR – completion work over riser
CWR – cooling water return
CWS – cooling water supply
X/O – cross-over
CYBD – Cyberbond log
CYBLK – Cyberlook log
CYDIP – Cyberdip log
CYDN – Cyberdon log
CYPRO – Cyberproducts log
CVD – Cost versus Depth
CVX – Chevron
D
D – development
D – Darcy, unit of permeability
D&A – dry and abandoned
D&C – drilling and completions
D&I – direction and inclination (MWD borehole deviation survey)
DAC – dipole acoustic log
DARCI – Darci log
DAS – data acquisition system
DAT – wellhead housing drill-ahead tool
DAZD – dip and azimuth display
DBB – double block and bleed
DBP – drillable bridge plug
DBR – damaged beyond repair
DCA – decline curve analysis
DC – drill centre
DC – drill collar/collars
DCAL – dual caliper log
DCC – distance cross course
DCS – distributed control system
DD – directional driller or directional drilling
DDC – daily drilling cost
DDC – de-watering and drying contract
DDBHC – DDBHC waveform log
DDET – depth determination log
DDM – derrick drilling machine (a.k.a. top drive)
DDNL – dual det. neutron life log
DDPT – drill data plot log
DDPU – double drum pulling unit
DDR – daily drilling report
DEA – diethanolamine
DECC – Department for Energy and Climate Change (UK)
DECT – decay time
DECT – down-hole electric cutting tool
DEFSU – definitive survey report
DEH – direct electrical heating
DELTA – delta-T log
DEN – density log
DEPAN – deposit analysis report
DEPC – depth control log
DEPT – depth
DESFL – deep induction SFL log
DEV – development well, Lahee classification
DEVLG – deviation log
DEXP – D-exponent log
DF – derrick floor
DFI – design, fabrication and installation résumé
DFIT – diagnostic fracture injection test
DFPH – Barrels of fluid per hour
DFR – drilling factual report
DG/DG# – diesel generator ('#'- means identification letter or number of the equipment i.e. DG3 or DG#3 means diesel generator nr 3)
DGA – diglycoamine
DGDS – dual-gradient drilling systems
DGP – dynamic geohistory plot (3D technique)
DH – drilling history
DHC – depositional history curve
DHSV – downhole safety valve
DHPG – downhole pressure gauge
DHPTT – downhole pressure/temperature transducer
DIBHC – DIS BHC log
DIEGR – dielectric gamma ray log
DIF – drill in fluids
DIL – dual-induction log
DILB – dual-induction BHC log
DILL – dual-induction laterolog
DILLS – dual-induction log-LSS
DILSL – dual-induction log-SLS
DIM – directional inertia mechanism
DINT – dip interpretation
DIP – dipmeter log
DIPAR – dipole acoustic report
DIPBH – dipmeter borehole log
DIPFT – dipmeter fast log
DIPLP – dip lithology pressure log
DIPRE – dipmeter report
DIPRM – dip removal log
DIPSA – dipmeter soda log
DIPSK – dipmeter stick log
DIRS – directional survey log
DIRSU – directional survey report
DIS – DIS-SLS log
DISFL – DISFL DBHC gamma ray log
DISO – dual induction sonic log
DL – development license (United Kingdom), a type of onshore license issued before 1996
DLIST – dip-list log
DLL – dual laterolog (deep and shallow resistivity)
DLS – dog-leg severity (directional drilling)
DM – dry mate
DMA – dead-man anchor
DMAS – dead-man auto-shear
DMRP – density – magnetic resonance porosity (wireline tool)
DMT – down-hole monitoring tool
DNHO – down-hole logging
DNV – Det Norske Veritas
DOA – delegation of authority
DOE – Department of Energy, United States
DOGGR – Division of Oil, Gas, and Geothermal Resources (former name of California's regulatory entity for oil, gas, and geothermal production)
DOPH – drilled-out plugged hole
DOWRE – downhole report
DP – drill pipe
DP – dynamic positioning
DPDV – dynamically positioned drilling vessel
DPL – dual propagation log
DPLD – differential pressure levitated device (or vehicle)
DPRES – dual propagation resistivity log
DPT – deeper pool test, Lahee classification
DQLC – dipmeter quality control log
DR – dummy-run log
DR – drilling report
DRI – drift log
DRL – drilling
DRLCT – drilling chart
DRLOG – drilling log
DRLPR – drilling proposal/progress report
DRO – discovered resources opportunities
DRPG – drilling program report
DRPRS – drilling pressure
DRREP – drilling report
DRYRE – drying report
DS – deviation survey, (also directional system)
DSA – Double Studded Adapter
DSCAN – DSC analysis report
DSI – dipole shear imager
DSL – digital spectralog (western atlas)
DSPT – cross-plots log
DST – drill-stem test
DSTG – DSTG log
DSTL – drill-stem test log
DSTND – dual-space thermal neutron density log
DSTPB – drill-stem test true vertical depth playback log
DSTR – drill-stem test report
DSTRE – drill-stem test report
DSTSM – drill-stem test summary report
DSTW – drill-stem test job report/works
DSU – drill spacing unit
DSV – diving support vessel or drilling supervisor
DTI – Department of Trade and Industry (UK) (obsolete; superseded by dBERR, which was then superseded by DECC)
DTPB – CNT true vertical-depth playback log
DTT – depth to time
DUC – drilled but uncompleted wells
DVD – Depth versus Day
DVT – differential valve tool (for cementing multiple stages)
DWOP – drilling well on paper (a theoretical exercise conducted involving the service-provider managers)
DWQL – dual-water quicklook log
DWSS – dig-well seismic surface log
DXC – DXC pressure pilot report
E
E – exploration
E&A – exploration and appraisal
E&I – electrical and instrumentation
E&P – exploration and production, another name for the upstream sector
EA – exploration asset
EAGE – European Association of Geoscientists and Engineers
ECA – Easington Catchment Area
ECD – equivalent circulating density
EDG/EDGE – emergency diesel generator
ECMS – electrical control and monitoring system
ECMWF – European Centre for Medium-Range Weather Forecasts
ECP – external casing packer
ECRD – electrically-controlled release device (for abandoning stuck wireline tool from cable)
ECT – external cantilevered turret
EDG – Emergency Diesel Generator
EDP – exploration drilling program report
EDP – emergency disconnect package
EDP – emergency depressurisation
EDPHOT – emergency drill pipe hang-off tool
EDR – exploration drilling report
EDR – electronic drilling recorder
EDS – emergency disconnection sequence
EEAR – emergency electrical auto restart
EEHA – electrical equipment for hazardous areas (IECEx)
EFL – electrical flying lead
EFR – engineering factual report
EHT – electric heat trace
EGBE – ethylene glycol monobutyl ether (2-butoxyethanol)
EGMBE – ethylene glycol monobutyl ether
EHU – electro-hydraulic unit
EIA – environmental impact assessment
EI – Energy Institute
ELEC TECH – electronics technician
ELT – economic limit test
EL – electric log
EM – EMOP log
EMCS – energy management and control systems
EMD – equivalent mud density
EMG – equivalent mud gradient
EMOP – EMOP well site processing log
EMP – electromagnetic propagation log
EMR – electronic memory read-out
EMS – environment measurement sonde (wireline multi-caliper)
EMW – equivalent mud weight
EN PI – enhanced productivity index log
ENG – engineering log
ENGF – engineer factual report
ENGPD – engineering porosity data
Eni – Ente Nazionale Idrocarburi S.p.A. (Italy)
ENJ – enerjet log
ENMCS – electrical network monitoring and control system
EODU – electrical and optical distribution unit
EOFL – end of field life
EOR – enhanced oil recovery
EOT – end of tubing
EOT – electric overhead travelling
ELV – extra-low voltage
EOW – end-of-well report
EPCM/I – engineering procurement construction and management/installation
EPCU – electrical power conditioning unit
EPIDORIS – exploration and production integrated drilling operations and reservoir information system
EPL – EPL log
EPLG – epilog
EPLPC – EPL-PCD-SGR log
EPS – early production system
EPT – electromagnetic propagation
EPU – electrical power unit
EPTNG – EPT-NGT log
EPV – early production vessel
ERD – extended reach (drilling)
ERT – emergency response training
ESD – emergency shutdown
ESD – equivalent static density
ESDV – emergency shutdown valve
ESHIA – environmental, social and health impact assessment
ESIA – environmental and social impact assessment
ESP – electric submersible pump
ETAP – Eastern Trough Area Project
ETD – external turret disconnectable
ETECH – electronics technician
ETTD – electromagnetic thickness test
ETU – electrical test unit
EUE – external-upset-end (tubing connection)
EUR – estimated ultimate recovery
EVARE – evaluation report
EWMP – earthworks/electrical works/excavation works management plan
EWR – end-of-well report
EXL – or XL, exploration licence (United Kingdom), a type of onshore licence issued between the first onshore licensing round (1986) and the sixth (1992)
EXP – exposed
EZSV – easy sliding valve (drillable packer plug)
F
F&G – fire and gas
FAC – factual report
FAC – first aid case
FACHV – four-arm calliper log
FANAL – formation analysis sheet log
FANG – friction angle
FAR – field auxiliary room
FAT – factory acceptance testing
FB – full bore
FBE – fusion-bonded epoxy
FBHP – flowing bottom-hole pressure
FBHT – flowing bottom-hole temperature
FC – float collar
FC – fail closed (valve or damper)
FCGT – flood clean gauge test
FCM – flow control module
FCP – final circulating pressure
FCV – flow control valve
FCVE – F-curve log
FDC – formation density log
FDF – forced-draft fan
FDP – field development plan
FDS – functional design specification
FDT – fractional dead time
FEED – front-end engineering design
FEL – from east line
FER – field equipment room
FER – formation evaluation report
FEWD – formation evaluation while drilling
FFAC – formation factor log
FFM – full field model
FG – fiberglass
FGHT – flood gauge hydrotest
FRP – fiberglass reinforced plastics
FGEOL – final geological report
FH – full-hole tool joint
FI – final inspection
FID – final investment decision
FID – flame ionisation detection
FIH – finish in hole (tripping pipe)
FIL – FIL log
– free issue (materials)
FINST – final stratigraphic report
FINTP – formation interpretation
FIP – flow-induced pulsation
FIT – fairing intervention tool
FIT – fluid identification test
FIT – formation integrity test
FIT – formation interval tester
FIT – flow indicator transmitter
FIV – flow-induced vibration
FIV – formation isolation valve
FJC – field joint coating
FL – F log
FL – fail locked (valve or damper)
FL – fluid level
FLAP – fluid level above pump
FLB – field logistics base
FLDF – flying lead deployment frame
FLIV – flowline injection valve
FLIV – flowline isolation valve
FLET – flowline end termination
aFLET – actuated flowline end termination
FLNG – floating liquefied natural gas
FLOG – FLOG PHIX RHGX log
FLOPR – flow profile report
FLOT – flying lead orientation tool
FLOW – flow and buildup test report
FLRA – field-level risk assessment
FLS – fluid sample
FLT – fault (geology)
FLT – flying lead termination
FLTC – fail locked tending to close
FLTO – fail locked tending to open
FMD – flooded member detection
FMEA – failure modes and effects analysis
FMECA – failure modes, effects, and criticality analysis
FMI – formation micro imaging log (azimuthal microresistivity)
FMP – formation microscan report
FMP – Field Management Plan
FMS – formation multi-scan log; formation micro-scan log
FMS – flush-mounted slips
FMT – flow management tool
FMTAN – FMT analysis report
FNL – from north line
FO – fail open (valve or damper)
FOBOT – fibre optic breakout tray
FOET – further offshore emergency training
FOF – face of flange
FOH – finish out of hole (tripping pipe)
FOSA – field operating services agreement
FOSV – full-opening safety valve
FPDM – fracture potential and domain modelling/mapping
FPH – feet per hour
FPIT – free-point indicator tool
FPL – flow analysis log
FPLP – freshman petroleum learning program (Penn State)
FPLAN – field plan log
FPS – field production system
FPO – floating production and offloading – vessel with no or very limited (process only) on-board produced fluid storage capacity.
FPSO – floating production storage and offloading vessel
FPU – floating processing unit
FRA – fracture log
FRARE – fracture report
FRES – final reserve report
FS – fail safe
FSB – flowline support base
FSI – flawless start-up initiative
FSL – from south line
FSLT – flexible sealine lifting tool
FSO – floating storage offloading vessel
FSR – facility status report
FSU – floating storage unit
FT – formation tester log
FTHP – Flowing Tubing Head Pressure
FTL – field team leader
FTM – fire-team member
FTP – first tranche petroleum
FTP – field terminal platform
FTR – function test report
FTRE – formation testing report
FULDI – full diameter study report
FV – funnel viscosity
FV – float valve
FVF – formation volume factor
FWHP – flowing well-head pressure
FWKO – free water knock-out
FWL – free water level
FWL – from West line
FWR – final well report
FWV – flow wing valve (also known as production wing valve on a christmas tree)
FR – flow rate
G
G/C – gas condensate
GC – gathering center
G&P – gathering and processing
G&T – gathering and transportation
GALT – gross air leak test
GAS – gas log
GASAN – gas analysis report
GBS – gravity-based structure
GBT – gravity base tank
GC – Gauge Cutter
GCB – generator circuit breaker
GCLOG – graphic core log
GCT – GCT log
GDAT – geodetic datum
GDE – gross depositional environment
GDIP – geodip log
GDT – gas down to
GE – condensate gas equivalent
GE – ground elevation (also GR, or GRE)
GEOCH – geochemical evaluation
GEODY – GEO DYS log
GEOEV – geochemical evaluation report
GEOFO – geological and formation evaluation report
GEOL – geological surveillance log
GEOP – geophone data log
GEOPN – geological well prognosis report
GEOPR – geological operations progress report
GEORE – geological report
GGRG – gauge ring
GIIP – gas initially in place
GIH – go in hole
GIP – gas in place
GIS – geographic information system
GL – gas lift
GL – ground level
GLE – ground level elevation (generally in metres above mean sea level)
GLM – gas lift mandrel (alternative name for side pocket mandrel)
GLR – gas-liquid ratio
GLT – GLT log
GLV – gas lift valve
GLW –
GM – gas migration
GOC – gas oil contact
GOM – Gulf of Mexico
GOP – geological operations report
GOR – gas oil ratio
GOSP – gas/oil separation plant
GPIT – general-purpose inclinometry tool (borehole survey)
GPLT – geol plot log
GPTG – gallons per thousand gallons
GPM – gallons per Mcf
GPSL – geo pressure log
GR – ground level
GR – gamma ray
GR – gauge ring (measure hole size)
GRAD – gradiometer log
GRE – ground elevation
GRLOG – grapholog
GRN – gamma ray neutron log
GRP – glass-reinforced plastic
GRV – gross rock volume
GRSVY – gradient survey log
GS – gas supplier
GS – gel strength
GST – GST log
GTC/G –gas turbine compressor/generator
GTL – gas to liquids
GTW – gas to wire
GUN – gun set log
GWC – gas-water contact
GWR – guided wave radar
GWREP – geo well report
H
HAT – highest astronomical tide
HAZ – heat-affected zone
HAZID – hazard identification (meeting)
HAZOP – hazard and operability study (meeting)
HBE – high-build epoxy
HBP – held by production
HC – hydrocarbons
HCAL – HRCC caliper (in logs)(in inches)
HCCS – horizontal clamp connection system
HCM – horizontal connection module (to connect the christmas tree to the manifold)
HCS – high-capacity square mesh screens
HD – head
HDA – helideck assistant
HDD – horizontal directional drilling
HDPE – high-density polyethylene
HDT – high-resolution dipmeter log
HDU – horizontal drive unit
HEXT – hex diplog
HFE – human factors engineering
HFL – hydraulic flying lead
HGO – heavy gas oil
HGS – high (specific-)gravity solids
HH – horse head (on pumping unit)
HHP – hydraulic horsepower
HI – hydrogen index
HiPAP – high-precision acoustic positioning
HIPPS – high-integrity pressure protection system
HIRA – hazard identification and risk assessment
HISC – hydrogen-induced stress cracking
HKLD – hook load
HL – hook load
HLCV – heavy-lift crane vessel
HLO – heavy load-out (facility)
HLO – helicopter landing officer
Hmax – maximum wave height
HNGS – flasked hostile natural gamma-ray spectrometry tool
HO – hole opener
HOB – hang on bridle (cable assembly)
HMR – heating medium return
HMS – heating medium supply
HP – hydrostatic pressure
HPAM – partially hydrolyzed polyacrylamide
HPGAG – high-pressure gauge
HPHT – high-pressure high-temperature
HPPS – HP pressure log
HPU – hydraulic power unit
HPWBM – high-performance water-based mud
HRCC – HCAl of caliper (in inches)
HRLA – high-resolution laterolog array (resistivity logging tool)
HRF – hyperbaric rescue facility/vessel
HRSG – heat recovery steam generator
Hs – significant wave height
HSE – health, safety and environment or Health & Safety Executive (United Kingdom)
HSV – hyperbaric support vessel
HTHP – high-temperature high pressure
HTM – helideck team member
HVDC – high voltage direct current
HWDP – heavy-weight drill pipe (sometimes spelled hevi-wate)
HUD – hold-up depth
HUN – hold-up nipple
HUET – helicopter underwater escape training
HVAC – heating, ventilation and air-conditioning
HWDP – heavy weight drill pipe
HYPJ – hyperjet
HYROP – hydrophone log
I
I:P – injector to producer ratio
IADC – International Association of Drilling Contractors
IAT – internal active turret
IBC – intermediate bulk container
IC – instrument cable
ICoTA – Intervention and Coiled Tubing Association
ICC – isolation confirmation (or control) certificate
ICD – inflow control device
ICEx / IECEx – International Electrotechnical Commission system for certification to standards relating to equipment for use in explosive atmospheres (EEHA)
ICP – initial circulating pressure
ICP – intermediate casing point
ICP – inductively coupled plasma
ICSS – integrated controls and safety system
ICSU – integrated commissioning and start-up
ICV – interval control valve
ICV – integrated cement volume (of borehole)
ICW – incomplete work
ID – inner or internal diameter (of a tubular component such as a casing)
IDC – intangible drilling costs
IDEL – IDEL log
IEB – induction electro BHC log
IEL – induction electrical log
IF – internal flush tool joint
iFLS – intelligent fast load shedding
IFP – French Institute of Petroleum (Institut Français du Petrole)
IFT – interfacial tension
IGPE – immersion grade phenolic epoxy
IGV – inlet guide vane
IH – gamma ray log
IHEC – isolation of hazardous energy certificate
IHUC – installation, hook-up and commissioning
IHV – integrated hole volume (of borehole)
IIC – infield installation contractor
IJL – injection log
IL – induction log
ILI – inline inspection (intelligent pigging)
ILOGS – image logs
ILT – inline tee
IMAG – image analysis report
IMCA – International Marine Contractors Association
IMPP – injection-molded polypropylene coating system
IMR – inspection, maintenance, and repair
INCR – incline report
INCRE – incline report
INDRS – IND RES sonic log
INDT – INDT log
INDWE – individual well record report
INJEC – injection falloff log
INS – insufficient sample
INS – integrated navigation system
INSUR – inrun survey report
INVES – investigative program report
IOC – international oil company
IOM – installation, operation and maintenance manual
IOS – internal olefin sulfonate
IOS – isomerized olefin sulfonate
IP – ingress protection
IP – Institute of Petroleum, now Energy Institute
IP – intermediate pressure
IPAA – Independent Petroleum Association of America
IPC – installed production capacity
IPLS – IPLS log
IPR – inflow performance relationship
IPT – internal passive turret
IR – interpretation report
IRC – inspection release certificate
IRDV – intelligent remote dual valve
IRTJ – IRTJ gamma ray slimhole log
ISD – instrument-securing device
ISF – ISF sonic log
ISFBG – ISF BHC GR log
ISFCD – ISF conductivity log
ISFGR – ISF GR casing collar locator log
ISFL – ISF-LSS log
ISFP – ISF sonic true vertical depth playback log
ISFPB – ISF true vertical depth playback log
ISFSL – ISF SLS MSFL log
ISIP – initial shut-in pressure
ISSOW – integrated safe system of work
ISV – infield support vessel
ITD – internal turret disconnectable
ITO – inquiry to order
ITR – inspection test record
ITS – influx to surface
ITT – internal testing tool (for BOP test)
IUG – instrument utility gas
IWCF – International Well Control Federation
IWOCS – installation/workover control system
IWTT – interwell tracer test
J
J&A – junked and abandoned
JB – junk basket
JHA – job hazard analysis
JIB – joint-interest billing
JLT – J-lay tower
JSA – job safety analysis
JT – Joule-Thomson (effect/valve/separator)
JTS – joints
JU – jack-up drilling rig
JV – joint venture
JVP – joint venture partners/participants
K
KB – kelly bushing
KBE – kelly bushing elevation (in meters above sea level, or meters above ground level)
KBG – kelly bushing height above ground level
KBUG – kelly bushing underground (drilling up in coal mines, West Virginia, Baker & Taylor drilling)
KCl – potassium chloride
KD – kelly down
KMW – kill mud weight
KOEBD – thousand oil-equivalent barrels per day (gas converted to oil-equivalent at 6 million cubic feet = 1 thousand barrels; see the conversion sketch at the end of this section)
KOH – potassium hydroxide
KOP – kick-off point (directional drilling)
KOP – kick-off plug
KP – kilometre post
KRP – kill rate pressure
KT – kill truck
KLPD – kiloliters per day
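Several entries above (KOEBD, Mboe, MMcfe and the like) rest on the conventional gas-to-oil-equivalence arithmetic of roughly 6,000 cubic feet of gas per barrel of oil equivalent. The short sketch below, with hypothetical function names, illustrates that conversion; it is an example of the convention, not part of the glossary.

```python
# Gas-to-oil-equivalent conversion behind entries such as KOEBD, Mboe, MMcfe.
# Conventional factor: 6,000 cubic feet of gas ~ 1 barrel of oil equivalent,
# so 6 million cubic feet (6 MMcf) ~ 1 thousand barrels (1 Mbbl).
CF_PER_BOE = 6_000  # cubic feet of gas per barrel of oil equivalent

def gas_to_boe(cubic_feet: float) -> float:
    """Convert a gas volume in cubic feet to barrels of oil equivalent."""
    return cubic_feet / CF_PER_BOE

def boe_to_gas(barrels: float) -> float:
    """Convert barrels of oil equivalent to cubic feet of gas."""
    return barrels * CF_PER_BOE

print(gas_to_boe(6_000_000))  # 6 MMcf  -> 1000.0 boe (1 Mboe)
print(boe_to_gas(1_000))      # 1 Mboe  -> 6,000,000 cf (6 MMcf)
```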
L
LACT – lease automatic custody transfer
LAH – lookahead
LAOT – linear activation override tool
LARS – launch and recovery system
LAS – Log ASCII standard
LAT – lowest astronomical tide
LBL – long baseline (acoustics)
LC – locked closed
LCM – lost circulation material
LCNLG – LDT CNL gamma ray log
LCR – local control room
LCV – level control valve
L/D – lay down (such as tubing or rods)
LD – lay down (such as tubing or rods)
LDAR – leak detection and repair
LDHI – low-dosage hydrate inhibitor
LDL – litho density log
LDS – leak detection system (pipeline monitoring)
LDTEP – LDT EPT gamma ray log
LEAKL – leak detection log
LEPRE – litho-elastic property report
LER – lands eligible for remining or land equivalent ratio
LER – Local Equipment Room
LGO – Light Gas Oil
LGR – Liquid Gas Ratio
LGS – Low (specific-)Gravity Solids
LHT – Left Hand Turn
LIC – License
LIB – Lead Impression Block
LINCO – Liner and Completion Progress Report
LIOG – Lithology Log
LIT – Lead Impression Tool
LIT – level indicator transmitter
LITDE – Litho Density Quicklook Log
LITHR – Lithological Description Report
LITRE – Lithostratigraphy Report
LITST – Lithostratigraphic Log
LKO – Lowest Known Oil
LL – Laterolog
LMAP – Location Map
LMRP – Lower Marine Riser Package
LMTD – Log Mean Temperature Difference
LMV – Lower Master Valve (on a Xmas tree)
LNG – Liquefied Natural Gas
LO – Locked Open
LOA – Letter of Authorisation/Agreement/Authority
LOD – Lines of Defence
LOE – Lease Operating Expenses
LOGGN – Logging Whilst Drilling
LOGGS – Lincolnshire Offshore Gas Gathering System
LOGRS – Log Restoration Report
LOGSM – Log Sample
LOK – Low Permeability
LOKG – Low Permeability Gas
LOKO – Low Permeability Oil
LOLER – Lifting Operations and Lifting Equipment Regulations
LOPA – Layers of protection analysis IEC 61511
LOT – Leak-Off Test
LOT – Linear Override Tool
LOT – Lock Open Tool
LOTO – Lock Out / Tag Out
LP – Low Pressure
LPG – Liquefied Petroleum Gas
LPH – Litres Per Hour
LPWHH – Low Pressure Well Head Housing
LQ – Living Quarters
LRA – Lower Riser Assembly
LRG – Liquified Refinery Gas
LRP – Lower Riser Package
LSBGR – Long Spacing BHC GR Log
LSD – Land Surface Datum
LSP – Life Support Package
LSSON – Long Spacing Sonic Log
LT – Linear Time or Lag Time
LTA – Land Treatment Area
L&T – Load and Test
LTC – Long Thread and Coupled
LT&C – Long Thread and Coupled
LTHCP – Lower Tubing Hanger Crown Plug
LTI – Lost Time Incident (Frequency Rate)
LTP – liner shaker, tensile bolting cloth, perforated panel backing
LTX – Low temperature extraction unit
LUMI – Luminescence Log
LUN – Livening Up Notice
LVEL – Linear Velocity Log
LVOT – Linear Valve Override Tool
LWD – Logging While Drilling
LWOL – Last Well on Lease
LWOP – Logging Well on Paper
M
M or m – prefix designating a number in thousands (not to be confused with SI prefix M for mega- or m for milli)
m – metre
MAASP – maximum acceptable [or allowable] annular surface pressure
MAC – multipole acoustic log
MACL – multiarm caliper log
MAE – major accident event
MAGST – magnetostratigraphic report
MAL – Master Acronym List
MAOP – maximum allowable operating pressure
MAP – metrol acoustic processor
MARA – maralog
MAST – sonic tool (for recording waveform)
MAWP – maximum allowable working pressure
MBC – marine breakaway coupling
MBC – membrane brine concentrator
Mbd – thousand barrels per day
MBES – multibeam echosounder
Mbod – thousand barrels of oil per day
Mboe – thousand barrels of oil equivalent
Mboed – thousand barrels of oil equivalent per day
MBP – mixed-bed polisher
Mbpd – thousand barrels of oil per day
MBR – minimum bend radius
MBRO – multi-bore restriction orifices
MBT – methylene blue test
MBWH – multi-bowl wellhead
MCC – motor control centre
MCD – mechanical completion dossier
Mcf – thousand cubic feet of natural gas
Mcfe – thousand cubic feet of natural gas equivalent
MCHE – main cryogenic heat exchanger
MCM – manifold choke module
MCP – monocolumn platform
MCS – manifold and connection system
MCS – master control station
MCSS – multi-cycle sliding sleeve
mD – millidarcy, measure of permeability, with units of area
MD – measured depth
MDO – marine diesel oil
MDR – master document register
MDRT – measured depth referenced to rotary table zero datum
MD – measurements/drilling log
MDEA – methyl diethanolamine (aMDEA)
MDL – methane drainage licence (United Kingdom), a type of onshore licence allowing natural gas to be collected "in the course of operations for making and keeping safe mines whether or not disused"
MDSS – measured depth referenced to mean sea level zero datum – "subsea" level
MDT – modular formation dynamic tester, a tool used to get formation pressure in the hole (not borehole pressure, which the PWD measures). The MDT can be run on wireline or on drill pipe
MDR – mud damage removal (acid bullheading)
MEA – monoethanolamine
MEG – monoethylene glycol
MEIC – Mechanical Electrical Instrumentation Commission
MeOH – methanol (CH3OH)
MEPRL – mechanical properties log
MER – Maximum Efficient Rate
MERCR – mercury injection study report
MERG – merge FDC/CNL/gamma ray/dual laterolog/micro SFL log
MEST – micro-electrical scanning tool
MF – marsh funnel (mud viscosity)
MFCT – multifinger caliper tool
MGL – magnelog
MGS – Mud Gas Separator
MGU – Motor Gauge Unit
MGPS – marine growth prevention system
MHWN – mean high water neaps
MHWS – mean high water springs
MLE – Motor Lead Extension
MLH – mud liner hanger
MIFR – mini frac log
MINL – minilog
MIPAL – micropalaeo log
MIRU – move in and rig up
MIST – minimum industry safety training
MIT – mechanical integrity test
MIYP – maximum internal yield pressure
mKB – meters below kelly bushing
ML – mud line (depth reference)
ML – microlog, or mud log
MLL – microlaterolog
MLF – marine loading facility
MLWN – mean low water neaps
MLWS – mean low water springs
mm – millimetre (SI unit)
MM – prefix designating a number in millions (thousand-thousand)
MMbod – million barrels of oil per day
MMboe – million barrels of oil equivalent
MMboed – million barrels of oil equivalent per day
MMbpd – million barrels per day
MMcf – million cubic feet (of natural gas)
MMcfe – million cubic feet (of natural gas equivalent)
MMcfge – million cubic feet (of natural gas equivalent)
MMS – Minerals Management Service (United States)
MMscfd – million standard cubic feet per day
MMTPA – millions of metric tonnes per annum
MMstb – million stock barrels
MNP – merge and playback log
MODU – mobile offshore drilling unit (either of jack-up drill rig or semi-submersible rig or drill ship)
MOF – marine offloading facility
MOPO – matrix of permitted operations
MOPU – mobile offshore production unit (to describe jack-up production rig, or semi-submersible production rig, or floating production, or storage ship)
MOT – materials/marine offloading terminal
MOV – motor operated valve
MPA – micropalaeo analysis report
MPD – managed pressure drilling
MPFM – multi-phase flow meter
MPK – merged playback log
MPP – multiphase pump
MPQT – manufacturing procedure qualification test
MPS – manufacturing procedure specification
MPSP – maximum predicted surface pressure
MPSV – multi-purpose support vessel
MPV – multi-purpose vessel
MQC – multi-quick connection plate
MR – marine riser
MR – mixed refrigerant
MR – morning report
MRBP – magna range bridge plug
MRC – maximum reservoir contact
MRCV – multi-reverse circulating valve
MRIT – magnetic resonance imaging tool
MRIRE – magnetic resonance image report
MRP – material requirement planning
MRR – material receipt report
MRT – marine riser tensioners
MRT – mechanical run test
MRX – magnetic resonance expert (wireline NMR tool)
MSCT – mechanical sidewall coring tool
MSDS – material safety data sheet
MSFL – micro SFL log; micro-spherically focussed log (resistivity)
MSI – mechanical and structural inspection
MSIP – modular sonic imaging platform (sonic scanner)
MSIPC – multi-stage inflatable packer collar
MSL – mean sea level
MSL – micro spherical log
MSS – magnetic single shot
MST – MST EXP resistivity log
MSV – multipurpose support vessel
MTBF – mean time between failures
MT – motor temperature; DMT parameter for ESP motor
MTO – material take-off
MTT – MTT multi-isotope trace tool
M/U – make up
MUD – mud log
MUDT – mud temperature log
MuSol – mutual solvent
MVB – master valve block on christmas tree
MVC – minimum volume commitment
MW – mud weight
MWD – measurement while drilling
MWDRE – measurement while drilling report
MWP – maximum working pressure
MWS – marine warranty survey
N
NACE – National Association of Corrosion Engineers
NAPE – Nigerian Association of Petroleum Explorationists
NAM – North American
NAPF – non-aqueous phase fluid
NAPL – non-aqueous phase liquid
NASA – non-active side arm (term used in North Sea oil for kill wing valve on a christmas tree)
NAVIG – navigational log
NB – nominal bore
NCC – normally clean condensate
ND – nipple down
NDE – non-destructive examination
NEFE – non-emulsifying iron inhibitor (usually used with hydrochloric acid)
NEUT – neutron log
NFG – 'no fucking good', used for marking damaged equipment
NFI – no further investment
NFW – new field wildcat, Lahee classification
NG – natural gas
NGDC – national geoscience data centre (United Kingdom)
NGL – natural gas liquids
NGR – natural gamma ray
NGRC – national geological records centre (United Kingdom)
NGS – NGS log
NGSS – NGS spectro log
NGT – natural gamma ray tool
NGTLD – NGT LDT QL log
NGLQT – NGT QL log
NGTR – NGT ratio log
NHDA – National Hydrocarbons Data Archive (United Kingdom)
NHPV – net hydrocarbon pore volume
NL-NG – No loss-no gain
NMDC – non-magnetic drill collar
NMHC – non-methane hydrocarbons
NMR – nuclear magnetic resonance log
NMVOC – non-methane volatile organic compounds
NNF – normally no flow
NNS – northern North Sea
NOISL – noise log
NOC – National Oil Company
NORM – naturally-occurring radioactive material
NP – non-producing well
NPD – Norwegian Petroleum Directorate
NPS – nominal pipe size (sometimes NS)
NPSH(R) – net-positive suction head (required)
NPT – Non-Productive Time (used during drilling or well intervention operations mainly, malfunction of equipment or the lack of personnel competencies that result in loss of time, which is costly)
NPV – net present value
NRB – not required back
NRPs – non-rotating protectors
NRI – net revenue interest
NRV – non-return valve
NPW – new pool wildcat, Lahee classification
NS – North Sea; can also refer to the North Slope Borough, Alaska, the North Slope, which includes Prudhoe Bay Oil Field (the largest US oil field), Kuparuk Oil Field, Milne Point, Lisburne, and Point McIntyre among others
NTHF – non-toxic high flash
NTP – Normal temperature and pressure
NTU – nephelometric turbidity unit
NUBOP – nipple (ed),(ing) up blow-out preventer
NUI – normally unattended installation
NUMAR – nuclear and magnetic resonance – image log
O
O&G – oil and gas
O&M – operations and maintenance
O/S – overshot, fishing tool
OBCS – ocean bottom cable system
OBDTL – OBDT log
OBEVA – OBDT evaluation report
OBM – oil-based mud
OCD – Oil Conservation Division
OBO – operated by others
OCIMF – Oil Companies International Marine Forum
OCI – oil corrosion inhibitor (vessels)
OCL – quality control log
OCM – offshore construction manager
OCS – offshore construction supervisor
OCTG – oil country tubular goods (oil well casing, tubing, and drill pipe)
OD – outer diameter (of a tubular component such as casing)
ODT – oil down to
OFE – oil field equipment
OFST – offset vertical seismic profile
OEM – original equipment manufacturer
OFIC – offshore interim completion certificate
OGA – Oil and Gas Authority (UK oil and gas regulatory authority)
OH – open hole
OH – open hole log
OHC – open hole completion
OHD – open hazardous drain
OHUT – offshore hook-up team
OI – oxygen index
OIM – offshore installation manager
OLAF – offshore footless loading arm
OMC – Offshore Material Coordinator
OMRL – oriented micro-resistivity log
ONAN – oil natural air natural cooled transformer
ONRR – Office of Natural Resources Revenue (formerly MMS)
OOE – offshore operation engineer (senior technical authority on an offshore oil platform)
OOIP – original oil in place
OOT/S – out of tolerance/straightness
O/P – Over Pull
OPITO – offshore petroleum industry training organization
OPEC – Organization of Petroleum Exporting Countries
OPL – operations log
OPRES – overpressure log
OPS – operations report
ORICO – oriented core data report
ORM – operability reliability maintainability
ORRI – overriding royalty interest
ORF – onshore receiving facility
OS&D – over, short, and damage report
OS – online survey
OSA – Offshore Safety Advisor
OSV – offshore supply vessel
OT – a well on test
OT – off tree
OTDR – optical time domain reflectometry
OTIP – operational testing implementation plan
OTL – operations team leader
OTP – operational test procedure
OTR – order to remit
OTSG – one-time through steam generator
OWC – oil-water contact
OUT – outpost, Lahee classification
OUT – oil up to
OVCH – oversize charts
OVID – offshore vessel inspection database
P
P – producing well
P&A – plug(ged) and abandon(ed) (well)
PA – producing asset
PA – polyamide
PA – producing asset with exploration potential
PACO – process, automation, control and optimisation
PACU – packaged air conditioning unit
PADPRT – pressure assisted drillpipe running tool
PAGA – public address general alarm
PAL – palaeo chart
PALYN – palynological analysis report
PAR – pre-assembled rack
PAU – pre-assembled unit
PBDMS – playback DMSLS log
PBHL – proposed bottom hole location
PBR – polished bore receptacle (component of a completion string)
PBD – Pason billing system
PBTD – plug back total depth
PBU – pressure build-up (applies to integrity testing on valves)
PCA – production concession agreement
PCB – polychlorinated biphenyl
PCCC – pressure containing anti‐corrosion caps
PCCL – perforation casing collar locator log
PCDC – pressure-cased directional (geometry i.e. borehole survey) MWD tool
PCE – pressure control equipment
PCDM – power and control distribution module
PCKR – packer
PCMS – polymer coupon monitoring system
PCN – process control network
PCO – pre-commission preparations (pipeline)
PCOLL – perforation and collar
PCP – progressing cavity pump
PCP – possible condensate production
PCPT – piezo-cone penetration test
PCS – process control system
PDC – perforation depth control
PDC – polycrystalline diamond compact (a type of drilling bit)
PDG/PDHG – permanent downhole gauge
PDGB – permanent drilling guide base
PDKL – PDK log
PDKR – PDK 100 report
PDM – positive displacement motor
PDMS – permanent downhole monitoring system
PDP – proved developed producing (reserves)
PDP – positive displacement pump
PDPM – power distribution protection module
PDNP – proved developed not producing
PDR – physical data room
PDT – differential pressure transmitter
PE – petroleum engineer
PE – professional engineer
PE – production engineer
PE – polyethylene
PE – product emulsion
PE – production enhancement
PEA – palaeo environment study report
PED – pressure equipment directive
PEDL – petroleum exploration and development licence (United Kingdom)
PEFS – process engineering flow scheme
PENL – penetration log
PEP – PEP log
PERC – powered emergency release coupling
PERDC – perforation depth control
PERFO – perforation log
PERM – permeability
PERML – permeability log
PESGB – Petroleum Exploration Society of Great Britain
PETA – petrographical analysis report
PETD – petrographic data log
PETLG – petrophysical evaluation log
PETPM – petrography permeametry report
PETRP – petrophysical evaluation report
PEX – platform express toolstring (resistivity, porosity, imaging)
PFC – perforation formation correlation
PFD – process flow diagram
PFD – probability of failure on demand
PFE – plate/frame heat exchanger
PFHE – plate fin/frame heat exchanger
PFPG – perforation plug log
PFREC – perforation record log
PG – pressure gauge (report)
PGC – Potential Gas Committee
PGB – permanent guide base
PGOR – produced gas oil ratio
PGP – possible gas production
PH – phasor log
PHASE – phasor processing log
PHB – pre-hydrated bentonite
PHC – passive heave compensator
PHOL – photon log
PHPU – platform hydraulic power unit
PHPA – partially hydrolyzed polyacrylamide
PHYFM – physical formation log
PI – productivity index
PI – permit issued
PI – pressure indicator
P&ID – piping and instrumentation diagram
PINTL – production interpretation
PIP – pump intake pressure
PIP – pipe in pipe
PIT – pump intake temperature
PJSM – pre-job safety meeting
PL – production license
PLEM – pipeline end manifold
PLES – pipeline end structure
PLET – pipeline end termination
PLG – plug log
PLR – pig launcher/receiver
PLS – position location system
PLSV – pipelay support vessel
PLT – production logging tool
PLTQ – production logging tool quick-look log
PLTRE – production logging tool report
PLQ – permanent living quarters
PMI – positive material identification
PMM – permanent magnet motor
PMOC – project management of change
PMR – precooled mixed refrigerant
PMV – production master valve
PNP – proved not producing
POB – personnel on board
POBM – pseudo-oil-based mud
POD – plan of development
POF – permanent operations facility
POH – pull out of hole
POOH – pull out of hole
PON – petroleum operations notice (United Kingdom)
POP – pump-out plug
POP – possible oil production
POP – place on production
POR – density porosity log
PORRT – pack off run retrieval tool
POSFR – post-fracture report
POSTW – post-well appraisal report
POSWE – post-well summary report
PP – DXC pressure plot log
PP – pump pressure
PPA – Pounds of Proppant added
PP&A – permanent plug and abandon (also P&A)
ppb – pounds per barrel
PPC – powered positioning caliper (Schlumberger dual-axis wireline caliper tool)
ppcf – pounds per cubic foot
PPD – pour point depressant
PPE – preferred pressure end
PPE – personal protective equipment
PPFG – pore pressure/fracture gradient
ppg – pounds per gallon
PPI – post production inspection/intervention
PPI – post pipelay installation
PPL – pre-perforated liner
pptf – pounds (per square inch) per thousand feet (of depth) – a unit of fluid density/pressure
PPS – production packer setting
PPU – pipeline process and umbilical
PQR – procedure qualification record
PR2 – testing regime to API6A annex F
PRA – production reporting and allocation
PREC – perforation record
PRESS – pressure report
PRL – polished rod liner
PRV – pressure relief valve
PROD – production log
PROTE – production test report
PROX – proximity log
PRSRE – pressure gauge report
PSANA – pressure analysis
PSA – production service agreement
PSA – production sharing agreement
PSC – production sharing contract
PSD – planned shutdown
PSD – pressure safety device
PSD – process shutdown
PSD – pump setting depth
PSE – pressure safety element (rupture disc)
PSIA – pounds per square inch atmospheric
PSIG – pounds per square inch gauge
PSL – product specification level
PSLOG – pressure log
PSM – process safety management
PSP – pseudostatic spontaneous potential
PSP – positive sealing plug
PSPL – PSP leak detection log
PSSR – pre-startup safety review
PSSR – pressure systems safety regulations (UK)
PSQ – plug squeeze log
PST – PST log
PSV – pipe/platform supply vessel
PSV – pressure safety valve
PSVAL – pressure evaluation log
PTA/S – pipeline termination assembly/structure
PTO – permit to operate
PTRO – test rack opening pressure (For a gas lift valve)
PTSET – production test setter
PTTC – Petroleum Technology Transfer Council, United States
PTW – permit to work
PU – pick-up (tubing, rods, power swivel, etc.)
PUD – proved undeveloped reserves
PUN – puncher log
PUR – plant upset report
PUQ – production utilities quarters (platform)
PUWER – Provision and Use of Work Equipment Regulations 1998
PV – plastic viscosity
PVDF – polyvinylidene fluoride
PVSV – pressure vacuum safety valve
PVT – pressure volume temperature
PVTRE – pressure volume temperature report
PW – produced water
PWD – pressure while drilling
PWB – production wing block (XT)
PWHT – post-weld heat treat
PWRI – produced water reinjection
PWV – production wing valve (also known as a flow wing valve on a christmas tree)
Q
QA – quality assurance
QC – quality control
QCR – quality control report
QL – quick-look log
QJ – Quad Joint
R
R/B – rack back
R&M – repair and maintenance
RAC – ratio curves
RACI – responsible / accountable / consulted / informed
RAT – riser assembly tower
RAM – reliability, availability, and maintainability
RAWS – raw stacks VSP log
RBI – risk-based inspection
RBP – retrievable bridge plug
RBS – riser base spool
RCA – root cause analysis
RCRA – Resource Conservation and Recovery Act
RCKST – rig checkshot
RCD – rotating control device
RCI – reservoir characterization instrument (for downhole fluid measurements e.g. spectrometry, density)
RCL – retainer correlation log
RCM – reliability-centred maintenance
RCR – remote component replacement (tool)
RCU – remote control unit
RDMO – rig down move out
RDS – ROV-deployed sonar
RDRT – rig down rotary tools
RDT – reservoir description tool
RDVI – remote digital video inspection
RDWL – rig down wireline
RE – reservoir engineer
REOR – reorientation log
RE-PE – re-perforation report
RESAN – reservoir analysis
RESDV – riser emergency shutdown valve
RESEV – reservoir evaluation
RESFL – reservoir fluid
RESI – resistivity log
RESL – reservoir log
RESOI – residual oil
REZ – renewable energy zone (United Kingdom)
RF – recovery factor
RFCC – ready for commissioning certificate
RFLNG – ready for liquefied natural gas
RFM – riser feeding machine
RFMTS – repeat formation tester
RFO – ready for operations (pipelines/cables)
RFR – refer to attached (e.g., letter, document)
RFSU – ready for start-up
RFT – repeat formation tester
RFTRE – repeat formation tester report
RFTS – repeat formation tester sample
RHA – riser heel anchor
RHD – rectangular heavy duty – usually screens used for shaking
RHT – Right Hand Turn
RIGMO – rig move
RIH – run in hole
RIMS – riser integrity monitoring system
RITT – riser insertion tube (tool)
RKB – rotary kelly bushing (a datum for measuring depth in an oil well)
RLOF – rock load-out facility
RMLC – request for mineral land clearance
RMP – reservoir management plan
RMS – ratcheting mule shoe
RMS – riser monitoring system
RNT – RNT log
ROB – received on board (used for fuel/water received in bunkering operations)
ROCT – rotary coring tool
ROP – rate of penetration
ROP – rate of perforation
ROT – remote-operated tool
ROV/WROV – remotely-operated vehicle/work class remotely-operated vehicle, used for subsea construction and maintenance
ROZ – recoverable oil zone
ROWS – remote operator workstation
RPCM – ring pair corrosion monitoring
RPM – revolutions per minute (rotations per minute)
RRC – Railroad Commission of Texas (governs oil and gas production in Texas)
RROCK – routine rock properties report
RRR – reserve replacement ratio
RSES – responsible for safety and environment on site
RSPP – a publicly-traded oil and gas producer focused on horizontal drilling of multiple stacked pay zones in the oil-rich Permian basin
RSS – rig site survey
RSS – rotary steerable systems
RST – reservoir saturation tool (Schlumberger) log
RTMS – riser tension monitoring system
RTE – rotary table elevation
RTO – real-time operation
RTP/RTS – return to production/service
RTTS – retrievable test-treat-squeeze (packer)
RU – rig up
RURT – rig up rotary tools
RV – relief valve
RVI – remote video inspection
RWD – reaming while drilling
S
SABA – supplied air-breathing apparatus
SAFE – safety analysis function evaluation
SAGD – steam-assisted gravity drainage
SALM – single anchor loading mooring
SAM – subsea accumulator module
SAML – sample log
SAMTK – sample-taker log
SANDA – sandstone analysis log
SAPP – sodium acid pyrophosphate
SAS – safety and automation system
SAT – SAT log
SAT – site acceptance test
SB – SIT-BO log
SBF – synthetic base fluid
SBM – synthetic base mud
SBT – segmented bond tool
SC – seismic calibration
SCADA – supervisory control and data acquisition
SCAL – special core analysis
SCAP – scallops log
SCBA – self-contained breathing apparatus
SCUBA – self-contained underwater breathing apparatus
SCC – system completion certificate
SCD – system control diagram
SCDES – sidewall core description
scf – standard cubic feet (of gas)
scf/STB – standard cubic feet (of gas) / stock tank barrel (of fluid)
SCHLL – Schlumberger log
SCM(MB) – subsea control module (mounting base)
SCO – synthetic crude oil
SCO – sand clean-out
SCR – slow circulation rate
SCR – steel catenary riser
SCRS – slow circulation rates
SCSG – type of pump
SCSSV – surface-controlled subsurface safety valve
SDON – shut down overnight
SPCU – subsea control unit
SCVF – surface casing vent flow (a type of well integrity test)
SD – sonic density
SDFD – shut down for day
SDFN – shut down for night
SDIC – sonic dual induction
SDL – supplier document list
SDM/U – subsea distribution module/unit
SDPBH – SDP bottom hole pressure report
SDSS – super duplex stainless steel
SDT – step draw-down test (sometimes SDDT)
SDU/M – subsea distribution unit/module
SEA – strategic environmental assessment (United Kingdom)
SECGU – section gauge log
SEDHI – sedimentary history
SEDIM – sedimentology
SEDL – sedimentology log
SEDRE – sedimentology report
SEG – Society of Exploration Geophysicists
SEM – subsea electronics module
Semi (or semi-sub) – semi-submersible drilling rig
SEP – surface emissive power
SEPAR – separator sampling report
SEQSU – sequential survey
SF – Self Flowing
SFERAE – global association for the use of knowledge on fractured rock in a state of stress, in the field of energy, culture and environment
SFL – steel flying lead
SG – static gradient, specific gravity
SGR – shale gouge ratio
SGS – steel gravity structure
SGSI – Shell Global Solutions International
SGUN – squeeze gun
SHA – sensor harness assembly
SHC – system handover certificate
SHDT – stratigraphic high resolution dipmeter tool
SHINC – Sundays and holidays included
SHO – stab and hinge over
SHOCK – shock log
SHOWL – show log
SHT – shallow hole test
SI – shut in well
SI – structural integrity
SI – scale inhibitor
SIBHP – Shut in Bottom-Hole Pressure
SIBHT – Shut in Bottom-Hole Temperature
SID – Specific Instruction Document/ Standard Instruction for Drillers
SIT – System Integrity Test
SI/TA – shut in/temporarily abandoned
SIA – social impact assessment
SIC – subsea installation contractor
SICP – shut-in casing pressure
SIDPP – shut-in drill pipe pressure
SIDSM – sidewall sample
SIF – safety instrumented functions (test)
SIGTTO – Society of International Gas Tanker and Terminal Operators
SIL – safety integrity level
SIMCON – simultaneous construction
SIMOPS – simultaneous operations
SIP – shut-in pressure
SIPCOM – simultaneous production and commissioning
SIPES – Society of Independent Professional Earth Scientists, United States
SIPROD – simultaneous production and drilling
SIS – safety-instrumented system
SIT – system integration test
FR SIT – field representation SIT
SIT – (casing) shoe integrity test
SITHP – shut-in tubing hanger/head pressure
SITT – single TT log
SIWHP – shut-in well head pressure
SKPLT – stick plot log
SL – seismic lines
SLS – SLS GR log
SLT – SLT GR log
SM or S/M – safety meeting
SMA – small amount
SMLS – seamless pipe
SMO – suction module
SMPC – subsea multiphase pump, which can increase flowrate and pressure of the untreated wellstream
SN – seat nipple
SNAM – Societá Nazionale Metanodotti now Snam S.p.A. (Italy)
SNP – sidewall neutron porosity
SNS – southern North Sea
S/O – Slack Off
SOBM – synthetic oil-based mud
SOLAS – safety of life at sea
SONCB – sonic calibration log
SONRE – sonic calibration report
SONWR – sonic waveform report
SONWV – sonic waveform log
SOP – Safe Operating Procedure
SOP – shear-out plug
SOP – Standard Operating Procedure
SOR – senior operations representative
SOW – Scope of Work
SOW – slip-on wellhead
SP – set point
SP – shot point (geophysics)
SP – spontaneous potential (well log)
SPAMM – subsea pressurization and monitoring manifold
SPCAN – special core analysis
SPCU – subsea power and control unit
SPE – Society of Petroleum Engineers
SPEAN – spectral analysis
SPEL – spectralog
spf – shots per foot (perforation density)
SPFM – single-phase flow meter
SPH – SPH log
SPHL – self-propelled hyperbaric lifeboats
SPM – side pocket mandrel
SPM – strokes per minute (of a positive-displacement pump)
spm – shots per meter (perforation density)
SPMT – self-propelled modular transporter
SPOP – spontaneous potential log
SPP – stand pipe pressure
SPR – slow pumping rate
SPROF – seismic profile
SPS – subsea production systems
SPT – shallower pool test, Lahee classification
SPUD – spud date (started drilling well)
SPWLA – Society of Petrophysicists and Well Log Analysts
SQL – seismic quicklook log
SQZ – squeeze job
SR – shear rate
SRD – seismic reference datum, an imaginary horizontal surface at which TWT is assumed to be zero
SREC – seismic record log
SRJ – semi-rigid jumper
SRK – Soave-Redlich-Kwong
SRO – surface read-out
SRP – sucker rod pump
SRB – sulfate-reducing bacteria
SRT – site receival test
SS – subsea, as in a datum of depth, e.g. TVDSS (true vertical depth subsea)
SSCC – sulphide stress corrosion cracking
SSCP – subsea cryogenic pipeline
SSCS – subsea control system
SSD – sub-sea level depth (in metres or feet, positive value in downwards direction with respect to the geoid)
SSD – sliding sleeve door
SSFP – subsea flowline and pipeline
SSG – sidewall sample gun
SSH – steam superheater
SSIC – safety system inhibit certificate
SSIV – subsea isolation valve
SSTV – subsea test valve
SSM – subsea manifolds
SSMAR – synthetic seismic marine log
SSPLR – subsea pig launcher/receiver
SSSL – Supplementary Seismic Survey Licence (United Kingdom), a type of onshore licence
SSSV – sub-surface safety valve
SSTT – subsea test tree
SSU – subsea umbilicals
SSV – surface safety valve
SSWI – subsea well intervention
STAB – stabiliser
STAGR – static gradient survey report
STB – stock tank barrel
STC – STC log
STD – stand (2–3 joints of tubing)
STFL – steel tube fly lead
STG – steam turbine generator
STGL – stratigraphic log
STHE – shell-and-tube heat exchanger
STIMU – stimulation report
STKPT – stuck point
STL – STL gamma ray log
STL – submerged turret loading
STRAT – stratigraphy, stratigraphic
STRRE – stratigraphy report
STOIIP – stock tank oil initially in place
STOOIP – stock tank oil originally in place
STOP – safety training observation program
STP – submerged turret production
STP – standard temperature and pressure
STSH – string shot
STTR – single top tension riser
ST&C – short thread and coupled
STC – short thread and coupled
STU – steel tube umbilical
STV – select tester valve
SUML – summarised log
SUMRE – summary report
SUMST – geological summary sheet
SURF – subsea/umbilicals/risers/flowlines
SURFR – surface sampling report
SURRE – survey report
SURU – start-up ramp-up
SURVL – survey chart log
SUT(A/B) – subsea umbilical termination (assembly/box)
SUTA – subsea umbilical termination assembly
SUTU – subsea umbilical termination unit
SW – salt water
SWC – side wall core
SWD – salt water disposal well
SWE – senior well engineer
SWHE – spiral-wound heat exchanger
SWOT – strengths, weaknesses, opportunities, and threats
SWT – surface well testing
SV – sleeve valve, or standing valve
SVLN – safety valve landing nipple
SWLP – seawater lift pump
SYNRE – synthetic seismic report
SYSEI – synthetic seismogram log
T
T – well flowing to tank
T/T – tangent to tangent
TA – temporarily abandoned well
TA – top assembly
TAC – tubing anchor (or tubing–annulus communication)
TAGOGR – thermally assisted gas/oil gravity drainage
TAN – total acid number
TAPLI – tape listing
TAPVE – tape verification
TAR – true amplitude recovery
TB – tubing puncher log
TBE – technical bid evaluation
TBG – tubing
TBT – through bore tree / toolbox talk
TC – type curve
TCA – total corrosion allowance
TCC – tungsten carbide coating
TCCC – transfer of care, custody and control
TCF – temporary construction facilities
TCF – trillion cubic feet (of gas)
TCI – tungsten carbide insert (a type of rollercone drillbit)
TCP – tubing conveyed perforating (gun)
TCPD – tubing-conveyed perforating depth
TCU – thermal combustion unit
TD – target depth
TD – total depth (depth of the end of the well; also a verb, to reach the final depth, used as an acronym in this case)
TDD – total depth (driller)
TDC – Top Dead Center
TDC – total drilling cost
TDL – total depth (logger)
TDM – touch-down monitoring
TDP – touch-down point
TDS – top drive system
TDS – total dissolved solids
TDT – thermal decay time log
TDTCP – TDT CPI log
TDT GR – TDT gamma ray casing collar locator log
TEA – triethanolamine
TEFC – totally enclosed fan-cooled
TEG – triethylene glycol
TEG – thermal electric generator
TELER – teledrift report
TEMP – temperature log
TETT – too early to tell
TFE – TotalFinaElf (obsolete; now Total S.A.), a major French multinational oil company
TFL – through flow line
TFM – TaskForceMajella research project
TFM – tubular feeding machine
TGB – temporary guide base
TGT / TG – tank gross test
TGOR – total gas oil ratio (GOR uncorrected for gas lift gas present in the production fluid)
TH – tubing hanger
THCP – tubing hanger crown plug
Thr/Th# – thruster ('#' means identification letter/number of the equipment, e.g. thr3 or thr#3 means "thruster no. 3")
THD – tubing head
THERM – thermometer log
THF – tubing hanger flange
THF – tetrahydrofuran (organic solvent)
THP – tubing hanger pressure (pressure in the production tubing as measured at the tubing hanger)
THRT – tubing hanger running tool
THS – tubing head spool
TIE – tie-in log
TIH – trip into hole
TIS – tie-in spool
TIT – tubing integrity test
TIW – Texas Iron Works (pressure valve)
TIEBK – tieback report
TLI – top of logging interval
TLOG – technical log
TLP – tension-leg platform
TMCM – transverse mercator central meridian
TMD – total measured depth in a wellbore
TNDT – thermal neutron decay time
TNDTG – thermal neutron decay time/gamma ray log
TOC – top of cement
TOC – Total organic carbon
TOF – top of fish
TOFD – time of first data sample (on seismic trace)
TOFS – time of first surface sample (on seismic trace)
TOH – trip out of hole
TOOH – trip out of hole
TOL – top of liner
TOL – Top of Lead Cement
TOP – Top of Pipe
TORAN – torque and drag analysis
TOT – Top of Tail Cement
TOVALOP – tanker owners' voluntary agreement concerning liability for oil pollution
TPC – temporary plant configuration
TPERF – tool performance
TQM – total quality management
TR – temporary refuge
TRCFR – total recordable case frequency rate
TRT – tree running tool
TRA – top riser assembly
TRA – tracer log
TRACL – tractor log
TRAN – transition zone
TRD – total report data
TREAT – treatment report
TREP – test report
TRIP – trip condition log
TRS – tubing running services
TRSV – tubing-retrievable safety valve
TRSCSSV – tubing-retrievable surface-controlled sub-surface valve
TRSCSSSV – tubing-retrievable surface-controlled sub-surface safety valve
TSA – thermally-sprayed aluminium
TSA – terminal storage agreement
TSI – temporarily shut in
TSJ – tapered stress joint
TSOV – tight shut-off valve
TSS – total suspended solids
TSTR – tensile strength
TT – torque tool
TT – transit time log
TTOC – theoretical top of cement
TTVBP – through-tubing vented bridge plug
TTRD – through-tubing rotary drilling
TUC – topside umbilical connection
TUC – turret utility container
TUM – tracked umbilical machine
TUPA – topside umbilical panel assembly
TUTA – topside umbilical termination box/unit/assembly (TUTU)
TVBDF – true vertical depth below derrick floor
TV/BIP – ratio of total volume (ore and overburden) to bitumen in place
TVD – true vertical depth
TVDPB – true vertical depth playback log
TVDRT – true vertical depth (referenced to) rotary table zero datum
TVDKB – true vertical depth (referenced to) top kelly bushing zero datum
TVDSS – true vertical depth (referenced to) mean sea level zero datum
TVELD – time and velocity to depth
TVRF – true vertical depth versus repeat formation tester
TWT – two-way time (seismic)
TWTTL – two-way travel time log
U
UBHO – universal bottom hole orientation (sub)
UBI – ultrasonic borehole imager
UBIRE – ultrasonic borehole imager report
UCH – umbilical connection housing
UCIT – ultrasonic casing imaging tool (high resolution casing and corrosion imaging tool)
UCL – unit control logic
UCR – unsafe condition report
UCS – unconfined compressive strength
UCSU – upstream commissioning and start-up
UFJ – upper flex joint
UFR – umbilical flow lines and risers
UGF – universal guide frame
UIC – underground injection control
UKCS – United Kingdom continental shelf
UKOOA – United Kingdom Offshore Operators Association
UKOOG – United Kingdom Onshore Operators Group
ULCGR – uncompressed LDC CNL gamma ray log
UMCA – umbilical midline connection assembly
UMV – upper master valve (from a christmas tree)
UPB – unmanned production buoy
UPL – upper pressure limit
UPR – upper pipe ram
UPT – upper pressure threshold
URA – upper riser assembly
URT – universal running tool
USBL – ultra-short baseline systems
USIT – ultrasonic imaging tool (cement bond logging, casing wear logging)
USGS – United States Geological Survey
UTA/B – umbilical termination assembly/box
UTAJ – umbilical termination assembly jumper
UTHCP – upper tubing hanger crown plug
UTM – universal transverse mercator
UWI – unique well identifier
UWILD – underwater inspection in lieu of dry-docking
UZV – shutdown valve
V
VBR – variable bore ram
VCCS – vertical clamp connection system
VDENL – variation density log
VDL – variable density log
VDU – vacuum distillation unit, used in processing bitumen
VELL – velocity log
VERAN – verticality analysis
VERIF – verification list
VERLI – verification listing
VERTK – vertical thickness
VFC – volt-free contact
VGMS – vent gas monitoring system (flexible riser annulus vent system)
VIR – value-investment ratio
VISME – viscosity measurement
VIV – vortex-induced vibration
VLP – vertical lift performance
VLS – vertical lay system
VLTCS – very-low-temperature carbon steel
VO – variation order
VOCs – volatile organic compounds
VOR – variation order request
VPR – vertical pipe racker
VRS – vapor recovery system
VRR – voidage replacement ratio
VS – vertical section
VSD – variable-speed drive
VSI – versatile seismic imager (Schlumberger VSP tool)
VSP – vertical seismic profile
VSPRO – vertical seismic profile
VTDLL – vertical thickness dual laterolog
VTFDC – vertical thickness FDC CNL log
VTISF – vertical thickness ISF log
VWL – velocity well log
VXT – vertical christmas tree
W
W – watt
WABAN – well abandonment report
WAC – weak acid cation
WAG – water alternating gas (describes an injection well which alternates between water and gas injection)
WALKS – walkaway seismic profile
WAS – well access system
WATAN – water analysis
WAV3 – amplitude (in seismics)
WAV4 – two-way travel time (in seismics)
WAV5 – compensate amplitudes
WAVF – waveform log
WBCO – wellbore clean-out
WBE – well barrier element
WBM – water-based drilling mud
WBS – well bore schematic
WBS – work breakdown structure
WC – watercut
WC – wildcat (well)
W/C – water cushion
WCC – work control certificate
WCT – wet christmas tree
WE – well engineer
WEG – wireline entry guide
WELDA – well data report
WELP – well log plot
WEQL – well equipment layout
WESTR – well status record
WESUR – well summary report
WF – water flood(ing)
WFAC – waveform acoustic log
WGEO – well geophone report
WGFM – wet gas flow meter
WGR – water gas ratio
WGUNT – water gun test
Wh – white
WH – well history
WHIG – whitehouse gauge
WHM – wellhead maintenance
WHP – wellhead pressure
WHRU – waste heat recovery unit
WHSIP – wellhead shut-in pressure
WI – water injection
WI – working interest
WI – work instructions
WIH – working in hole
WIMS – well integrity management system
WIR – water intake risers
WIT – water investigation tool
WITS – Wellsite Information Transfer Specification
WITSML – wellsite information transfer standard markup language
WIPSP – WIP stock packer
WLC – wireline composite log
WLL – wireline logging
WLSUM – well summary
WLTS – well log tracking system
WLTS – well log transaction system
WM – wet mate
WHMIS – workplace hazardous material information systems
WLM – Wireline Measurement
WO – well in work over
WO/O – waiting on orders
WOA – well operations authorization
WOB – weight on bit
WOC – wait on cement
WOC – water/oil contact (or oil/water contact)
WOE – well operations engineer (a key person of well services)
WOM – wait/waiting on material
WOR – water-oil ratio
WORKO – workover
WOS – west of Shetland, oil province on the UKCS
WOW – wait/waiting on weather
WP – well proposal or working pressure
WPC – water pollution control
WPLAN – well course plan
WPQ/S/T – weld procedure qualification/specification/test
WPP – wellhead protection platform
WPR – well prognosis report
WQ – a textural parameter used for CBVWE computations (Halliburton)
WQCA – Water Quality Control Act
WQCB – Water Quality Control Board
WR – wireline retrievable (as in a WR plug)
WR – wet resistivity
WRS – well report sepia
WRSCSSV – wireline-retrievable surface-controlled sub-surface valve
WSCL – well site core log
WSE – well seismic edit
WSERE – well seismic edit report
WSG – wellsite geologist
WSHT – well shoot
WSL – well site log
WSO – water shut-off
WSOG – well-specific operation guidelines
WSP – well seismic profile
WSR – well shoot report
WSS – well services supervisor (leader of well services at the wellsite)
WSS – working spreadsheet (for logging)
WSSAM – well site sample
WSSOF – WSS offset profile
WSSUR – well seismic survey plot
WSSVP – WSS VSP raw shots
WSSVS – WSS VSP stacks
WST – well seismic tool (checkshot)
WSTL – well site test log
WSU – well service unit
wt – wall thickness
WT – well test
WTI – West Texas Intermediate benchmark crude
WTR – water
WUT – water up to
WV – wing valve (from a christmas tree)
WVS – well velocity survey
WWS – wire-wrapped (sand) screens
X
XC – cross-connection, cross correlation
XL or EXL – exploration licence (United Kingdom), a type of onshore licence issued between the First Onshore Licensing Round (1986) and the sixth (1992)
Xln – crystalline (minerals)
XLPE – cross-linked polyethylene
XMAC – cross-multipole array acoustic log
XMAC-E – XMAC elite (next generation of XMAC)
XMRI – extended-range micro-imager (Halliburton)
XMT/XT/HXT – christmas tree
XO – cross-over
XOM – Exxon Mobil
XOV – cross-over valve
XPERM – matrix permeability in the x-direction
XPHLOC – crossplot selection for XPHI
XPOR – crossplot porosity
XPT – formation pressure test log (Schlumberger)
XV – on/off valve (process control)
XYC – XY caliper log (Halliburton)
Y
yd – yard
yl – holdup factor
YP – yield point
yr – year
Z
Z – depth, in the geosciences referring to the depth dimension in any x, y, z data
ZDENP – density log
ZDL – compensated Z-densilog
ZLD – zero liquid discharge
ZOI – zone of influence
See also
Oilfield terminology
References
External links
Network International Glossary July-11
Oil Field Acronyms and Abbreviations July-11
Oil Gas Technical Terms Glossary July-11
Schlumberger Oilfield Glossary July-11
Oil Drum Acronyms July-11
Oiltrashgear Oilfield Acronyms & Terminology November-15
OCIMF Acronyms Oct-11
SPWLA Petrophysical Curve Names and Mnemonics Oct-11
American Royalty Council Glossary Nov-11
Technip Glossary Apr-13
Petroleum industry
Abbreviations
Abbreviations
Drilling technology
Energy-related lists
Lists of abbreviations
Lists of acronyms
Oil exploration | List of abbreviations in oil and gas exploration and production | [
"Chemistry",
"Engineering"
] | 18,825 | [
"Oil platforms",
"Structural engineering",
"Petroleum technology",
"Petroleum industry",
"Petroleum",
"Natural gas technology",
"Oil wells",
"Chemical process engineering"
] |
11,832,736 | https://en.wikipedia.org/wiki/Ryszard%20Engelking | Ryszard Engelking (16 November 1935 – 16 November 2023) was a Polish mathematician. He worked mainly in general topology and dimension theory. He is the author of several influential monographs in this field. The 1989 edition of his General Topology is now a standard reference for topology. Engelking died on 16 November 2023, his 88th birthday.
Scientific work
Apart from his books, Ryszard Engelking is known, among other things, for a generalization to an arbitrary topological space of the "Alexandroff double circle", and for work on completely metrizable spaces, suborderable spaces and generalized ordered spaces. The Engelking–Karłowicz theorem, proved together with Monika Karłowicz, is a statement about the existence of a family of functions from to with topological and set-theoretical applications.
Books
Engelking's books include:
Notes
External links
1935 births
2023 deaths
20th-century Polish mathematicians
Topologists
Translators of Charles Baudelaire
Translators of Gérard de Nerval
People from Sosnowiec | Ryszard Engelking | [
"Mathematics"
] | 218 | [
"Topologists",
"Topology"
] |
47,353 | https://en.wikipedia.org/wiki/Subject%20and%20object%20%28philosophy%29 | The distinction between subject and object is a basic idea of philosophy.
A subject is a being that exercises agency, undergoes conscious experiences, and is situated in relation to other things that exist outside itself; thus, a subject is any individual, person, or observer.
An object is any of the things observed or experienced by a subject, which may even include other beings (thus, from their own points of view: other subjects).
A simple common differentiation for subject and object is: an observer versus a thing that is observed. In certain cases involving personhood, subjects and objects can be considered interchangeable where each label is applied only from one or the other point of view. Subjects and objects are related to the philosophical distinction between subjectivity and objectivity: the existence of knowledge, ideas, or information either dependent upon a subject (subjectivity) or independent from any subject (objectivity).
Etymology
In English the word object is derived from the Latin obiectus (p.p. of obiicere) with the meaning "to throw, or put before or against", from ob-, "against", and the root iacere, "to throw". Some other related English words include objectify (to reify), objective (a future reference), and objection (an expression of protest). Subject uses the same root, but with the prefix sub-, meaning "under".
Broadly construed, the word object names a maximally general category, whose members are eligible for being referred to, quantified over and thought of. Terms similar to the broad notion of object include thing, being, entity, item, existent, term, unit, and individual.
In ordinary language, one is inclined to call only a material object "object". In certain contexts, it may be socially inappropriate to apply the word object to animate beings, especially to human beings, while the words entity and being are more acceptable.
Some authors use object in contrast to property; that is to say, an object is an entity that is not a property. Objects differ from properties in that objects cannot be referred to by predicates. Some philosophers include abstract objects as counting as objects, while others do not. Terms similar to such usage of object include substance, individual, and particular.
There are two definitions of object. The first definition holds that an object is an entity that fails to experience and that is not conscious. The second definition holds that an object is an entity experienced. The second definition differs from the first one in that the second definition allows for a subject to be an object at the same time.
One approach to defining an object is in terms of its properties and relations. Descriptions of all bodies, minds, and persons must be in terms of their properties and relations. For example, it seems that the only way to describe an apple is by describing its properties and how it is related to other things, such as its shape, size, composition, color, temperature, etc., while its relations may include "on the table", "in the room" and "being bigger than other apples". Metaphysical frameworks also differ in whether they consider objects existing independently of their properties and, if so, in what way. The notion of an object must address two problems: the problem of change and the problem of substance. Two leading theories about objecthood are substance theory, wherein substances (objects) are distinct from their properties, and bundle theory, wherein objects are no more than bundles of their properties.
In philosophy
Mahayana Buddhism
In the Mūlamadhyamakakārikā, the Indian philosopher Nagarjuna seizes upon the dichotomy between objects as collections of properties or as separate from those properties to demonstrate that both assertions fall apart under analysis. By uncovering this paradox he then provides a solution (pratītyasamutpāda – "dependent origination") that lies at the very root of Buddhist praxis. Although Pratītyasamutpāda is normally limited to caused objects, Nagarjuna extends his argument to objects in general by differentiating two distinct ideas – dependent designation and dependent origination. He proposes that all objects are dependent upon designation, and therefore any discussion regarding the nature of objects can only be made in light of the context. The validity of objects can only be established within those conventions that assert them.
Cartesian dualism
The formal separation between subject and object in the Western world corresponds to the dualistic framework, in the early modern philosophy of René Descartes, between thought and extension (in common language, mind and matter). Descartes believed that thought (subjectivity) was the essence of the mind, and that extension (the occupation of space) was the essence of matter. For modern philosophers like Descartes, consciousness is a state of cognition experienced by the subject—whose existence can never be doubted as its ability to doubt (and think) proves that it exists. On the other hand, he argues that the object(s) which a subject perceives may not have real or full existence or value, independent of that observing subject.
Substance theory
An attribute of an object is called a property if it can be experienced (e.g. its color, size, weight, smell, taste, and location). Objects manifest themselves through their properties. These manifestations seem to change in a regular and unified way, suggesting that something underlies the properties. The change problem asks what that underlying thing is. According to substance theory, the answer is a substance, that which stands for the change.
According to substance theory, because substances are only experienced through their properties a substance itself is never directly experienced. The problem of substance asks on what basis can one conclude the existence of a substance that cannot be seen or scientifically verified. According to David Hume's bundle theory, the answer is none; thus an object is merely its properties.
German idealism
Subject as a key-term in thinking about human consciousness began its career with the German idealists, in response to David Hume's radical skepticism. The idealists' starting point is Hume's conclusion that there is nothing to the self over and above a big, fleeting bundle of perceptions. The next step was to ask how this undifferentiated bundle comes to be experienced as a unity – as a single subject. Hume had offered the following proposal:
"...the imagination must by long custom acquire the same method of thinking, and run along the parts of space and time in conceiving its objects.
Kant, Hegel and their successors sought to flesh out the process by which the subject is constituted out of the flow of sense impressions. Hegel, for example, stated in his Preface to the Phenomenology of Spirit that a subject is constituted by "the process of reflectively mediating itself with itself."
Hegel begins his definition of the subject at a standpoint derived from Aristotelian physics: "the unmoved which is also self-moving" (Preface, para. 22). That is, what is not moved by an outside force, but which propels itself, has a prima facie case for subjectivity. Hegel's next step, however, is to identify this power to move, this unrest that is the subject, as pure negativity. Subjective self-motion, for Hegel, comes not from any pure or simple kernel of authentic individuality, but rather, it is
"...the bifurcation of the simple; it is the doubling which sets up opposition, and then again the negation of this indifferent diversity and of its anti-thesis" (Preface, para. 18).
The Hegelian subject's modus operandi is therefore cutting, splitting and introducing distinctions by injecting negation into the flow of sense-perceptions. Subjectivity is thus a kind of structural effect – what happens when Nature is diffused, refracted around a field of negativity and the "unity of the subject" for Hegel, is in fact a second-order effect, a "negation of negation". The subject experiences itself as a unity only by purposively negating the very diversity it itself had produced. The Hegelian subject may therefore be characterized either as "self-restoring sameness" or else as "reflection in otherness within itself" (Preface, para. 18).
American pragmatism
Charles S. Peirce of the late-modern American philosophical school of pragmatism, defines the broad notion of an object as anything that we can think or talk about. In a general sense it is any entity: the pyramids, gods, Socrates, the nearest star system, the number seven, a disbelief in predestination, or the fear of cats.
20th century onwards
Continental philosophy
The thinking of Karl Marx and Sigmund Freud provided a point of departure for questioning the notion of a unitary, autonomous Subject, which for many thinkers in the Continental tradition is seen as the foundation of the liberal theory of the social contract. These thinkers opened up the way for the deconstruction of the subject as a core-concept of metaphysics.
Freud's explorations of the unconscious mind added up to a wholesale indictment of Enlightenment notions of subjectivity.
Among the most radical re-thinkers of human self-consciousness was Martin Heidegger, whose concept of Dasein or "Being-there" displaces traditional notions of the personal subject altogether. With Heidegger, phenomenology tries to go beyond the classical dichotomy between subject and object, because they are linked by an inseparable and original relationship, in the sense that there can be no world without a subject, nor the subject without world.
Jacques Lacan, inspired by Heidegger and Ferdinand de Saussure, built on Freud's psychoanalytic model of the subject, in which the split subject is constituted by a double bind: alienated from jouissance when it leaves the Real, it enters into the Imaginary (during the mirror stage), and separates from the Other when it comes into the realm of language, difference, and demand in the Symbolic or the Name of the Father.
Thinkers such as structural Marxist Louis Althusser and poststructuralist Michel Foucault theorize the subject as a social construction, the so-called "poststructuralist subject". According to Althusser, the "subject" is an ideological construction (more exactly, constructed by the "Ideological State Apparatuses"). One's subjectivity exists, "always-already" and is constituted through the process of interpellation. Ideology inaugurates one into being a subject, and every ideology is intended to maintain and glorify its idealized subject, as well as the metaphysical category of the subject itself (see antihumanism).
According to Foucault, it is the "effect" of power and "disciplines" (see Discipline and Punish: the construction of the subject (subjectivation or subjectification) as student, soldier, "criminal", etc.). Foucault believed it was possible to transform oneself; he used the word ethopoiein, from the word ethos, to describe the process. Subjectification was a central concept in Gilles Deleuze and Félix Guattari's work as well.
Analytic philosophy
Bertrand Russell updated the classical terminology with a term, the fact; "Everything that there is in the world I call a fact." Russell uses the term "fact" in two distinct senses. In 1918, facts are distinct from objects. "I want you to realize that when I speak of a fact I do not mean a particular existing thing, such as Socrates or the rain or the sun. Socrates himself does not render any statement true or false. You might be inclined to suppose that all by himself he would give truth to the statement ‘Socrates existed’, but as a matter of fact that is a mistake." But in 1919, he identified facts with objects. "I mean by ‘fact’ anything complex. If the world contains no simples, then whatever it contains is a fact; if it contains any simples, then facts are whatever it contains except simples... That Socrates was Greek, that he married Xantippe, that he died of drinking the hemlock, are facts that all have something in common, namely, that they are ‘about’ Socrates, who is accordingly said to be a constituent of each of them."
Facts, or objects, are opposed to beliefs, which are "subjective" and may be errors on the part of the subject, the knower who is their source and who is certain of himself and little else. All doubt implies the possibility of error and therefore admits the distinction between subjectivity and objectivity. The knower is limited in ability to tell fact from belief, false from true objects and engages in reality testing, an activity that will result in more or less certainty regarding the reality of the object. According to Russell, "we need a description of the fact which would make a given belief true" where "Truth is a property of beliefs." Knowledge is "true beliefs".
In contemporary analytic philosophy, the issue of subject—and more specifically the "point of view" of the subject, or "subjectivity"—has received attention as one of the major intractable problems in philosophy of mind (a related issue being the mind–body problem). In the essay "What Is It Like to Be a Bat?", Thomas Nagel famously argued that explaining subjective experience—the "what it is like" to be something—is currently beyond the reach of scientific inquiry, because scientific understanding by definition requires an objective perspective, which, according to Nagel, is diametrically opposed to the subjective first-person point of view. Furthermore, one cannot define objectivity without reference to subjectivity in the first place, since the two notions are mutually dependent and interlocked.
In Nagel's book The View from Nowhere, he asks: "What kind of fact is it that I am Thomas Nagel?". Subjects have a perspective but each subject has a unique perspective and this seems to be a fact in Nagel's view from nowhere (i.e. the birds-eye view of the objective description in the universe). The Indian view of "Brahman" suggests that the ultimate and fundamental subject is existence itself, through which each of us as it were "looks out" as an aspect of a frozen and timeless everything, experienced subjectively due to our separated sensory and memory apparatuses. These additional features of subjective experience are often referred to as qualia (see Frank Cameron Jackson and Mary's room).
In other disciplines
Physics
Limiting discussions of objecthood to the realm of physical objects may simplify them. However, defining physical objects in terms of fundamental particles (e.g. quarks) leaves open the question of what is the nature of a fundamental particle and thus asks what categories of being can be used to explain physical objects.
Semantics
Symbols represent objects; how they do so, the map–territory relation, is the basic problem of semantics.
See also
Abstract object theory
Abstraction
Binding problem
Category theory
Cognitive linguistics
Concept
Continuous predicate
Donald Davidson's swamp man thought experiment (in his 1987 paper "Knowing One's Own Mind")
Ethics and meta-ethics
Hypostasis (philosophy and religion)
Hypostatic abstraction
List of ethics topics
Michel Foucault's critique of the subject and the oxymoron "historical subject"
Moral relativism
Neo-Kantianism
Nonexistent object
Noumenon and phenomenon
Object-oriented ontology
Observer (physics)
Open individualism
Paramatma
Personhood theory
Ship of Theseus
Sign relation
Soul
Subject (grammar)
Subjectivity and objectivity (philosophy)
Transcendental subject
Vertiginous question
References
Bibliography
Alain de Libera, "When Did the Modern Subject Emerge?", American Catholic Philosophical Quarterly, Vol. 82, No. 2, 2008, pp. 181–220.
Robert B. Pippin, The Persistence of Subjectivity. On the Kantian Aftermath, Cambridge: Cambridge University Press, 2005.
Udo Thiel, The Early Modern Subject. Self-Consciousness and Personal Identity from Descartes to Hume, New York: Oxford University Press, 2011.
External links
Concepts in metaphysics
Consciousness
Subjective experience
Concepts in epistemology
Ontology
Physical objects | Subject and object (philosophy) | [
"Physics"
] | 3,357 | [
"Physical objects",
"Matter"
] |
47,403 | https://en.wikipedia.org/wiki/Instrumentation | Instrumentation is a collective term for measuring instruments, used for indicating, measuring, and recording physical quantities. It is also a field of study concerning the art and science of making measurement instruments, involving the related areas of metrology, automation, and control theory. The term has its origins in the art and science of scientific instrument-making.
Instrumentation can refer to devices as simple as direct-reading thermometers, or as complex as multi-sensor components of industrial control systems. Instruments can be found in laboratories, refineries, factories and vehicles, as well as in everyday household use (e.g., smoke detectors and thermostats).
Measurement parameters
Instrumentation is used to measure many parameters (physical values), including:
Pressure, either differential or static
Flow
Temperature
Levels of liquids, etc.
Moisture or humidity
Density
Viscosity
Ionising radiation
Frequency
Current
Voltage
Inductance
Capacitance
Resistivity
Chemical composition
Chemical properties
Toxic gases
Position
Vibration
Weight
History
The history of instrumentation can be divided into several phases.
Pre-industrial
Elements of industrial instrumentation have long histories. Scales for comparing weights and simple pointers to indicate position are ancient technologies. Some of the earliest measurements were of time. One of the oldest water clocks was found in the tomb of the ancient Egyptian pharaoh Amenhotep I, buried around 1500 BCE. Improvements were incorporated in such clocks; by 270 BCE they included the rudiments of an automatic control device.
In 1663 Christopher Wren presented the Royal Society with a design for a "weather clock". A drawing shows meteorological sensors moving pens over paper driven by clockwork. Such devices did not become standard in meteorology for two centuries. The concept has remained virtually unchanged as evidenced by pneumatic chart recorders, where a pressurized bellows displaces a pen. Integrating sensors, displays, recorders, and controls was uncommon until the industrial revolution, limited by both need and practicality.
Early industrial
Early systems used direct process connections to local control panels for control and indication, which from the early 1930s saw the introduction of pneumatic transmitters and automatic 3-term (PID) controllers.
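The "3-term" control law mentioned above combines proportional, integral, and derivative action on the error between a setpoint and the measured process variable. The following Python sketch is a minimal, illustrative discrete-time PID update, not any particular controller's implementation; the gains and loop period in the example are assumed values:

```python
# Minimal discrete-time PID controller (illustrative sketch, not a vendor algorithm).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running sum of error (I-term state)
        self.prev_error = 0.0    # last error (for the D term)

    def update(self, setpoint, measurement):
        """Return the controller output for one sample period."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example with assumed tuning: drive a process toward a setpoint of 150.
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
print(controller.update(setpoint=150.0, measurement=142.0))
```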
The ranges of pneumatic transmitters were defined by the need to control valves and actuators in the field. A signal of 3 to 15 psi (20 to 100 kPa, or 0.2 to 1.0 kg/cm²) became the standard, with 6 to 30 psi occasionally being used for larger valves.
The transistor was commercialized by the mid-1950s, and transistor electronics enabled wiring to replace pipes, initially with a range of 20 to 100 mA at up to 90 V for loop-powered devices, reducing to 4 to 20 mA at 12 to 24 V in more modern systems. A transmitter is a device that produces an output signal, often in the form of a 4–20 mA electrical current signal, although many other options using voltage, frequency, pressure, or Ethernet are possible.
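A live-zero signal such as 4–20 mA maps linearly onto the transmitter's calibrated range: 4 mA represents the lower range value and 20 mA the upper, so a dead loop (near 0 mA) is distinguishable from a genuine zero reading. A minimal Python sketch of this scaling follows; the 0–200 °C calibration and the fault-tolerance band are illustrative assumptions, not a standard:

```python
def current_to_engineering(current_ma, low, high):
    """Scale a 4-20 mA loop current linearly to engineering units.

    4 mA maps to `low` and 20 mA maps to `high`; the live zero means a
    broken loop (near 0 mA) reads as a fault rather than as a valid value.
    """
    if not 3.8 <= current_ma <= 20.5:  # tolerance band is an assumed example
        raise ValueError(f"loop current {current_ma} mA out of range: possible fault")
    return low + (current_ma - 4.0) * (high - low) / 16.0

# Example: a transmitter calibrated 0-200 degrees C reading 12 mA -> 100.0
print(current_to_engineering(12.0, low=0.0, high=200.0))
```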
Instruments attached to a control system provided signals used to operate solenoids, valves, regulators, circuit breakers, relays and other devices. Such devices could control a desired output variable, and provide either remote monitoring or automated control capabilities.
Each instrument company introduced its own standard instrumentation signal, causing confusion until the 4–20 mA range was adopted as the standard electronic instrument signal for transmitters and valves. This signal was eventually standardized as ANSI/ISA S50, "Compatibility of Analog Signals for Electronic Industrial Process Instruments", in the 1970s. The transformation of instrumentation from mechanical pneumatic transmitters, controllers, and valves to electronic instruments reduced maintenance costs, as electronic instruments were more dependable than mechanical ones. It also increased efficiency and production due to their increased accuracy. Pneumatics retained some advantages, being favored in corrosive and explosive atmospheres.
Automatic process control
In the early years of process control, process indicators and control elements such as valves were monitored by an operator, who walked around the unit adjusting the valves to obtain the desired temperatures, pressures, and flows. As technology evolved, pneumatic controllers were invented and mounted in the field that monitored the process and controlled the valves, reducing the amount of time process operators needed to monitor the process. In later years the actual controllers were moved to a central room; signals were sent into the control room to monitor the process, and output signals were sent to the final control element, such as a valve, to adjust the process as needed. These controllers and indicators were mounted on a wall called a control board. The operators stood in front of this board, walking back and forth to monitor the process indicators. This again reduced the number of process operators needed and the time they spent walking around the units. The most common pneumatic signal level used during these years was 3–15 psig.
Large integrated computer-based systems
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However, this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently staffed central control room. Effectively this was the centralization of all the localized panels, with the advantages of lower manning levels and easy overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant.
However, whilst providing a central control focus, this arrangement was inflexible, as each control loop had its own controller hardware and continual operator movement within the control room was required to view different parts of the process. With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and communicate with graphic displays in the control room or rooms. The concept of distributed control was born.
The introduction of DCSs and SCADA allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
Application
In some cases, the sensor is a very minor element of the mechanism. Digital cameras and wristwatches might technically meet the loose definition of instrumentation because they record and/or display sensed information. Under most circumstances neither would be called instrumentation, but when used to measure the elapsed time of a race and to document the winner at the finish line, both would be called instrumentation.
Household
A very simple example of an instrumentation system is a mechanical thermostat, used to control a household furnace and thus to control room temperature. A typical unit senses temperature with a bi-metallic strip. It displays temperature by a needle on the free end of the strip. It activates the furnace by a mercury switch. As the switch is rotated by the strip, the mercury makes physical (and thus electrical) contact between electrodes.
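The on/off behaviour of such a thermostat is bang-bang control with hysteresis: the furnace switches on below one threshold and off above another, so it does not chatter around the setpoint. A minimal Python sketch of that logic; the setpoint and deadband values are assumed for illustration:

```python
def thermostat_step(temperature, furnace_on, setpoint=20.0, deadband=0.5):
    """Bang-bang thermostat with hysteresis (setpoint/deadband are example values).

    The furnace turns on below setpoint - deadband, off above
    setpoint + deadband, and holds its previous state in between.
    """
    if temperature < setpoint - deadband:
        return True       # too cold: switch the furnace on
    if temperature > setpoint + deadband:
        return False      # warm enough: switch it off
    return furnace_on     # inside the deadband: keep the last state

# Example: at 19.2 degrees with the furnace off, the furnace switches on.
print(thermostat_step(19.2, furnace_on=False))  # True
```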
Another example of an instrumentation system is a home security system. Such a system consists of
sensors (motion detection, switches to detect door openings), simple algorithms to detect intrusion, local control (arm/disarm) and remote monitoring of the system so that the police can be summoned. Communication is an inherent part of the design.
Kitchen appliances use sensors for control.
A refrigerator maintains a constant temperature by actuating the cooling system when the temperature becomes too high.
An automatic ice machine makes ice until a limit switch is thrown.
Pop-up bread toasters allow the time to be set.
Non-electronic gas ovens regulate temperature with a thermostat controlling the flow of gas to the burner. These may feature a sensor bulb sited within the main chamber of the oven. In addition, there may be a safety cut-off flame supervision device: after ignition, the burner's control knob must be held for a short time so that a sensor becomes hot and permits the flow of gas to the burner. If the safety sensor becomes cold, this may indicate that the flame on the burner has been extinguished, and the flow is stopped to prevent a continuous leak of gas.
Electric ovens use a temperature sensor and will turn on heating elements when the temperature is too low. More advanced ovens will actuate fans in response to temperature sensors, to distribute heat or to cool.
A common toilet refills the water tank until a float closes the valve. The float is acting as a water level sensor.
Automotive
Modern automobiles have complex instrumentation. In addition to displays of engine rotational speed and vehicle linear speed, there are also displays of battery voltage and current, fluid levels, fluid temperatures, distance traveled, and feedback of various controls (turn signals, parking brake, headlights, transmission position). Cautions may be displayed for special problems (fuel low, check engine, tire pressure low, door ajar, seat belt unfastened). Problems are recorded so they can be reported to diagnostic equipment. Navigation systems can provide voice commands to reach a destination. Automotive instrumentation must be cheap and reliable over long periods in harsh environments. There may be independent airbag systems that contain sensors, logic and actuators. Anti-skid braking systems use sensors to control the brakes, while cruise control affects throttle position. A wide variety of services can be provided via communication links on the OnStar system. Autonomous cars (with exotic instrumentation) have been shown.
Aircraft
Early aircraft had a few sensors. "Steam gauges" converted air pressures into needle deflections that could be interpreted as altitude and airspeed. A magnetic compass provided a sense of direction. The displays to the pilot were as critical as the measurements.
A modern aircraft has a far more sophisticated suite of sensors and displays, which are embedded into avionics systems. The aircraft may contain inertial navigation systems, global positioning systems, weather radar, autopilots, and aircraft stabilization systems. Redundant sensors are used for reliability. A subset of the information may be transferred to a crash recorder to aid mishap investigations. Modern pilot displays now include computer displays including head-up displays.
Air traffic control radar is a distributed instrumentation system. The ground part sends an electromagnetic pulse and receives an echo (at least). Aircraft carry transponders that transmit codes on reception of the pulse. The system displays an aircraft map location, an identifier and optionally altitude. The map location is based on sensed antenna direction and sensed time delay. The other information is embedded in the transponder transmission.
Laboratory instrumentation
Among the possible uses of the term is a collection of laboratory test equipment controlled by a computer through an IEEE-488 bus (also known as GPIB, the General Purpose Interface Bus, or HP-IB, the Hewlett-Packard Interface Bus). Laboratory equipment is available to measure many electrical and chemical quantities. Such a collection of equipment might be used to automate the testing of drinking water for pollutants.
Instrumentation engineering
Instrumentation engineering is the engineering specialization focused on the principle and operation of measuring instruments that are used in design and configuration of automated systems in areas such as electrical and pneumatic domains, and the control of quantities being measured.
They typically work for industries with automated processes, such as chemical or manufacturing plants, with the goal of improving system productivity, reliability, safety, optimization and stability.
To control the parameters of a process or system, devices such as microprocessors, microcontrollers or PLCs are used; whatever the device, the ultimate aim is to control the parameters of the system.
Instrumentation engineering is loosely defined because the required tasks are very domain dependent. An expert in the biomedical instrumentation of laboratory rats has very different concerns than the expert in rocket instrumentation. Common concerns of both are the selection of appropriate sensors based on size, weight, cost, reliability, accuracy, longevity, environmental robustness, and frequency response. Some sensors are literally fired in artillery shells. Others sense thermonuclear explosions until destroyed. Invariably sensor data must be recorded, transmitted or displayed. Recording rates and capacities vary enormously. Transmission can be trivial or can be clandestine, encrypted and low power in the presence of jamming. Displays can be trivially simple or can require consultation with human factors experts. Control system design varies from trivial to a separate specialty.
Instrumentation engineers are responsible for integrating the sensors with the recorders, transmitters, displays or control systems, and producing the Piping and instrumentation diagram for the process. They may design or specify installation, wiring and signal conditioning. They may be responsible for commissioning, calibration, testing and maintenance of the system.
In a research environment it is common for subject matter experts to have substantial instrumentation system expertise. An astronomer knows the structure of the universe and a great deal about telescopes – optics, pointing and cameras (or other sensing elements). That often includes the hard-won knowledge of the operational procedures that provide the best results. For example, an astronomer is often knowledgeable of techniques to minimize temperature gradients that cause air turbulence within the telescope.
Instrumentation technologists, technicians and mechanics specialize in troubleshooting, repairing and maintaining instruments and instrumentation systems.
Typical industrial transmitter signal types
Pneumatic loop (20–100 kPa / 3–15 psi) – Pneumatic
Current loop (4–20 mA) – Electrical
HART – Data signalling, often overlaid on a current loop
Foundation Fieldbus – Data signalling
Profibus – Data signalling
Impact of modern development
Ralph Müller (1940) stated, "That the history of physical science is largely the history of instruments and their intelligent use is well known. The broad generalizations and theories which have arisen from time to time have stood or fallen on the basis of accurate measurement, and in several instances new instruments have had to be devised for the purpose. There is little evidence to show that the mind of modern man is superior to that of the ancients. His tools are incomparably better."
Davis Baird has argued that the major change associated with Floris Cohen's identification of a "fourth big scientific revolution" after World War II is the development of scientific instrumentation, not only in chemistry but across the sciences. In chemistry, the introduction of new instrumentation in the 1940s was "nothing less than a scientific and technological revolution" in which classical wet-and-dry methods of structural organic chemistry were discarded, and new areas of research opened up.
As early as 1954, W. A. Wildhack discussed both the productive and destructive potential inherent in process control.
The ability to make precise, verifiable and reproducible measurements of the natural world, at levels that were not previously observable, using scientific instrumentation, has "provided a different texture of the world". This instrumentation revolution fundamentally changes human abilities to monitor and respond, as is illustrated in the examples of DDT monitoring and the use of UV spectrophotometry and gas chromatography to monitor water pollutants.
See also
Industrial control system
Instrumentation and control engineering
Instrumentation in petrochemical industries
Institute of Measurement and Control
International Society of Automation
List of sensors
Measurement
Medical instrumentation
Metrology
Piping and instrumentation diagram – a diagram in the process industry which shows the piping of the process flow together with the installed equipment and instrumentation.
Programmable logic controller
Timeline of temperature and pressure measurement technology
References
External links
Control engineering
Industrial automation
Sensors
Measuring instruments | Instrumentation | [
"Technology",
"Engineering"
] | 3,173 | [
"Industrial engineering",
"Measuring instruments",
"Automation",
"Control engineering",
"Sensors",
"Industrial automation"
] |
47,433 | https://en.wikipedia.org/wiki/Atlas%20%28architecture%29 | In European architectural sculpture, an atlas (also known as an atlant, or atlante or atlantid; plural atlantes)<ref name="atlex">[http://www.artlex.com/ArtLex/Aru.html Aru-Az] , Michael Delahunt, ArtLex Art Dictionary , 1996–2008.</ref> is a support sculpted in the form of a man, which may take the place of a column, a pier or a pilaster. The Roman term for such a sculptural support is telamon''' (plural telamones or telamons).
The term atlantes is the Greek plural of the name Atlas—the Titan who was forced to hold the sky on his shoulders for eternity. The alternative term, telamones, is derived from a later mythological hero, Telamon, one of the Argonauts, who was the father of Ajax.
The caryatid is the female precursor of this architectural form in Greece, a woman standing in the place of each column or pillar. Caryatids are found at the treasuries at Delphi and the Erechtheion on the Acropolis at Athens for Athene. They are usually found in an Ionic context and represented a ritual association with the goddesses worshiped within. The atlante is typically life-size or larger; smaller similar figures in the decorative arts are called terms. The body of many atlantes turns into a rectangular pillar or other architectural feature around waist level, a feature borrowed from the term. The pose and expression of atlantes very often show their effort to bear the heavy load of the building, which is rarely the case with terms and caryatids. The herma or herm is a classical boundary marker or wayside monument to a god, usually a square pillar with only a carved head on top, about life-size, and male genitals at the appropriate mid-point. Figures that are rightly called atlantes may sometimes be described as herms.
Atlantes express extreme effort in their function, heads bent forward to support the weight of the structure above them across their shoulders, forearms often lifted to provide additional support, providing an architectural motif. Atlantes and caryatids were noted by the Roman late Republican architect Vitruvius, whose description of the structures, rather than surviving examples, transmitted the idea of atlantes to the Renaissance architectural vocabulary.
Origin
Not only did caryatids precede them, but similar architectural figures had already been made in ancient Egypt out of monoliths. Atlantes originated in Greek Sicily and in Magna Graecia, Southern Italy. The earliest surviving atlantes are fallen ones from the Early Classical Greek temple of Zeus, the Olympeion, in Agrigento, Sicily. Atlantes also played a significant role in Mannerist and Baroque architecture.
During the eighteenth and nineteenth centuries, the designs of many buildings featured glorious atlantes that looked much like Greek originals. Their inclusion in the final design for the portico of the Hermitage Museum in St. Petersburg, built for Tsar Nicholas I of Russia in the 1840s, made the use of atlantes especially fashionable. The Hermitage portico incorporates ten enormous atlantes, approximately three times life-size, carved from Serdobol granite, which were designed by Johann Halbig and executed by the sculptor Alexander Terebenev.
Mesoamerica
Similar carved stone columns or pillars in the shape of fierce men at some sites of Pre-Columbian Mesoamerica are typically called Atlantean figures. These figures are considered to be "massive statues of Toltec warriors".
Examples
Basilica di Santa Croce, Lecce, Italy
Casa degli Omenoni, Milan, Italy
Church of St. Georg, Hamburg, Germany
Dům U Čtyř mamlasů, Brno, Czech Republic
Hermitage Museum, St. Petersburg, Russia
House in Kanałowa Str. 17, Poznań, Poland
Palazzo Davia Bargellini, Bologna, Italy
Pavilion Vendôme, Aix-en-Provence, France
Porta Nuova, Palermo, Italy
Sanssouci, Potsdam, Germany
Sunshine Marketplace, Victoria, Australia
Temple of Olympian Zeus, Valle dei Templi, Agrigento, Italy
Tyszkiewicz Palace, Warsaw, Poland
Zwinger Palace, Germany
Wayne County Courthouse, Wooster, Ohio, United States
Gallery
See also
Telamon
References
Bibliography
Columns and entablature
Architectural sculpture
Architectural history
Ancient Greek architecture
Ancient Roman architecture
Atlas (mythology)
Sculptures of Greek gods | Atlas (architecture) | [
"Technology",
"Engineering"
] | 964 | [
"Structural system",
"Architectural history",
"Columns and entablature",
"Architecture"
] |
47,436 | https://en.wikipedia.org/wiki/Atlas%20%28topology%29 | In mathematics, particularly topology, an atlas is a concept used to describe a manifold. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fiber bundles.
Charts
The definition of an atlas depends on the notion of a chart. A chart for a topological space M is a homeomorphism $\varphi$ from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair $(U, \varphi)$.
When a coordinate system is chosen in the Euclidean space, this defines coordinates on $U$: the coordinates of a point $P$ of $U$ are defined as the coordinates of $\varphi(P)$. The pair formed by a chart and such a coordinate system is called a local coordinate system, coordinate chart, coordinate patch, coordinate map, or local frame.
Formal definition of atlas
An atlas for a topological space $M$ is an indexed family $\{(U_\alpha, \varphi_\alpha) : \alpha \in I\}$ of charts on $M$ which covers $M$ (that is, $\bigcup_{\alpha \in I} U_\alpha = M$). If for some fixed $n$ the image of each chart is an open subset of $n$-dimensional Euclidean space, then $M$ is said to be an $n$-dimensional manifold.
The plural of atlas is atlases, although some authors use atlantes.
An atlas $\mathcal{A}$ on an $n$-dimensional manifold $M$ is called an adequate atlas if the following conditions hold:
The image of each chart is either $\mathbb{R}^n$ or $\mathbb{R}^n_+$, where $\mathbb{R}^n_+$ is the closed half-space,
$(U_i)_{i \in I}$ is a locally finite open cover of $M$, and
$M = \bigcup_{i \in I} \varphi_i^{-1}(B_1)$, where $B_1$ is the open ball of radius 1 centered at the origin.
Every second-countable manifold admits an adequate atlas. Moreover, if is an open covering of the second-countable manifold , then there is an adequate atlas on , such that is a refinement of .
Transition maps
A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. (For example, if we have a chart of Europe and a chart of Russia, then we can compare these two charts on their overlap, namely the European part of Russia.)
To be more precise, suppose that $(U_\alpha, \varphi_\alpha)$ and $(U_\beta, \varphi_\beta)$ are two charts for a manifold $M$ such that $U_\alpha \cap U_\beta$ is non-empty.
The transition map $\tau_{\alpha,\beta} : \varphi_\alpha(U_\alpha \cap U_\beta) \to \varphi_\beta(U_\alpha \cap U_\beta)$ is the map defined by $\tau_{\alpha,\beta} = \varphi_\beta \circ \varphi_\alpha^{-1}$.
Note that since $\varphi_\alpha$ and $\varphi_\beta$ are both homeomorphisms, the transition map $\tau_{\alpha,\beta}$ is also a homeomorphism.
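As a concrete illustration (an added example, not part of the original article), take two global charts on the real line. One transition map is smooth, but the other is merely continuous, which is precisely the distinction drawn in the next section between topological and smooth atlases:

```latex
% Two global charts on M = R and their transition maps (illustrative example).
\[
\varphi_1 : \mathbb{R} \to \mathbb{R},\ \varphi_1(x) = x,
\qquad
\varphi_2 : \mathbb{R} \to \mathbb{R},\ \varphi_2(x) = x^{3}.
\]
\[
\tau_{1,2} = \varphi_2 \circ \varphi_1^{-1},\quad \tau_{1,2}(y) = y^{3},
\qquad
\tau_{2,1} = \varphi_1 \circ \varphi_2^{-1},\quad \tau_{2,1}(y) = y^{1/3}.
\]
% Both transition maps are homeomorphisms of R, but \tau_{2,1} is not
% differentiable at 0, so \{(\mathbb{R},\varphi_1), (\mathbb{R},\varphi_2)\}
% is a topological atlas that is not a smooth atlas.
```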
More structure
One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary to construct an atlas whose transition functions are differentiable. Such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors and then directional derivatives.
If each transition function is a smooth map, then the atlas is called a smooth atlas, and the manifold itself is called smooth. Alternatively, one could require that the transition maps have only $k$ continuous derivatives, in which case the atlas is said to be $C^k$.
Very generally, if each transition function belongs to a pseudogroup $\mathcal{G}$ of homeomorphisms of Euclidean space, then the atlas is called a $\mathcal{G}$-atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle.
See also
Smooth atlas
Smooth frame
References
, Chapter 5 "Local coordinate description of fibre bundles".
External links
Atlas by Rowland, Todd
Manifolds | Atlas (topology) | [
"Mathematics"
] | 714 | [
"Topological spaces",
"Manifolds",
"Topology",
"Space (mathematics)"
] |
47,454 | https://en.wikipedia.org/wiki/Stratosphere | The stratosphere () is the second-lowest layer of the atmosphere of Earth, located above the troposphere and below the mesosphere. The stratosphere is composed of stratified temperature zones, with the warmer layers of air located higher (closer to outer space) and the cooler layers lower (closer to the planetary surface of the Earth). The increase of temperature with altitude is a result of the absorption of the Sun's ultraviolet (UV) radiation by the ozone layer, where ozone is exothermically photolyzed into oxygen in a cyclical fashion. This temperature inversion is in contrast to the troposphere, where temperature decreases with altitude, and between the troposphere and stratosphere is the tropopause border that demarcates the beginning of the temperature inversion.
Near the equator, the lower edge of the stratosphere is as high as 20 km (66,000 ft), at mid-latitudes around 10 km (33,000 ft), and at the poles about 7 km (23,000 ft). Temperatures range from an average of −51 °C (−60 °F) near the tropopause to an average of −15 °C (5 °F) near the mesosphere. Stratospheric temperatures also vary within the stratosphere as the seasons change, reaching particularly low temperatures in the polar night (winter). Winds in the stratosphere can far exceed those in the troposphere, reaching near 60 m/s (220 km/h) in the Southern polar vortex.
Discovery
In 1902, Léon Teisserenc de Bort from France and Richard Assmann from Germany, in separate but coordinated publications and following years of observations, published the discovery of an isothermal layer at around 11–14 km (6.8–8.7 mi), which is the base of the lower stratosphere. This was based on temperature profiles from mostly unmanned and a few manned instrumented balloons.
Ozone layer
The mechanism describing the formation of the ozone layer was described by British mathematician and geophysicist Sydney Chapman in 1930, and is known as the Chapman cycle or ozone–oxygen cycle. Molecular oxygen absorbs high energy sunlight in the UV-C region, at wavelengths shorter than about 240 nm. Radicals produced from the homolytically split oxygen molecules combine with molecular oxygen to form ozone. Ozone in turn is photolysed much more rapidly than molecular oxygen as it has a stronger absorption that occurs at longer wavelengths, where the solar emission is more intense. Ozone (O3) photolysis produces O and O2. The oxygen atom product combines with atmospheric molecular oxygen to reform O3, releasing heat. The rapid photolysis and reformation of ozone heat the stratosphere, resulting in a temperature inversion. This increase of temperature with altitude is characteristic of the stratosphere; its resistance to vertical mixing means that it is stratified. Within the stratosphere temperatures increase with altitude (see temperature inversion); the top of the stratosphere has a temperature of about 270 K (−3°C or 26.6°F).
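For reference, the Chapman cycle sketched above can be written as four reactions; the second and third steps form the rapid photolysis/reformation loop that heats the stratosphere, while the last step is the (slow) loss process. This is the standard textbook form of the cycle, not a quotation from the article:

```latex
% The ozone-oxygen (Chapman) cycle in its standard textbook form.
% M denotes any third molecule (usually N2 or O2) that carries away excess energy.
\begin{align*}
\mathrm{O_2} + h\nu\ (\lambda < 240~\mathrm{nm}) &\longrightarrow \mathrm{O} + \mathrm{O}\\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\longrightarrow \mathrm{O_3} + \mathrm{M}\\
\mathrm{O_3} + h\nu &\longrightarrow \mathrm{O_2} + \mathrm{O}\\
\mathrm{O} + \mathrm{O_3} &\longrightarrow 2\,\mathrm{O_2}
\end{align*}
```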
This vertical stratification, with warmer layers above and cooler layers below, makes the stratosphere dynamically stable: there is no regular convection and associated turbulence in this part of the atmosphere. However, exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops in severe supercell thunderstorms, may carry convection into the stratosphere on a very local and temporary basis. Overall, the attenuation by the ozone layer of solar UV at wavelengths that damage DNA allows life to exist on the surface of the planet outside of the ocean. All air entering the stratosphere must pass through the tropopause, the temperature minimum that divides the troposphere and stratosphere. The rising air is literally freeze-dried; the stratosphere is a very dry place. The top of the stratosphere is called the stratopause, above which the temperature decreases with height.
Formation and destruction
Sydney Chapman gave a correct description of the source of stratospheric ozone and its ability to generate heat within the stratosphere; he also wrote that ozone may be destroyed by reacting with atomic oxygen, making two molecules of molecular oxygen. We now know that there are additional ozone loss mechanisms and that these mechanisms are catalytic, meaning that a small amount of the catalyst can destroy a great number of ozone molecules. The first is due to the reaction of hydroxyl radicals (•OH) with ozone. •OH is formed by the reaction of electronically excited oxygen atoms, produced by ozone photolysis, with water vapor. While the stratosphere is dry, additional water vapor is produced in situ by the photochemical oxidation of methane (CH4). The HO2 radical produced by the reaction of OH with O3 is recycled to OH by reaction with oxygen atoms or ozone. In addition, solar proton events can significantly affect ozone levels via radiolysis with the subsequent formation of OH. Nitrous oxide (N2O) is produced by biological activity at the surface and is oxidised to NO in the stratosphere; the so-called NOx radical cycles also deplete stratospheric ozone. Finally, chlorofluorocarbon molecules are photolysed in the stratosphere, releasing chlorine atoms that react with ozone, giving ClO and O2. The chlorine atoms are recycled when ClO reacts with O in the upper stratosphere, or when ClO reacts with itself in the chemistry of the Antarctic ozone hole.
Paul J. Crutzen, Mario J. Molina and F. Sherwood Rowland were awarded the Nobel Prize in Chemistry in 1995 for their work describing the formation and decomposition of stratospheric ozone.
Aircraft flight
Commercial airliners typically cruise at altitudes of 9–12 km (30,000–39,000 ft), which is in the lower reaches of the stratosphere in temperate latitudes. This optimizes fuel efficiency, mostly due to the low temperatures encountered near the tropopause and low air density, reducing parasitic drag on the airframe. Stated another way, it allows the airliner to fly faster while maintaining lift equal to the weight of the plane. (The fuel consumption depends on the drag, which is related to the lift by the lift-to-drag ratio.) It also allows the airplane to stay above the turbulent weather of the troposphere.
The Concorde cruised at Mach 2 at about 18,000 m (60,000 ft), and the SR-71 cruised at Mach 3 at 26,000 m (85,000 ft), both within the stratosphere.
Because the temperature in the tropopause and lower stratosphere is largely constant with increasing altitude, very little convection and its resultant turbulence occurs there. Most turbulence at this altitude is caused by variations in the jet stream and other local wind shears, although areas of significant convective activity (thunderstorms) in the troposphere below may produce turbulence as a result of convective overshoot.
On October 24, 2014, Alan Eustace set the altitude record for a crewed balloon flight at 41,419 m (135,890 ft). Eustace also broke the world record for vertical speed skydiving, reaching a peak velocity of 1,321 km/h (822 mph) during a freefall lasting four minutes and 27 seconds.
Circulation and mixing
The stratosphere is a region of intense interactions among radiative, dynamical, and chemical processes, in which the horizontal mixing of gaseous components proceeds much more rapidly than does vertical mixing. The overall circulation of the stratosphere is termed as Brewer-Dobson circulation, which is a single-celled circulation, spanning from the tropics up to the poles, consisting of the tropical upwelling of air from the tropical troposphere and the extra-tropical downwelling of air. Stratospheric circulation is a predominantly wave-driven circulation in that the tropical upwelling is induced by the wave force by the westward propagating Rossby waves, in a phenomenon called Rossby-wave pumping.
An interesting feature of stratospheric circulation is the quasi-biennial oscillation (QBO) in the tropical latitudes, which is driven by gravity waves that are convectively generated in the troposphere. The QBO induces a secondary circulation that is important for the global stratospheric transport of tracers, such as ozone or water vapor.
Another large-scale feature that significantly influences stratospheric circulation is the breaking planetary waves resulting in intense quasi-horizontal mixing in the midlatitudes. This breaking is much more pronounced in the winter hemisphere where this region is called the surf zone. This breaking is caused due to a highly non-linear interaction between the vertically propagating planetary waves and the isolated high potential vorticity region known as the polar vortex. The resultant breaking causes large-scale mixing of air and other trace gases throughout the midlatitude surf zone. The timescale of this rapid mixing is much smaller than the much slower timescales of upwelling in the tropics and downwelling in the extratropics.
During northern hemispheric winters, sudden stratospheric warmings, caused by the absorption of Rossby waves in the stratosphere, can be observed in approximately half of winters when easterly winds develop in the stratosphere. These events often precede unusual winter weather and may even be responsible for the cold European winters of the 1960s.
Stratospheric warming of the polar vortex results in its weakening. When the vortex is strong, it keeps the cold, high-pressure air masses contained in the Arctic; when the vortex weakens, air masses move equatorward, resulting in rapid changes of weather in the mid-latitudes.
Upper-atmospheric lightning
Upper-atmospheric lightning is a family of short-lived electrical-breakdown phenomena that occur well above the altitudes of normal lightning and storm clouds. Upper-atmospheric lightning is believed to consist of electrically induced forms of luminous plasma. Lightning extending above the troposphere into the stratosphere is referred to as a blue jet, and lightning reaching into the mesosphere as a red sprite.
Life
Bacteria
Bacterial life survives in the stratosphere, making it a part of the biosphere. In 2001, dust was collected at a height of 41 kilometres in a high-altitude balloon experiment and was found to contain bacterial material when examined later in the laboratory.
Birds
Some bird species have been reported to fly at the upper levels of the troposphere. On November 29, 1973, a Rüppell's vulture (Gyps rueppelli) was ingested into a jet engine above the Ivory Coast. Bar-headed geese (Anser indicus) sometimes migrate over Mount Everest, whose summit is 8,848 m (29,029 ft).
See also
Le Grand Saut
Lockheed U-2
Overshooting top
Ozone depletion
Paris Gun (projectile was the first artificial object to reach the upper stratosphere)
Perlan Project
Project Excelsior, world record for highest recorded jump 1961-2012
Red Bull Stratos, world record for highest recorded jump 2012-2014
RQ-4 Global Hawk
Service ceiling
Upper-atmospheric lightning
References
External links
Current map of global winds and temperatures at the 10 hPa level.
Atmosphere
Atmosphere of Earth
Meteorological phenomena | Stratosphere | [
"Physics"
] | 2,304 | [
"Meteorological phenomena",
"Physical phenomena",
"Earth phenomena"
] |
47,473 | https://en.wikipedia.org/wiki/Algal%20bloom | An algal bloom or algae bloom is a rapid increase or accumulation in the population of algae in fresh water or marine water systems. It is often recognized by the discoloration in the water from the algae's pigments. The term algae encompasses many types of aquatic photosynthetic organisms, both macroscopic multicellular organisms like seaweed and microscopic unicellular organisms like cyanobacteria. Algal bloom commonly refers to the rapid growth of microscopic unicellular algae, not macroscopic algae. An example of a macroscopic algal bloom is a kelp forest.
Algal blooms are the result of a nutrient, like nitrogen or phosphorus from various sources (for example fertilizer runoff or other forms of nutrient pollution), entering the aquatic system and causing excessive growth of algae. An algal bloom affects the whole ecosystem.
Consequences range from the benign feeding of higher trophic levels to more harmful effects like blocking sunlight from reaching other organisms, causing a depletion of oxygen levels in the water, and, depending on the organism, secreting toxins into the water. Blooms that can injure animals or the ecology, especially those blooms where toxins are secreted by the algae, are usually called "harmful algal blooms" (HAB), and can lead to fish die-offs, cities cutting off water to residents, or states having to close fisheries. The process of the oversupply of nutrients leading to algae growth and oxygen depletion is called eutrophication.
Algal and bacterial blooms have persistently contributed to mass extinctions driven by global warming in the geologic past, such as the end-Permian extinction driven by Siberian Traps volcanism, and recurred during the biotic recovery following that mass extinction.
Characterization
The term algal bloom is defined inconsistently depending on the scientific field and can range from a "minibloom" of harmless algae to a large, harmful bloom event. Since algae is a broad term including organisms of widely varying sizes, growth rates, and nutrient requirements, there is no officially recognized threshold level as to what is defined as a bloom. Because there is no scientific consensus, blooms can be characterized and quantified in several ways: measurements of new algal biomass, the concentration of photosynthetic pigment, quantification of the bloom's negative effect, or relative concentration of the algae compared to the rest of the microbial community. For example, definitions of blooms have included when the concentration of chlorophyll exceeds 100 µg/L, when the concentration of chlorophyll exceeds 5 µg/L, when the species considered to be blooming exceeds concentrations of 1000 cells/mL, and when the algae species concentration simply deviates from its normal growth.
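Because the thresholds differ between definitions, any programmatic classification has to pick one convention. A minimal Python sketch using two of the published cut-offs quoted above; the function and its defaults are illustrative, not a standard:

```python
def is_bloom(chlorophyll_ug_per_l=None, cells_per_ml=None,
             chlorophyll_threshold=100.0, cell_threshold=1000.0):
    """Classify a water sample as a bloom using example thresholds.

    There is no officially recognized threshold; 100 ug/L chlorophyll and
    1000 cells/mL are just two of the published cut-offs mentioned above.
    """
    if chlorophyll_ug_per_l is not None and chlorophyll_ug_per_l > chlorophyll_threshold:
        return True
    if cells_per_ml is not None and cells_per_ml > cell_threshold:
        return True
    return False

# Example: 150 ug/L chlorophyll exceeds the 100 ug/L definition -> bloom.
print(is_bloom(chlorophyll_ug_per_l=150.0))  # True
```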
Blooms are the result of a nutrient needed by the particular algae being introduced to the local aquatic system. This growth-limiting nutrient is typically nitrogen or phosphorus, but can also be iron, vitamins, or amino acids. There are several mechanisms for the addition of these nutrients in water. In the open ocean and along coastlines, upwelling from both winds and topographical ocean floor features can draw nutrients to the photic, or sunlit zone of the ocean. Along coastal regions and in freshwater systems, agricultural, city, and sewage runoff can cause algal blooms.
Algal blooms, especially large algal bloom events, can reduce the transparency of the water and can discolor the water. The photosynthetic pigments in the algal cells, like chlorophyll and photoprotective pigments, determine the color of the algal bloom. Depending on the organism, its pigments, and the depth in the water column, algal blooms can be green, red, brown, golden, and purple. Bright green blooms in freshwater systems are frequently a result of cyanobacteria (colloquially known as "blue-green algae") such as Microcystis. Blooms may also consist of macroalgal (non-phytoplanktonic) species. These blooms are recognizable by large blades of algae that may wash up onto the shoreline.
Once the nutrient is present in the water, the algae begin to grow at a much faster rate than usual. In a mini bloom, this fast growth benefits the whole ecosystem by providing food and nutrients for other organisms.
Of particular note are the harmful algal blooms (HABs), which are algal bloom events involving toxic or otherwise harmful phytoplankton. Many species can cause harmful algal blooms. For example, Gymnodinium nagasakiense can cause harmful red tides, dinoflagellates Gonyaulax polygramma can cause oxygen depletion and result in large fish kills, cyanobacteria Microcystis aeruginosa can make poisonous toxins, and diatom Chaetoceros convolutus can damage fish gills.
Freshwater algal blooms
Freshwater algal blooms are the result of an excess of nutrients, particularly some phosphates. Excess nutrients may originate from fertilizers that are applied to land for agricultural or recreational purposes and may also originate from household cleaning products containing phosphorus.
The reduction of phosphorus inputs is required to mitigate blooms that contain cyanobacteria. In lakes that are stratified in the summer, autumn turnover can release substantial quantities of bio-available phosphorus potentially triggering algal blooms as soon as sufficient photosynthetic light is available. Excess nutrients can enter watersheds through water runoff. Excess carbon and nitrogen have also been suspected as causes. Presence of residual sodium carbonate acts as catalyst for the algae to bloom by providing dissolved carbon dioxide for enhanced photosynthesis in the presence of nutrients.
When phosphates are introduced into water systems, higher concentrations cause increased growth of algae and plants. Algae tend to grow very quickly under high nutrient availability, but each alga is short-lived, and the result is a high concentration of dead organic matter which starts to decompose. Natural decomposers present in the water begin decomposing the dead algae, consuming dissolved oxygen present in the water during the process. This can result in a sharp decrease in available dissolved oxygen for other aquatic life. Without sufficient dissolved oxygen in the water, animals and plants may die off in large numbers. This may also be known as a dead zone.
Blooms may be observed in freshwater aquariums when fish are overfed and excess nutrients are not absorbed by plants. These are generally harmful for fish, and the situation can be corrected by changing the water in the tank and then reducing the amount of food given.
Natural causes of algal blooms
Algal blooms in freshwater systems are not always caused by human contamination and have been observed to occur naturally in both eutrophic and oligotrophic lakes. Eutrophic lakes contain an abundance of nutrients such as nitrogen and phosphates, which increases the likelihood of blooms. Oligotrophic lakes contain little of these nutrients and are defined by various degrees of nutrient scarcity. The trophic state index (TSI) measures nutrients in freshwater systems, and a TSI under 30 defines oligotrophic waters. However, algal blooms have also been observed in oligotrophic bodies of water. This is a result of cyanobacteria, which cause blooms in both eutrophic and oligotrophic lakes despite the latter lacking natural and man-made nutrients.
Nutrient uptake and cyanobacteria
A cause for algal blooms in nutrient-lacking environments comes in the form of nutrient uptake. Cyanobacteria have evolved to have better nutrient uptake in oligotrophic waters. Cyanobacteria utilize nitrogen and phosphates in their biological processes. Because of this, cyanobacteria are known to be important in the nitrogen- and phosphate-fixing cycle in oligotrophic waters. Cyanobacteria can fix nitrogen by accessing atmospheric nitrogen (N2) that has been dissolved into the water and transforming it into nitrogen accessible to other organisms. This higher amount of nitrogen is then able to sustain large algae blooms in oligotrophic waters.
Cyanobacteria are able to retain high phosphorus uptake in the absence of nutrients, which helps their success in oligotrophic environments. Cyanobacteria species such as D. lemmermannii are able to move between the hypolimnion, which is rich in nutrients such as phosphates, and the nutrient-poor metalimnion, which lacks phosphates. This brings phosphates up to the metalimnion and gives organisms an abundance of phosphates, exacerbating the likelihood of algal blooms.
Upwelling of nutrients
Upwelling events happen when nutrients such as phosphates and nitrogen are moved from the nutrient-dense hypolimnion to the nutrient-poor metalimnion. This happens as a result of geological processes such as seasonal overturn, when lake surfaces freeze or melt and changing water densities mix the layers and redistribute nutrients through the system. This overabundance of nutrients leads to blooms.
Marine algal blooms
Turbulent storms churn the ocean in winter, adding nutrients to sunlit waters near the surface. This sparks a feeding frenzy each spring that gives rise to massive blooms of phytoplankton. Tiny molecules found inside these microscopic plants harvest vital energy from sunlight through photosynthesis. The natural pigments, called chlorophyll, allow phytoplankton to thrive in Earth's oceans and enable scientists to monitor blooms from space. Satellites reveal the location and abundance of phytoplankton by detecting the amount of chlorophyll present in coastal and open waters: the higher the concentration, the larger the bloom. Observations show blooms typically last until late spring or early summer, when nutrient stocks are in decline and predatory zooplankton start to graze. NASA SeaWiFS data have been used to map bloom populations in this way.
The NAAMES study conducted between 2015 and 2019 investigated aspects of phytoplankton dynamics in ocean ecosystems, and how such dynamics influence atmospheric aerosols, clouds, and climate.
In France, citizens are requested to report coloured waters through the project PHENOMER. This helps to understand the occurrence of marine blooms.
Wildfires can cause phytoplankton blooms via oceanic deposition of wildfire aerosols.
Harmful algal blooms
A harmful algal bloom (HAB) is an algal bloom that causes negative impacts on other organisms via production of natural toxins, mechanical damage to other organisms, or other means. The diversity of these HABs makes them even harder to manage, and they present many issues, especially to threatened coastal areas. HABs are often associated with large-scale marine mortality events and have been associated with various types of shellfish poisonings. Due to their negative economic and health impacts, HABs are often carefully monitored.
HABs have been shown to be harmful to humans. Humans may be exposed to toxic algae by directly consuming seafood containing toxins, by swimming or other activities in the water, and by breathing tiny droplets in the air that contain toxins. Because exposure can occur through seafood products containing the toxins expelled by HAB algae, food-borne diseases can affect the nervous, digestive, respiratory, hepatic, dermatological, and cardiac systems of the body. Beach users have often experienced upper respiratory problems, eye and nose irritation, and fever, and have often needed medical care. Ciguatera fish poisoning (CFP) is a very common consequence of exposure to algal blooms. Water-borne diseases are also present, as drinking water can be contaminated by cyanotoxins.
If the HAB event results in a high enough concentration of algae the water may become discoloured or murky, varying in colour from purple to almost pink, normally being red or green. Not all algal blooms are dense enough to cause water discolouration.
Bioluminescence
Dinoflagellates are microbial eukaryotes that link bioluminescence and toxin production in algal blooms. They use a luciferin–luciferase reaction to create a blue light emission glow. There are seventeen major types of dinoflagellate toxins, and strains producing saxitoxin and yessotoxin are both bioluminescent and toxic. These strains are found to occupy similar niches in coastal areas. A surplus of dinoflagellates creates a blue-green glow at night; in the day it presents as a red-brown color, which gives such algal blooms the name red tides. Dinoflagellates have been reported to cause seafood poisoning through their neurotoxins.
Management
There are three major categories for the management of algal blooms: mitigation, prevention, and control. Within mitigation, routine monitoring programs are implemented for toxins in shellfish together with overall surveillance of the area. The HAB levels of the shellfish are determined, and restrictions can be imposed to keep contaminated shellfish off the food market. Moving fish pens away from algal blooms is another form of mitigation. Prevention is less well understood, but policy changes are implemented to control sewage and waste. Within control, there are mechanical, biological, chemical, genetic and environmental controls. Mechanical control involves dispersing clay into the water to aggregate with the HAB, causing more of the bloom to undergo sedimentation. Biological control varies widely and can work through pheromones or the release of sterile males to reduce reproduction. Chemical control uses the release of toxic chemicals; however, it may cause mortality in non-target organisms. Genetic control involves genetically engineering species' environmental tolerances and reproduction processes, though this risks harming indigenous organisms. Environmental control can use water circulation and aeration.
Environmental impacts
Harmful algal blooms have a large effect on the Great Lakes–St. Lawrence River Basin. The presence of invasive zebra and quagga mussels is positively correlated with blooms: these mussels increase the cycling of phosphorus, which in turn increases harmful algal blooms in areas where they are present. Harmful algal blooms continue to affect water supplies in the binational Great Lakes Basin, and during the world's recovery from the COVID-19 pandemic, solving the issue has become a low priority. The problem has become politicized in the United States, and concern is similarly low in allied countries such as Canada.
Harmful algal blooms also have a substantial effect on marine life. For example, in August 2024 the growth of the toxic alga Pseudo-nitzschia along California coasts was making sea lions sick and aggressive toward beachgoers. Scientists say this is a seasonal occurrence. The growth of Pseudo-nitzschia leads to the production of domoic acid, which accumulates in prey such as sardines, anchovies, and squid. This directly affects the food web and the primary food sources of sea lions. Once the toxins are transferred via consumption, they can cause seizures, brain damage, and death in the animal. During this surge, people reported bites and unpredictable, aggressive behavior from the affected sea lions; in this sickened state, the sea lions are scared and act out of fear to protect themselves. Pregnant sea lions are most vulnerable to toxic algae poisoning and are more likely to die from its effects.
See also
Chironomus annularius – A species of nonbiting midges that act as a natural algae control.
Thin layers (oceanography)
References
External links
FAQ about Harmful Algal Blooms (NOAA)
Algal blooms
Algae
Aquatic ecology
Articles containing video clips
Biological oceanography
Environmental issues with water | Algal bloom | [
"Chemistry",
"Biology",
"Environmental_science"
] | 3,248 | [
"Algae",
"Water treatment",
"Water pollution",
"Water quality indicators",
"Ecosystems",
"Aquatic ecology",
"Algal blooms"
] |
47,474 | https://en.wikipedia.org/wiki/Aperture | In optics, the aperture of an optical system (including a system consisted of a single lens) is a hole or an opening that primarily limits light propagated through the system. More specifically, the entrance pupil as the front side image of the aperture and focal length of an optical system determine the cone angle of a bundle of rays that comes to a focus in the image plane.
An optical system typically has many openings or structures that limit its ray bundles (ray bundles are also known as pencils of light). These structures may be the edge of a lens or mirror, or a ring or other fixture that holds an optical element in place, or may be a special element such as a diaphragm placed in the optical path to limit the light admitted by the system. In general, these structures are called stops, and the aperture stop is the stop that primarily determines the cone of rays that an optical system accepts (see entrance pupil). As a result, it also determines the ray cone angle and brightness at the image point (see exit pupil). The aperture stop generally depends on the object point location: on-axis object points at different object planes may have different aperture stops, and even object points at different lateral locations in the same object plane may have different aperture stops (vignetting). In practice, many optical systems are designed to have a single aperture stop at the designed working distance and field of view.
In some contexts, especially in photography and astronomy, aperture refers to the opening diameter of the aperture stop through which light can pass. For example, in a telescope, the aperture stop is typically the edge of the objective lens or mirror (or of the mount that holds it). One then speaks of a telescope as having, for example, a 100-centimetre aperture. The aperture stop is not necessarily the smallest stop in the system. Magnification and demagnification by lenses and other elements can cause a relatively large stop to be the aperture stop for the system. In astrophotography, the aperture may be given as a linear measure (for example, in inches or millimetres) or as the dimensionless ratio between that measure and the focal length. In other photography, it is usually given as a ratio.
The term aperture usually refers to the opening of the aperture stop, but in practice the terms aperture and aperture stop are used interchangeably. Sometimes stops that are not the aperture stop of an optical system are also called apertures. Context is needed to disambiguate these terms.
The word aperture is also used in other contexts to indicate a system which blocks off light outside a certain region. In astronomy, for example, a photometric aperture around a star usually corresponds to a circular window around the image of the star, within which the light intensity is summed.
Application
The aperture stop is an important element in most optical designs. Its most obvious feature is that it limits the amount of light that can reach the image/film plane. This can be either unavoidable due to the practical limit of the aperture stop size, or deliberate to prevent saturation of a detector or overexposure of film. In both cases, the size of the aperture stop determines the amount of light admitted by an optical system. The aperture stop also affects other optical system properties:
The opening size of the stop is one factor that affects depth of field (DOF). A smaller stop (larger f-number) produces a longer DOF because it admits only a narrower cone of light to the image plane, so the spread of the image of an out-of-focus object point is reduced. A longer DOF allows objects at a wide range of distances from the viewer to all be in focus at the same time.
The stop limits the effect of optical aberrations by blocking light that would otherwise reach the edges of the optics, where aberrations are usually stronger than at the centers. If the opening of the stop (called the aperture) is too large, the image will be degraded by stronger aberrations. More sophisticated optical designs can mitigate the effect of aberrations, allowing a larger aperture and therefore greater light-collecting ability.
The stop determines whether the image will be vignetted. Larger stops can cause the light intensity reaching the film or detector to fall off toward the edges of the picture, especially when, for off-axis points, a different stop becomes the aperture stop by virtue of cutting off more light than did the stop that was the aperture stop on the optic axis.
The stop location determines telecentricity. If the aperture stop of a lens is located at the front focal plane of the lens, the system is image-space telecentric, i.e., the lateral size of the image is insensitive to the image-plane location. If the stop is at the back focal plane of the lens, the system is object-space telecentric, and the image size is insensitive to the object-plane location. Telecentricity helps precise two-dimensional measurement because telecentric measurement systems are insensitive to axial position errors of samples or of the sensor.
In addition to an aperture stop, a photographic lens may have one or more field stops, which limit the system's field of view. When the field of view is limited by a field stop in the lens (rather than at the film or sensor) vignetting results; this is only a problem if the resulting field of view is less than was desired.
In astronomy, the opening diameter of the aperture stop (called the aperture) is a critical parameter in the design of a telescope. Generally, one would want the aperture to be as large as possible, to collect the maximum amount of light from the distant objects being imaged. The size of the aperture is limited, however, in practice by considerations of its manufacturing cost and time and its weight, as well as prevention of aberrations (as mentioned above).
Apertures are also used in laser energy control, the closed-aperture z-scan technique, diffraction pattern generation, and beam cleaning. Laser applications include spatial filters, Q-switching, and high-intensity X-ray control.
In light microscopy, the word aperture may be used with reference to either the condenser (that changes the angle of light onto the specimen field), field iris (that changes the area of illumination on specimens) or possibly objective lens (forms primary images). See Optical microscope.
In photography
The aperture stop of a photographic lens can be adjusted to control the amount of light reaching the film or image sensor. In combination with variation of shutter speed, the aperture size will regulate the film's or image sensor's degree of exposure to light. Typically, a fast shutter will require a larger aperture to ensure sufficient light exposure, and a slow shutter will require a smaller aperture to avoid excessive exposure.
A device called a diaphragm usually serves as the aperture stop and controls the aperture (the opening of the aperture stop). The diaphragm functions much like the iris of the eye – it controls the effective diameter of the lens opening (called the pupil in the eye). Reducing the aperture size (increasing the f-number) provides less light to the sensor and also increases the depth of field (by limiting the angle of the cone of image light reaching the sensor), which describes the extent to which subject matter lying closer than or farther from the actual plane of focus appears to be in focus. In general, the smaller the aperture (the larger the f-number), the greater the distance from the plane of focus the subject matter may be while still appearing in focus.
The lens aperture is usually specified as an f-number, the ratio of focal length to effective aperture diameter (the diameter of the entrance pupil). A lens typically has a set of marked "f-stops" that the f-number can be set to. A lower f-number denotes a greater aperture which allows more light to reach the film or image sensor. The photography term "one f-stop" refers to a factor of √2 (approx. 1.41) change in f-number, which corresponds to a factor of √2 change in aperture diameter, which in turn corresponds to a factor of 2 change in light intensity (by a factor of 2 change in the aperture area).
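A small sketch of that f-stop arithmetic; the specific f-numbers are just standard full-stop markings chosen for illustration:

```python
import math

# Hedged sketch of the one-stop relationship: each full stop multiplies
# the f-number by sqrt(2), and admitted light scales as 1/N^2.

def stops_between(n1: float, n2: float) -> float:
    """Number of full stops between two f-numbers."""
    return 2.0 * math.log2(n2 / n1)

def light_ratio(n1: float, n2: float) -> float:
    """How much more light N=n1 admits than N=n2 (entrance-pupil area ratio)."""
    return (n2 / n1) ** 2

print(stops_between(1.4, 2.8))  # 2.0 stops apart
print(light_ratio(1.4, 2.8))    # f/1.4 admits ~4x the light of f/2.8
```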
Aperture priority is a semi-automatic shooting mode used in cameras. It permits the photographer to select an aperture setting and let the camera decide the shutter speed and sometimes also ISO sensitivity for the correct exposure. This is also referred to as Aperture Priority Auto Exposure, A mode, AV mode (aperture-value mode), or semi-auto mode.
Typical ranges of apertures used in photography are about f/2.8–f/22 or f/2–f/16, covering six stops, which may be divided into wide, middle, and narrow of two stops each, roughly (using round numbers) f/2–f/4, f/4–f/8, and f/8–f/16 or (for a slower lens) f/2.8–f/5.6, f/5.6–f/11, and f/11–f/22. These are not sharp divisions, and ranges for specific lenses vary.
Maximum and minimum apertures
The specifications for a given lens typically include the maximum and minimum aperture (opening) sizes, for example, f/0.95–f/22. In this case, f/0.95 is currently the maximum aperture (the widest opening on a full-frame format for practical use), and f/22 is the minimum aperture (the smallest opening). The maximum aperture tends to be of most interest and is always included when describing a lens. This value is also known as the lens "speed", as it affects the exposure time. As the aperture area is proportional to the light admitted by a lens or an optical system, the aperture diameter is proportional to the square root of the light admitted, and thus inversely proportional to the square root of required exposure time, such that an aperture of f/2 allows for exposure times one quarter that of f/4. (f/2 is 4 times larger than f/4 in aperture area.)
Lenses with apertures opening f/2.8 or wider are referred to as "fast" lenses, although the specific point has changed over time (for example, in the early 20th century aperture openings wider than f/6 were considered fast). The fastest lenses for the common 35 mm film format in general production have apertures of f/1.2 or f/1.4, with more at f/1.8 and f/2.0, and many at f/2.8 or slower; f/1.0 is unusual, though it sees some use. When comparing "fast" lenses, the image format used must be considered. Lenses designed for a small format such as half frame or APS-C need to project a much smaller image circle than a lens used for large format photography. Thus the optical elements built into the lens can be far smaller and cheaper.
In exceptional circumstances lenses can have even wider apertures with f-numbers smaller than 1.0; see lens speed: fast lenses for a detailed list. For instance, both the current Leica Noctilux-M 50mm ASPH and a 1960s-era Canon 50mm rangefinder lens have a maximum aperture of f/0.95. Cheaper alternatives began appearing in the early 2010s, such as the Cosina Voigtländer f/0.95 Nokton (several focal lengths) and f/0.8 Super Nokton manual focus lenses for the Micro Four-Thirds System, and the Venus Optics (Laowa) f/0.95 Argus.
Professional lenses for some movie cameras have f-numbers as small as f/0.75. Stanley Kubrick's film Barry Lyndon has scenes shot by candlelight with a NASA/Zeiss 50mm f/0.7, the fastest lens in film history. Beyond the expense, these lenses have limited application due to the correspondingly shallower depth of field (DOF) – the scene must either be shallow, shot from a distance, or will be significantly defocused, though this may be the desired effect.
Zoom lenses typically have a maximum relative aperture (minimum f-number) of f/2.8 to f/6.3 through their range. High-end lenses will have a constant aperture, such as f/2.8 or f/4, which means that the relative aperture will stay the same throughout the zoom range. A more typical consumer zoom will have a variable maximum relative aperture since it is harder and more expensive to keep the maximum relative aperture proportional to the focal length at long focal lengths; f/3.5 to f/5.6 is an example of a common variable aperture range in a consumer zoom lens.
By contrast, the minimum aperture does not depend on the focal length – it is limited by how narrowly the aperture closes, not the lens design – and is instead generally chosen based on practicality: very small apertures have lower sharpness due to diffraction at aperture edges, while the added depth of field is not generally useful, and thus there is generally little benefit in using such apertures. Accordingly, DSLR lenses typically have minimum apertures of f/16, f/22, or f/32, while large format may go down to f/64, as reflected in the name of Group f/64. Depth of field is a significant concern in macro photography, however, and there one sees smaller apertures. For example, the Canon MP-E 65mm can have an effective aperture (due to magnification) as small as f/96. The pinhole optic for Lensbaby creative lenses has an aperture of just f/177.
Aperture area
The amount of light captured by an optical system is proportional to the area of the entrance pupil, that is, the object-space image of the aperture of the system, equal to:

Area = π(D/2)² = π(f/(2N))²

where the two equivalent forms are related via the f-number N = f / D, with focal length f and entrance pupil diameter D.
The focal length value is not required when comparing two lenses of the same focal length; a value of 1 can be used instead, and the other factors can be dropped as well, leaving the area proportional to the reciprocal of the square of the f-number N.
If two cameras of different format sizes and focal lengths have the same angle of view, and the same aperture area, they gather the same amount of light from the scene. In that case, the relative focal-plane illuminance, however, would depend only on the f-number N, so it is less in the camera with the larger format, longer focal length, and higher f-number. This assumes both lenses have identical transmissivity.
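A quick numeric sketch of that equal-area claim; the two camera/lens combinations are hypothetical, chosen only so that they share an angle of view:

```python
import math

# Hedged sketch of the aperture-area relation: area = pi * (f / (2N))^2.
# Two hypothetical cameras with the same angle of view and the same
# entrance-pupil diameter gather the same light despite different f-numbers.

def entrance_pupil_area(focal_mm: float, n: float) -> float:
    d = focal_mm / n                # entrance pupil diameter, mm
    return math.pi * (d / 2) ** 2   # area in mm^2

full_frame = entrance_pupil_area(50.0, 4.0)   # 50 mm at f/4
smaller_fmt = entrance_pupil_area(25.0, 2.0)  # 25 mm at f/2, same angle of view
print(full_frame, smaller_fmt)                # equal areas -> equal light gathered
```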
Aperture control
Though as early as 1933 Torkel Korling had invented and patented for the Graflex large format reflex camera an automatic aperture control, not all early 35mm single lens reflex cameras had the feature. With a small aperture, this darkened the viewfinder, making viewing, focusing, and composition difficult. Korling's design enabled full-aperture viewing for accurate focus, closing to the pre-selected aperture opening when the shutter was fired and simultaneously synchronising the firing of a flash unit. From 1956 SLR camera manufacturers separately developed automatic aperture control (the Miranda T 'Pressure Automatic Diaphragm', and other solutions on the Exakta Varex IIa and Praktica FX2) allowing viewing at the lens's maximum aperture, stopping the lens down to the working aperture at the moment of exposure, and returning the lens to maximum aperture afterward. The first SLR cameras with internal ("through-the-lens" or "TTL") meters (e.g., the Pentax Spotmatic) required that the lens be stopped down to the working aperture when taking a meter reading. Subsequent models soon incorporated mechanical coupling between the lens and the camera body, indicating the working aperture to the camera for exposure while allowing the lens to be at its maximum aperture for composition and focusing; this feature became known as open-aperture metering.
For some lenses, including a few long telephotos, lenses mounted on bellows, and perspective-control and tilt/shift lenses, the mechanical linkage was impractical, and automatic aperture control was not provided. Many such lenses incorporated a feature known as a "preset" aperture, which allows the lens to be set to working aperture and then quickly switched between working aperture and full aperture without looking at the aperture control. A typical operation might be to establish rough composition, set the working aperture for metering, return to full aperture for a final check of focus and composition, and focusing, and finally, return to working aperture just before exposure. Although slightly easier than stopped-down metering, operation is less convenient than automatic operation. Preset aperture controls have taken several forms; the most common has been the use of essentially two lens aperture rings, with one ring setting the aperture and the other serving as a limit stop when switching to working aperture. Examples of lenses with this type of preset aperture control are the Nikon PC Nikkor 28 mm and the SMC Pentax Shift 6×7 75 mm . The Nikon PC Micro-Nikkor 85 mm lens incorporates a mechanical pushbutton that sets working aperture when pressed and restores full aperture when pressed a second time.
Canon EF lenses, introduced in 1987, have electromagnetic diaphragms, eliminating the need for a mechanical linkage between the camera and the lens, and allowing automatic aperture control with the Canon TS-E tilt/shift lenses. Nikon PC-E perspective-control lenses, introduced in 2008, also have electromagnetic diaphragms, a feature extended to their E-type range in 2013.
Optimal aperture
Optimal aperture depends both on optics (the depth of the scene versus diffraction), and on the performance of the lens.
Optically, as a lens is stopped down, the defocus blur at the Depth of Field (DOF) limits decreases but diffraction blur increases. The presence of these two opposing factors implies a point at which the combined blur spot is minimized (Gibson 1975, 64); at that point, the f-number is optimal for image sharpness, for this given depth of field – a wider aperture (lower f-number) causes more defocus, while a narrower aperture (higher f-number) causes more diffraction.
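A hedged sketch of that tradeoff; the 2.44·λ·N Airy-spot estimate, the quadrature combination of the two blurs, and the focus-spread value are all modeling assumptions (the closed form recovers a rule of thumb similar to Hansma's):

```python
import math

# Hedged sketch: defocus blur at the DOF limits shrinks as the lens is
# stopped down (~ focus spread / (2N)), while diffraction blur grows
# (~ 2.44 * wavelength * N); the combined spot has a minimum at some N.

WAVELENGTH_MM = 0.00055  # ~green light

def combined_blur(n: float, focus_spread_mm: float) -> float:
    defocus = focus_spread_mm / (2 * n)
    diffraction = 2.44 * WAVELENGTH_MM * n
    return math.hypot(defocus, diffraction)  # quadrature combination (assumption)

def optimal_f_number(focus_spread_mm: float) -> float:
    # Setting d(blur^2)/dN = 0 gives N = sqrt(spread / (2 * 2.44 * lambda)),
    # roughly sqrt(375 * spread_in_mm) -- similar to Hansma's rule of thumb.
    return math.sqrt(focus_spread_mm / (2 * 2.44 * WAVELENGTH_MM))

print(optimal_f_number(0.5))           # ~13.6 for a 0.5 mm focus spread
for n in (8.0, 13.6, 22.0):
    print(n, combined_blur(n, 0.5))    # minimum combined blur near N ~ 13.6
```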
As a matter of performance, lenses often do not perform optimally when fully opened, and thus generally have better sharpness when stopped down some – this is sharpness in the plane of critical focus, setting aside issues of depth of field. Beyond a certain point, there is no further sharpness benefit to stopping down, and diffraction at the edges of the aperture begins to become significant for imaging quality. There is accordingly a sweet spot, generally in the f/4–f/8 range, depending on the lens, where sharpness is optimal, though some lenses are designed to perform optimally when wide open. How significant this is varies between lenses, and opinions differ on how much practical impact it has.
While optimal aperture can be determined mechanically, how much sharpness is required depends on how the image will be used – if the final image is viewed under normal conditions (e.g., an 8″×10″ image viewed at 10″), it may suffice to determine the f-number using criteria for minimum required sharpness, and there may be no practical benefit from further reducing the size of the blur spot. But this may not be true if the final image is viewed under more demanding conditions, e.g., a very large final image viewed at normal distance, or a portion of an image enlarged to normal size (Hansma 1996). Hansma also suggests that the final-image size may not be known when a photograph is taken, and obtaining the maximum practicable sharpness allows the decision to make a large final image to be made at a later time; see also critical sharpness.
In biology
In many living optical systems, the eye consists of an iris which adjusts the size of the pupil, through which light enters. The iris is analogous to the diaphragm, and the pupil (which is the adjustable opening in the iris) to the aperture. Refraction in the cornea causes the effective aperture (the entrance pupil in optics parlance) to differ slightly from the physical pupil diameter. The entrance pupil is typically about 4 mm in diameter, although it can range from as narrow as 2 mm (f/8.3) in a brightly lit place to 8 mm (f/2.1) in the dark as part of adaptation. In rare cases, some individuals are able to dilate their pupils even beyond 8 mm in scotopic lighting, close to the physical limit of the iris. In humans, the average iris diameter is about 11.5 mm, which naturally influences the maximal size of the pupil as well; larger iris diameters typically allow pupils to dilate to a wider extreme than smaller irises do. Maximum dilated pupil size also decreases with age.
The iris controls the size of the pupil via two complementary sets of muscles, the sphincter and dilator muscles, which are innervated by the parasympathetic and sympathetic nervous systems respectively, and act to induce pupillary constriction and dilation respectively. The state of the pupil is closely influenced by various factors, primarily light (or its absence), but also by emotional state, interest in the subject of attention, arousal, sexual stimulation, physical activity, accommodation state, and cognitive load. The field of view is not affected by the size of the pupil.
Some individuals are also able to directly exert manual and conscious control over their iris muscles and hence are able to voluntarily constrict and dilate their pupils on command. However, this ability is rare and potential use or advantages are unclear.
Equivalent aperture range
In digital photography, the 35mm-equivalent aperture range is sometimes considered to be more important than the actual f-number. Equivalent aperture is the f-number adjusted to correspond to the f-number of the same absolute aperture diameter on a lens with a 35mm-equivalent focal length. Smaller equivalent f-numbers are expected to lead to higher image quality based on more total light from the subject, as well as lead to reduced depth of field. For example, a Sony Cyber-shot DSC-RX10 uses a 1" sensor and a 24–200 mm lens with a maximum aperture of f/2.8 constant along the zoom range; it has an equivalent aperture range of f/7.6, which is a lower equivalent f-number than some other cameras with smaller sensors.
However, modern optical research concludes that sensor size does not actually play a part in the depth of field in an image. An aperture's f-number is not modified by the camera's sensor size because it is a ratio that pertains only to the attributes of the lens. Instead, the higher crop factor that comes with a smaller sensor size means that, in order to get an equal framing of the subject, the photo must be taken from further away, which results in a less blurry background, changing the perceived depth of field. Similarly, a smaller sensor with an equivalent aperture will result in a darker image because of the pixel density of smaller sensors with equivalent megapixels. Every photosite on a camera's sensor requires a certain amount of surface area that is not sensitive to light, a factor that results in differences in pixel pitch and changes in the signal-to-noise ratio. However, neither the changed depth of field nor the perceived change in light sensitivity is a result of the aperture. Instead, equivalent aperture can be seen as a rule of thumb to judge how changes in sensor size might affect an image, even if qualities like pixel density and distance from the subject are the actual causes of changes in the image.
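A rule-of-thumb sketch of the equivalence: multiply the f-number by the sensor's crop factor. The ~2.7 crop factor assumed for a 1" sensor is itself an approximation:

```python
# Hedged sketch: 35mm-equivalent aperture as f-number times crop factor.

def equivalent_aperture(f_number: float, crop_factor: float) -> float:
    return f_number * crop_factor

# A 1"-sensor camera (crop factor ~2.7) at f/2.8:
print(equivalent_aperture(2.8, 2.7))  # ~7.6, i.e. an f/7.6 equivalent
```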
In scanning or sampling
The terms scanning aperture and sampling aperture are often used to refer to the opening through which an image is sampled, or scanned, for example in a Drum scanner, an image sensor, or a television pickup apparatus. The sampling aperture can be a literal optical aperture, that is, a small opening in space, or it can be a time-domain aperture for sampling a signal waveform.
For example, film grain is quantified as graininess via a measurement of film density fluctuations as seen through a 0.048 mm sampling aperture.
In popular culture
Aperture Science, a fictional company in the Portal universe, is named after the optical system. The company's logo heavily features an aperture, which has come to symbolize the series, the fictional company, and the Aperture Science Laboratories Computer-Aided Enrichment Center in which the game series takes place.
See also
Numerical aperture
Antenna aperture
Angular resolution
Diaphragm (optics)
Waterhouse stop
Bokeh
Shallow focus
Deep focus
Entrance pupil
Exit pupil
Lyot stop
References
Gibson, H. Lou. 1975. Close-Up Photography and Photomacrography. 2nd combined ed. Kodak Publication No. N-16. Rochester, NY: Eastman Kodak Company, Vol II: Photomacrography.
Hansma, Paul K. 1996. View Camera Focusing in Practice. Photo Techniques, March/April 1996, 54–57. Available as GIF images on the Large Format page.
External links
Stops and Apertures
Science of photography
Geometrical optics
Physical optics
Observational astronomy | Aperture | [
"Astronomy"
] | 5,040 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
47,476 | https://en.wikipedia.org/wiki/Altimeter | An altimeter or an altitude meter is an instrument used to measure the altitude of an object above a fixed level. The measurement of altitude is called altimetry, which is related to the term bathymetry, the measurement of depth under water.
Types
Pressure altimeter
Sonic altimeter
In 1931, the US Army Air Corps and General Electric tested a sonic altimeter for aircraft, which was considered more reliable and accurate than one that relied on air pressure when heavy fog or rain was present. The new altimeter used a series of high-pitched sounds like those made by a bat to measure the distance from the aircraft to the surface; the returning echo was converted to feet and shown on a gauge inside the aircraft cockpit.
Radar altimeter
A radar altimeter measures altitude more directly, using the time taken for a radio signal to reflect from the surface back to the aircraft. Alternatively, frequency-modulated continuous-wave (FMCW) radar can be used: the greater the frequency shift, the further the distance travelled. This method can achieve much better accuracy than pulsed radar for the same outlay, and radar altimeters that use frequency modulation are the industry standard. The radar altimeter is used to measure height above ground level during landing in commercial and military aircraft. Radar altimeters are also a component of terrain avoidance warning systems, warning the pilot if the aircraft is flying too low, or if there is rising terrain ahead. Radar altimeter technology is also used in terrain-following radar, allowing combat aircraft to fly at very low height above the terrain.
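A minimal sketch of the pulsed round-trip-time principle (the timing value is illustrative; real altimeters add calibration and antenna-offset corrections):

```python
# Hedged sketch: altitude is half the round-trip distance of the radio signal.

C = 299_792_458.0  # speed of light, m/s

def altitude_from_round_trip(t_seconds: float) -> float:
    """Height above ground from the echo's round-trip time."""
    return C * t_seconds / 2

# An echo returning after 2 microseconds implies roughly 300 m of altitude:
print(altitude_from_round_trip(2e-6))  # ~299.8 m
```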
After extensive research and experimentation, it has been shown that "phase radio-altimeters" are most suitable for ground effect vehicles, as compared to laser, isotropic or ultrasonic altimeters.
Laser altimeter
Lidar technology is used to help navigate the helicopter Ingenuity on its record-setting flights over the terrain of Mars by means of a downward-facing Lidar altimeter.
Global Positioning System
Global Positioning System (GPS) receivers can also determine altitude by trilateration with four or more satellites. In aircraft, altitude determined using autonomous GPS is not reliable enough to supersede the pressure altimeter without using some method of augmentation. In hiking and climbing, it is common to find that the altitude measured by GPS is off by as much as 400 feet (120 m) depending on satellite orientation.
See also
Acronyms and abbreviations in avionics
ICAO recommendations on use of the International System of Units
Flight instruments
Flight level
Hypsometer
Jason-1 and Ocean Surface Topography Mission (Jason-2) are satellite missions that use altimeters to measure sea surface height
Level sensor
Lidar
Pressure sensor
Primary flight display
Radar altimeter
Satellite altimetry
Turkish Airlines Flight 1951, an accident attributed to a malfunctioning radio altimeter
United Airlines Flight 389, an accident attributed to misreading of an altimeter
Variometer, a gauge measuring the rate of change of altitude
References
External links | Altimeter | [
"Technology",
"Engineering"
] | 586 | [
"Aircraft instruments",
"Measuring instruments",
"Altimeters"
] |
47,481 | https://en.wikipedia.org/wiki/Aquifer | An aquifer is an underground layer of water-bearing material, consisting of permeable or fractured rock, or of unconsolidated materials (gravel, sand, or silt). Aquifers vary greatly in their characteristics. The study of water flow in aquifers and the characterization of aquifers is called hydrogeology. Related terms include aquitard, which is a bed of low permeability along an aquifer, and aquiclude (or aquifuge), which is a solid, impermeable area underlying or overlying an aquifer, the pressure of which could lead to the formation of a confined aquifer. The classification of aquifers is as follows: Saturated versus unsaturated; aquifers versus aquitards; confined versus unconfined; isotropic versus anisotropic; porous, karst, or fractured; transboundary aquifer.
Groundwater from aquifers can be sustainably harvested by humans through the use of qanats leading to a well. This groundwater is a major source of fresh water for many regions; however, it can present a number of challenges, such as overdrafting (extracting groundwater beyond the equilibrium yield of the aquifer), groundwater-related subsidence of land, and the salinization or pollution of the groundwater.
Properties
Depth
Aquifers occur from near-surface to deeper than 9,000 m (30,000 ft). Those closer to the surface are not only more likely to be used for water supply and irrigation, but are also more likely to be replenished by local rainfall. Although aquifers are sometimes characterized as "underground rivers or lakes," they are actually porous rock saturated with water.
Many desert areas have limestone hills or mountains within them or close to them that can be exploited as groundwater resources. Part of the Atlas Mountains in North Africa, the Lebanon and Anti-Lebanon ranges between Syria and Lebanon, the Jebel Akhdar in Oman, parts of the Sierra Nevada and neighboring ranges in the United States' Southwest, have shallow aquifers that are exploited for their water. Overexploitation can lead to the exceeding of the practical sustained yield; i.e., more water is taken out than can be replenished.
Along the coastlines of certain countries, such as Libya and Israel, increased water usage associated with population growth has caused a lowering of the water table and the subsequent contamination of the groundwater with saltwater from the sea.
In 2013 large freshwater aquifers were discovered under continental shelves off Australia, China, North America and South Africa. They contain an estimated half a million cubic kilometers of "low salinity" water that could be economically processed into potable water. The reserves formed when ocean levels were lower and rainwater made its way into the ground in land areas that were not submerged until the ice age ended 20,000 years ago. The volume is estimated to be 100 times the amount of water extracted from other aquifers since 1900.
Groundwater recharge
Classification
Saturated versus unsaturated
Groundwater can be found at nearly every point in the Earth's shallow subsurface to some degree, although aquifers do not necessarily contain fresh water. The Earth's crust can be divided into two regions: the saturated zone or phreatic zone (e.g., aquifers, aquitards, etc.), where all available spaces are filled with water, and the unsaturated zone (also called the vadose zone), where there are still pockets of air that contain some water, but can be filled with more water.
Saturated means the pressure head of the water is greater than atmospheric pressure (it has a gauge pressure > 0). The definition of the water table is the surface where the pressure head is equal to atmospheric pressure (where gauge pressure = 0).
Unsaturated conditions occur above the water table where the pressure head is negative (absolute pressure can never be negative, but gauge pressure can) and the water that incompletely fills the pores of the aquifer material is under suction. The water content in the unsaturated zone is held in place by surface adhesive forces and it rises above the water table (the zero-gauge-pressure isobar) by capillary action to saturate a small zone above the phreatic surface (the capillary fringe) at less than atmospheric pressure. This is termed tension saturation and is not the same as saturation on a water-content basis. Water content in a capillary fringe decreases with increasing distance from the phreatic surface. The capillary head depends on soil pore size. In sandy soils with larger pores, the head will be less than in clay soils with very small pores. The normal capillary rise in a clayey soil is less than 1.8 m (6 ft) but can range between 0.3 and 10 m (1 and 33 ft).
The capillary rise of water in a small-diameter tube involves the same physical process. The water table is the level to which water will rise in a large-diameter pipe (e.g., a well) that goes down into the aquifer and is open to the atmosphere.
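A hedged sketch of that capillary physics (Jurin's law), with illustrative pore radii standing in for "sandy" versus "clayey" materials; the radii are assumptions, not measured soil values:

```python
import math

# Hedged sketch of capillary rise: h = 2*gamma*cos(theta) / (rho * g * r).

GAMMA = 0.0728   # surface tension of water, N/m (~20 C)
RHO = 1000.0     # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def capillary_rise(pore_radius_m: float, contact_angle_rad: float = 0.0) -> float:
    return 2 * GAMMA * math.cos(contact_angle_rad) / (RHO * G * pore_radius_m)

print(capillary_rise(1e-4))  # ~0.15 m for a coarse, sand-like pore
print(capillary_rise(1e-6))  # ~15 m for a fine, clay-like pore
```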
Aquifers versus aquitards
Aquifers are typically saturated regions of the subsurface that produce an economically feasible quantity of water to a well or spring (e.g., sand and gravel or fractured bedrock often make good aquifer materials).
An aquitard is a zone within the Earth that restricts the flow of groundwater from one aquifer to another. A completely impermeable aquitard is called an aquiclude or aquifuge. Aquitards contain layers of either clay or non-porous rock with low hydraulic conductivity.
In mountainous areas (or near rivers in mountainous areas), the main aquifers are typically unconsolidated alluvium, composed of mostly horizontal layers of materials deposited by water processes (rivers and streams), which in cross-section (looking at a two-dimensional slice of the aquifer) appear to be layers of alternating coarse and fine materials. Coarse materials, because of the high energy needed to move them, tend to be found nearer the source (mountain fronts or rivers), whereas the fine-grained material will make it farther from the source (to the flatter parts of the basin or overbank areas—sometimes called the pressure area). Since there are less fine-grained deposits near the source, this is a place where aquifers are often unconfined (sometimes called the forebay area), or in hydraulic communication with the land surface.
Confined versus unconfined
An unconfined aquifer has no impermeable barrier immediately above it, such that the water level can rise in response to recharge. A confined aquifer has an overlying impermeable barrier that prevents the water level in the aquifer from rising any higher. An aquifer in the same geologic unit may be confined in one area and unconfined in another. Unconfined aquifers are sometimes also called water table or phreatic aquifers, because their upper boundary is the water table or phreatic surface (see Biscayne Aquifer). Typically (but not always) the shallowest aquifer at a given location is unconfined, meaning it does not have a confining layer (an aquitard or aquiclude) between it and the surface. The term "perched" refers to ground water accumulating above a low-permeability unit or strata, such as a clay layer. This term is generally used to refer to a small local area of ground water that occurs at an elevation higher than a regionally extensive aquifer. The difference between perched and unconfined aquifers is their size (perched is smaller). Confined aquifers are aquifers that are overlain by a confining layer, often made up of clay. The confining layer might offer some protection from surface contamination.
If the distinction between confined and unconfined is not clear geologically (i.e., if it is not known if a clear confining layer exists, or if the geology is more complex, e.g., a fractured bedrock aquifer), the value of storativity returned from an aquifer test can be used to determine it (although aquifer tests in unconfined aquifers should be interpreted differently than confined ones). Confined aquifers have very low storativity values (much less than 0.01, and as little as 10⁻⁵), which means that the aquifer is storing water using the mechanisms of aquifer matrix expansion and the compressibility of water, which typically are both quite small quantities. Unconfined aquifers have storativities (typically called specific yield) greater than 0.01 (1% of bulk volume); they release water from storage by the mechanism of actually draining the pores of the aquifer, releasing relatively large amounts of water (up to the drainable porosity of the aquifer material, or the minimum volumetric water content).
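A back-of-the-envelope sketch of what those storativity values imply for the volume of water released per unit head decline; the area and drawdown are illustrative assumptions:

```python
# Hedged sketch: volume released = storativity * area * head decline.

def water_released(storativity: float, area_m2: float, head_drop_m: float) -> float:
    return storativity * area_m2 * head_drop_m

area = 1.0e6       # 1 km^2 of aquifer
drawdown = 1.0     # 1 m decline in head

print(water_released(1e-4, area, drawdown))  # confined: ~100 m^3
print(water_released(0.2, area, drawdown))   # unconfined: ~200,000 m^3
```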
Isotropic versus anisotropic
In isotropic aquifers or aquifer layers the hydraulic conductivity (K) is equal for flow in all directions, while in anisotropic conditions it differs, notably in horizontal (Kh) and vertical (Kv) sense.
Semi-confined aquifers with one or more aquitards work as an anisotropic system, even when the separate layers are isotropic, because the compound Kh and Kv values are different (see hydraulic transmissivity and hydraulic resistance).
When calculating flow to drains or flow to wells in an aquifer, the anisotropy is to be taken into account lest the resulting design of the drainage system may be faulty.
Porous, karst, or fractured
To properly manage an aquifer its properties must be understood. Many properties must be known to predict how an aquifer will respond to rainfall, drought, pumping, and contamination. Considerations include where and how much water enters the groundwater from rainfall and snowmelt, how fast and in what direction the groundwater travels, and how much water leaves the ground as springs. Computer models can be used to test how accurately the understanding of the aquifer properties matches the actual aquifer performance. Environmental regulations require sites with potential sources of contamination to demonstrate that the hydrology has been characterized.
Porous
Porous aquifers typically occur in sand and sandstone. Porous aquifer properties depend on the depositional sedimentary environment and later natural cementation of the sand grains. The environment where a sand body was deposited controls the orientation of the sand grains, the horizontal and vertical variations, and the distribution of shale layers. Even thin shale layers are important barriers to groundwater flow. All these factors affect the porosity and permeability of sandy aquifers.
Sandy deposits formed in shallow marine environments and in windblown sand dune environments have moderate to high permeability while sandy deposits formed in river environments have low to moderate permeability. Rainfall and snowmelt enter the groundwater where the aquifer is near the surface. Groundwater flow directions can be determined from potentiometric surface maps of water levels in wells and springs. Aquifer tests and well tests can be used with Darcy's law flow equations to determine the ability of a porous aquifer to convey water.
Analyzing this type of information over an area gives an indication how much water can be pumped without overdrafting and how contamination will travel. In porous aquifers groundwater flows as slow seepage in pores between sand grains. A groundwater flow rate of 1 foot per day (0.3 m/d) is considered to be a high rate for porous aquifers, as illustrated by the water slowly seeping from sandstone in the accompanying image to the left.
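A minimal sketch of the Darcy's-law flow estimate mentioned above; the conductivity, gradient, and porosity are illustrative assumptions for a sand-like aquifer:

```python
# Hedged sketch of Darcy's law: specific discharge q = -K * (dh/dl),
# and average pore (seepage) velocity = q / porosity.

def darcy_flux(k_m_per_day: float, head_gradient: float) -> float:
    """Specific discharge (m/day); a negative gradient drives positive flow."""
    return -k_m_per_day * head_gradient

K = 10.0            # m/day, a sand-like hydraulic conductivity
gradient = -0.001   # head drops 1 m per km in the flow direction
porosity = 0.3

q = darcy_flux(K, gradient)
print(q)             # 0.01 m/day specific discharge
print(q / porosity)  # ~0.033 m/day average pore velocity
```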
Porosity is important, but, alone, it does not determine a rock's ability to act as an aquifer. Areas of the Deccan Traps (a basaltic lava) in west central India are good examples of rock formations with high porosity but low permeability, which makes them poor aquifers. Similarly, the micro-porous (Upper Cretaceous) Chalk Group of south east England, although having a reasonably high porosity, has a low grain-to-grain permeability, with its good water-yielding characteristics mostly due to micro-fracturing and fissuring.
Karst
Karst aquifers typically develop in limestone. Surface water containing natural carbonic acid moves down into small fissures in limestone. This carbonic acid gradually dissolves limestone thereby enlarging the fissures. The enlarged fissures allow a larger quantity of water to enter which leads to a progressive enlargement of openings. Abundant small openings store a large quantity of water. The larger openings form a conduit system that drains the aquifer to springs.
Characterization of karst aquifers requires field exploration to locate sinkholes, swallets, sinking streams, and springs in addition to studying geologic maps. Conventional hydrogeologic methods such as aquifer tests and potentiometric mapping are insufficient to characterize the complexity of karst aquifers. These conventional investigation methods need to be supplemented with dye traces, measurement of spring discharges, and analysis of water chemistry. U.S. Geological Survey dye tracing has determined that conventional groundwater models that assume a uniform distribution of porosity are not applicable for karst aquifers.
Linear alignment of surface features such as straight stream segments and sinkholes develop along fracture traces. Locating a well in a fracture trace or intersection of fracture traces increases the likelihood to encounter good water production. Voids in karst aquifers can be large enough to cause destructive collapse or subsidence of the ground surface that can initiate a catastrophic release of contaminants. Groundwater flow rate in karst aquifers is much more rapid than in porous aquifers as shown in the accompanying image to the left. For example, in the Barton Springs Edwards aquifer, dye traces measured the karst groundwater flow rates from 0.5 to 7 miles per day (0.8 to 11.3 km/d). The rapid groundwater flow rates make karst aquifers much more sensitive to groundwater contamination than porous aquifers.
In the extreme case, groundwater may exist in underground rivers (e.g., caves underlying karst topography).
Fractured
If a rock unit of low porosity is highly fractured, it can also make a good aquifer (via fissure flow), provided the rock has a hydraulic conductivity sufficient to facilitate movement of water.
Human use of groundwater
Challenges for using groundwater include: overdrafting (extracting groundwater beyond the equilibrium yield of the aquifer), groundwater-related subsidence of land, groundwater becoming saline, groundwater pollution.
By country or continent
Africa
Aquifer depletion is a problem in some areas, especially in northern Africa, where one example is the Great Manmade River project of Libya. However, new methods of groundwater management such as artificial recharge and injection of surface waters during seasonal wet periods has extended the life of many freshwater aquifers, especially in the United States.
Australia
The Great Artesian Basin situated in Australia is arguably the largest groundwater aquifer in the world (over 1,700,000 km²). It plays a large part in water supplies for Queensland, and some remote parts of South Australia.
Canada
Discontinuous sand bodies at the base of the McMurray Formation in the Athabasca Oil Sands region of northeastern Alberta, Canada, are commonly referred to as the Basal Water Sand (BWS) aquifers. Saturated with water, they are confined beneath impermeable bitumen-saturated sands that are exploited to recover bitumen for synthetic crude oil production. Where they are deep-lying and recharge occurs from underlying Devonian formations they are saline, and where they are shallow and recharged by surface water they are non-saline. The BWS typically pose problems for the recovery of bitumen, whether by open-pit mining or by in situ methods such as steam-assisted gravity drainage (SAGD), and in some areas they are targets for waste-water injection.
South America
The Guarani Aquifer, located beneath the surface of Argentina, Brazil, Paraguay, and Uruguay, is one of the world's largest aquifer systems and is an important source of fresh water. Named after the Guarani people, it covers about 1,200,000 km², with a volume of about 40,000 km³, a thickness of between 50 and 800 m, and a maximum depth of about 1,800 m.
United States
The Ogallala Aquifer of the central United States is one of the world's great aquifers, but in places it is being rapidly depleted by growing municipal use, and continuing agricultural use. This huge aquifer, which underlies portions of eight states, contains primarily fossil water from the time of the last glaciation. Annual recharge, in the more arid parts of the aquifer, is estimated to total only about 10 percent of annual withdrawals. According to a 2013 report by the United States Geological Survey (USGS), the depletion between 2001 and 2008, inclusive, is about 32 percent of the cumulative depletion during the entire 20th century.
In the United States, the biggest users of water from aquifers include agricultural irrigation and oil and coal extraction. "Cumulative total groundwater depletion in the United States accelerated in the late 1940s and continued at an almost steady linear rate through the end of the century. In addition to widely recognized environmental consequences, groundwater depletion also adversely impacts the long-term sustainability of groundwater supplies to help meet the Nation’s water needs."
An example of a significant and sustainable carbonate aquifer is the Edwards Aquifer in central Texas. This carbonate aquifer has historically been providing high quality water for nearly 2 million people, and even today, is full because of tremendous recharge from a number of area streams, rivers and lakes. The primary risk to this resource is human development over the recharge areas.
See also
References
External links
IGRAC International Groundwater Resources Assessment Centre
The Groundwater Project - Online platform for groundwater knowledge
Hydraulic engineering
Hydrology
Hydrogeology
Water and the environment
Bodies of water
Water supply | Aquifer | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,901 | [
"Hydrology",
"Water supply",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Hydraulic engineering",
"Hydrogeology"
] |
47,484 | https://en.wikipedia.org/wiki/Atmospheric%20pressure | Atmospheric pressure, also known as air pressure or barometric pressure (after the barometer), is the pressure within the atmosphere of Earth. The standard atmosphere (symbol: atm) is a unit of pressure defined as , which is equivalent to 1,013.25 millibars, 760mm Hg, 29.9212inchesHg, or 14.696psi. The atm unit is roughly equivalent to the mean sea-level atmospheric pressure on Earth; that is, the Earth's atmospheric pressure at sea level is approximately 1 atm.
In most circumstances, atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point. As elevation increases, there is less overlying atmospheric mass, so atmospheric pressure decreases with increasing elevation. Because the atmosphere is thin relative to the Earth's radius—especially the dense atmospheric layer at low altitudes—the Earth's gravitational acceleration as a function of altitude can be approximated as constant and contributes little to this fall-off. Pressure measures force per unit area, with SI units of pascals (1 pascal = 1 newton per square metre, 1 N/m²). On average, a column of air with a cross-sectional area of 1 square centimetre (cm²), measured from the mean (average) sea level to the top of Earth's atmosphere, has a mass of about 1.03 kilograms and exerts a force or "weight" of about 10.1 newtons, resulting in a pressure of 10.1 N/cm² or 101 kN/m² (101 kilopascals, kPa). A column of air with a cross-sectional area of 1 in² would have a weight of about 14.7 lbf, resulting in a pressure of 14.7 lbf/in².
Mechanism
Atmospheric pressure is caused by the gravitational attraction of the planet on the atmospheric gases above the surface and is a function of the mass of the planet, the radius of the surface, and the amount and composition of the gases and their vertical distribution in the atmosphere. It is modified by the planetary rotation and local effects such as wind velocity, density variations due to temperature and variations in composition.
Mean sea-level pressure
The mean sea-level pressure (MSLP) is the atmospheric pressure at mean sea level. This is the atmospheric pressure normally given in weather reports on radio, television, and newspapers or on the Internet.
The altimeter setting in aviation is an atmospheric pressure adjustment.
Average sea-level pressure is . In aviation weather reports (METAR), QNH is transmitted around the world in hectopascals or millibars (1 hectopascal = 1 millibar), except in the United States, Canada, and Japan where it is reported in inches of mercury (to two decimal places). The United States and Canada also report sea-level pressure SLP, which is adjusted to sea level by a different method, in the remarks section, not in the internationally transmitted part of the code, in hectopascals or millibars. However, in Canada's public weather reports, sea level pressure is instead reported in kilopascals.
In the US weather code remarks, three digits are all that are transmitted; decimal points and the one or two most significant digits are omitted: 1013.2 hPa is transmitted as 132; 1000.0 hPa is transmitted as 000; 998.7 hPa is transmitted as 987; etc. The highest sea-level pressure on Earth occurs in Siberia, where the Siberian High often attains a sea-level pressure above 1,050 hPa, with record highs close to 1,085 hPa. The lowest measurable sea-level pressure is found at the centres of tropical cyclones and tornadoes, with a record low of 870 hPa (25.69 inHg). A system transmitting the last three digits transmits the same code (800) for 1080.0 hPa as for 980.0 hPa.
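A tiny sketch of that three-digit remarks encoding (a plain reading of the rule above, not an implementation of the full METAR format):

```python
# Hedged sketch: keep only the tens, units, and tenths digits in hPa.

def encode_slp(hpa: float) -> str:
    return f"{round(hpa * 10) % 1000:03d}"

print(encode_slp(1013.2))  # '132'
print(encode_slp(998.7))   # '987'
print(encode_slp(1080.0), encode_slp(980.0))  # '800' '800' -- the ambiguity noted above
```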
Surface pressure
Surface pressure is the atmospheric pressure at a location on Earth's surface (terrain and oceans). It is directly proportional to the mass of air over that location.
For numerical reasons, atmospheric models such as general circulation models (GCMs) usually predict the nondimensional logarithm of surface pressure.
The average value of surface pressure on Earth is 985 hPa. This is in contrast to mean sea-level pressure, which involves the extrapolation of pressure to sea level for locations above or below sea level. The average pressure at mean sea level (MSL) in the International Standard Atmosphere (ISA) is 1,013.25 hPa, or 1 atmosphere (atm), or 29.92 inches of mercury.
Pressure (P), mass (m), and acceleration due to gravity (g) are related by P = F/A = (m*g)/A, where A is the surface area. Atmospheric pressure is thus proportional to the weight per unit area of the atmospheric mass above that location.
Altitude variation
Pressure on Earth varies with the altitude of the surface, so air pressure on mountains is usually lower than air pressure at sea level. Pressure varies smoothly from the Earth's surface to the top of the mesosphere. Although the pressure changes with the weather, NASA has averaged the conditions for all parts of the earth year-round. As altitude increases, atmospheric pressure decreases. One can calculate the atmospheric pressure at a given altitude. Temperature and humidity also affect the atmospheric pressure. Pressure is proportional to temperature and inversely related to humidity, and both of these are necessary to compute an accurate figure. The standard values below assume a temperature of 15 °C and a relative humidity of 0%.
At low altitudes above sea level, the pressure decreases by about 1.2 kPa (12 hPa) for every 100 metres. For higher altitudes within the troposphere, the following equation (the barometric formula) relates atmospheric pressure p to altitude h:

p = p0 · (1 − L·h/T0)^(g·M/(R·L))

The values in this equation are: p0 = 101,325 Pa (sea-level standard atmospheric pressure), L = 0.0065 K/m (temperature lapse rate), T0 = 288.15 K (sea-level standard temperature), g = 9.80665 m/s² (Earth-surface gravitational acceleration), M = 0.0289644 kg/mol (molar mass of dry air), and R = 8.31447 J/(mol·K) (universal gas constant).
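A sketch of the barometric formula with the standard-atmosphere constants listed above (15 °C at sea level, 0% humidity, as assumed in this section); the test altitudes are illustrative:

```python
# Hedged sketch of the tropospheric barometric formula.

P0 = 101_325.0     # sea-level standard pressure, Pa
L = 0.0065         # temperature lapse rate, K/m
T0 = 288.15        # sea-level standard temperature, K
G = 9.80665        # gravitational acceleration, m/s^2
M = 0.0289644      # molar mass of dry air, kg/mol
R = 8.31447        # universal gas constant, J/(mol K)

def pressure_at_altitude(h_m: float) -> float:
    """Pressure in Pa at altitude h (metres), valid within the troposphere."""
    return P0 * (1 - L * h_m / T0) ** (G * M / (R * L))

print(pressure_at_altitude(0))     # 101325 Pa
print(pressure_at_altitude(100))   # ~100,129 Pa (~1.2 kPa lower)
print(pressure_at_altitude(8848))  # ~31,400 Pa near Everest's summit
```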
Local variation
Atmospheric pressure varies widely on Earth, and these changes are important in studying weather and climate. Atmospheric pressure shows a diurnal or semidiurnal (twice-daily) cycle caused by global atmospheric tides. This effect is strongest in tropical zones, with an amplitude of a few hectopascals, and almost zero in polar areas. These variations have two superimposed cycles, a circadian (24 h) cycle, and a semi-circadian (12 h) cycle.
Records
The highest adjusted-to-sea-level barometric pressure ever recorded on Earth (above 750 meters) was measured in Tosontsengel, Mongolia on 19 December 2001. The highest adjusted-to-sea-level barometric pressure ever recorded (below 750 meters) was 1,083.8 hPa (32.005 inHg), at Agata in Evenk Autonomous Okrug, Russia (66°53'N, 93°28'E, elevation: 261 m) on 31 December 1968. The distinction is due to the problematic assumptions (assuming a standard lapse rate) associated with reduction to sea level from high elevations.
The Dead Sea, the lowest place on Earth at about 430 m (1,410 ft) below sea level, has a correspondingly high typical atmospheric pressure of 1,065 hPa. A below-sea-level surface pressure record of 1,081.8 hPa was set on 21 February 1961.
The lowest non-tornadic atmospheric pressure ever measured was 870 hPa (0.858 atm; 25.69 inHg), set on 12 October 1979, during Typhoon Tip in the western Pacific Ocean. The measurement was based on an instrumental observation made from a reconnaissance aircraft.
Measurement based on the depth of water
One atmosphere (101.325 kPa) is also the pressure caused by the weight of a column of freshwater of approximately 10.3 m (33.8 ft). Thus, a diver 10.3 m under water experiences a pressure of about 2 atmospheres (1 atm of air plus 1 atm of water). Conversely, 10.3 m is the maximum height to which water can be raised using suction under standard atmospheric conditions.
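A sketch of the hydrostatic relation behind that "10.3 m of fresh water per atmosphere" figure; a nominal freshwater density is assumed:

```python
# Hedged sketch: water-column pressure P = rho * g * h.

RHO_FRESH = 1000.0   # kg/m^3, nominal freshwater density
G = 9.80665          # m/s^2
ATM = 101_325.0      # Pa

def water_column_pressure(depth_m: float) -> float:
    return RHO_FRESH * G * depth_m

print(water_column_pressure(10.3) / ATM)  # ~1.0 atmosphere
print(ATM / (RHO_FRESH * G))              # ~10.33 m of water per atmosphere
```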
Low pressures, such as natural gas lines, are sometimes specified in inches of water, typically written as w.c. (water column) gauge or w.g. (inches water) gauge. A typical gas-using residential appliance in the US is rated for a maximum of 1/2 psi, which is approximately 14 w.g. Similar metric units with a wide variety of names and notation based on millimetres, centimetres or metres are now less commonly used.
Boiling point of liquids
Pure water boils at 100 °C (212 °F) at Earth's standard atmospheric pressure. The boiling point is the temperature at which the vapour pressure is equal to the atmospheric pressure around the liquid. Because of this, the boiling point of liquids is lower at lower pressure and higher at higher pressure. Cooking at high elevations, therefore, requires adjustments to recipes or pressure cooking. A rough approximation of elevation can be obtained by measuring the temperature at which water boils; in the mid-19th century, this method was used by explorers. Conversely, if one wishes to evaporate a liquid at a lower temperature, for example in distillation, the atmospheric pressure may be lowered by using a vacuum pump, as in a rotary evaporator.
Measurement and maps
An important application of the knowledge that atmospheric pressure varies directly with altitude was in determining the height of hills and mountains, thanks to reliable pressure measurement devices. In 1774, Maskelyne was confirming Newton's theory of gravitation at and on Schiehallion mountain in Scotland, and he needed to measure elevations on the mountain's sides accurately. William Roy, using barometric pressure, was able to confirm Maskelyne's height determinations; the agreement was within one meter (3.28 feet). This method became and continues to be useful for survey work and map making.
See also
Barotrauma – physical damage to body tissues caused by a difference in pressure between an air space inside or beside the body and the surrounding gas or liquid.
Collapsing can – an aluminium can is crushed by the atmospheric pressure surrounding it
International Standard Atmosphere, a tabulation of typical variations of principal thermodynamic variables of the atmosphere (pressure, density, temperature, etc.) with altitude, at middle latitudes.
NRLMSISE-00, an empirical, global reference atmospheric model of the Earth from ground to space
References
External links
Current map of global mean sea-level pressure
1976 Standard Atmosphere from NASA
Source code and equations for the 1976 Standard Atmosphere
A mathematical model of the 1976 U.S. Standard Atmosphere
Calculator using multiple units and properties for the 1976 Standard Atmosphere
Calculator giving standard air pressure at a specified altitude, or altitude at which a pressure would be standard
Calculate pressure from altitude and vice versa
Experiments
Movies on atmospheric pressure experiments from Georgia State University's HyperPhysics website – requires QuickTime
Test showing a can being crushed after boiling water inside it, then moving it into a tub of ice-cold water. | Atmospheric pressure | [
"Physics"
] | 2,184 | [
"Physical quantities",
"Meteorological quantities",
"Atmospheric pressure"
] |
47,486 | https://en.wikipedia.org/wiki/Atoll | An atoll () is a ring-shaped island, including a coral rim that encircles a lagoon. There may be coral islands or cays on the rim. Atolls are located in warm tropical or subtropical parts of the oceans and seas where corals can develop. Most of the approximately 440 atolls in the world are in the Pacific Ocean.
Two different, well-cited models, the subsidence model and the antecedent karst model, have been used to explain the development of atolls. According to Charles Darwin's subsidence model, the formation of an atoll is explained by the sinking of a volcanic island around which a coral fringing reef has formed. Over geologic time, the volcanic island becomes extinct and eroded as it subsides completely beneath the surface of the ocean. As the volcanic island subsides, the coral fringing reef becomes a barrier reef that is detached from the island. Eventually, the reef and the small coral islets on top of it are all that is left of the original island, and a lagoon has taken the place of the former volcano. The lagoon is not the former volcanic crater. For the atoll to persist, the coral reef must be maintained at the sea surface, with coral growth matching any relative change in sea level (sinking of the island or rising oceans).
An alternative model for the origin of atolls is called the antecedent karst model. In the antecedent karst model, the first step in the formation of an atoll is the development of a flat-topped, mound-like coral reef during the subsidence of an oceanic island of either volcanic or nonvolcanic origin below sea level. Then, when relative sea level drops below the level of the flat surface of the coral reef, it is exposed to the atmosphere as a flat-topped island and is dissolved by rainfall to form limestone karst. Because of the hydrologic properties of this karst, the rate of dissolution of the exposed coral is lowest along its rim, and the rate of dissolution increases inward to its maximum at the center of the island. As a result, a saucer-shaped island with a raised rim forms. When relative sea level submerges the island again, the rim provides a rocky core on which coral grows again to form the islands of an atoll, and the flooded bottom of the saucer forms the lagoon within them.
Usage
The word atoll comes from the Dhivehi word (, ). Dhivehi is an Indo-Aryan language spoken in the Maldives. The word's first recorded English use was in 1625 as atollon. Charles Darwin coined the term in his monograph, The Structure and Distribution of Coral Reefs. He recognized the word's indigenous origin and defined it as a "circular group of coral islets", synonymously with "lagoon-island".
More modern definitions of atoll describe them as "annular reefs enclosing a lagoon in which there are no promontories other than reefs and islets composed of reef detritus" or "in an exclusively morphological sense, [as] a ring-shaped ribbon reef enclosing a lagoon".
Distribution and size
There are approximately 440 atolls in the world. Most of the world's atolls are in the Pacific Ocean (with concentrations in the Caroline Islands, the Coral Sea Islands, the Marshall Islands, the Tuamotu Islands, Kiribati, Tokelau, and Tuvalu) and the Indian Ocean (the Chagos Archipelago, Lakshadweep, the atolls of the Maldives, and the Outer Islands of Seychelles). In addition, Indonesia also has several atolls spread across the archipelago, such as in the Thousand Islands, Taka Bonerate Islands, and atolls in the Raja Ampat Islands. The Atlantic Ocean has no large groups of atolls, other than eight atolls east of Nicaragua that belong to the Colombian department of San Andres and Providencia in the Caribbean.
Reef-building corals will thrive only in warm tropical and subtropical waters of oceans and seas, and therefore atolls are found only in the tropics and subtropics. The northernmost atoll in the world is Kure Atoll at 28°25′ N, along with other atolls of the Northwestern Hawaiian Islands. The southernmost atolls in the world are Elizabeth Reef at 29°57′ S, and nearby Middleton Reef at 29°27′ S, in the Tasman Sea, both of which are part of the Coral Sea Islands Territory. The next southerly atoll is Ducie Island in the Pitcairn Islands Group, at 24°41′ S.
The atoll closest to the Equator is Aranuka of Kiribati. Its southern tip is just north of the Equator.
Bermuda is sometimes claimed as the "northernmost atoll" at a latitude of 32°18′ N. At this latitude, coral reefs would not develop without the warming waters of the Gulf Stream. However, Bermuda is termed a pseudo-atoll because its general form, while resembling that of an atoll, has a very different origin of formation.
In most cases, the land area of an atoll is very small in comparison to the total area. Atoll islands are low-lying, with their elevations less than 5 m (16 ft). Measured by total area, Lifou (1,146 km²) is the largest raised coral atoll of the world, followed by Rennell Island (660 km²). Other sources, however, list Kiritimati as the largest atoll in the world in terms of land area. It is also a raised coral atoll (321.37 km² land area; according to other sources even 575 km²), with a 160 km² main lagoon and 168 km² of other lagoons (according to other sources, 319 km² total lagoon size).
The geological formation known as a reef knoll refers to the elevated remains of an ancient atoll within a limestone region, appearing as a hill. The second largest atoll by dry land area is Aldabra, with 155 km². Huvadhu Atoll, situated in the southern region of the Maldives, holds the distinction of being the largest atoll based on the sheer number of islands it comprises, with a total of 255 individual islands.
List of atolls
Gallery
Formation
In 1842, Charles Darwin explained the creation of coral atolls in the southern Pacific Ocean based upon observations made during a five-year voyage aboard HMS Beagle from 1831 to 1836. Darwin's explanation suggests that several tropical island types: from high volcanic island, through barrier reef island, to atoll, represented a sequence of gradual subsidence of what started as an oceanic volcano. He reasoned that a fringing coral reef surrounding a volcanic island in the tropical sea will grow upward as the island subsides (sinks), becoming an "almost atoll", or barrier reef island, as typified by an island such as Aitutaki in the Cook Islands, and Bora Bora and others in the Society Islands. The fringing reef becomes a barrier reef for the reason that the outer part of the reef maintains itself near sea level through biotic growth, while the inner part of the reef falls behind, becoming a lagoon because conditions are less favorable for the coral and calcareous algae responsible for most reef growth. In time, subsidence carries the old volcano below the ocean surface and the barrier reef remains. At this point, the island has become an atoll.
As formulated by J. E. Hoffmeister, F. S. McNeil, E. G. Purdy, and others, the antecedent karst model argues that atolls are Pleistocene features: the direct result of the interaction between subsidence and preferential karst dissolution in the interior of flat-topped coral reefs during subaerial exposure at glacial lowstands of sea level. The elevated rims created by this preferential karst dissolution become the sites of coral growth, and of atoll islands, when flooded during interglacial highstands.
The research of A. W. Droxler and others supports the antecedent karst model: they found that the morphology of modern atolls is independent of any influence from an underlying submerged and buried island, and is not rooted to an initial fringing or barrier reef attached to a slowly subsiding volcanic edifice. The Neogene reefs underlying the studied modern atolls, which overlie and completely bury the subsided islands, are all non-atoll, flat-topped reefs. They also found that atolls did not form during the subsidence of an island until MIS-11 (the Mid-Brunhes event), long after many of the former islands had been completely submerged and buried by flat-topped reefs during the Neogene.
Atolls are the product of the growth of tropical marine organisms, and so these islands are found only in warm tropical waters. Volcanic islands located beyond the warm water temperature requirements of hermatypic (reef-building) organisms become seamounts as they subside, and are eroded away at the surface. An island that is located where the ocean water temperatures are just sufficiently warm for upward reef growth to keep pace with the rate of subsidence is said to be at the Darwin Point. Islands in colder, more polar regions evolve toward seamounts or guyots; warmer, more equatorial islands evolve toward atolls, for example Kure Atoll. However, ancient atolls during the Mesozoic appear to exhibit different growth and evolution patterns.
Coral atolls are important as sites where dolomitization of calcite occurs. Several models have been proposed for the dolomitization of calcite and aragonite within them. They are the evaporative, seepage-reflux, mixing-zone, burial, and seawater models. Although the origin of replacement dolomites remains problematic and controversial, it is generally accepted that seawater was the source of magnesium for dolomitization and the fluid in which calcite was dolomitized to form the dolomites found within atolls. Various processes have been invoked to drive large amounts of seawater through an atoll in order for dolomitization to occur.
Investigation by the Royal Society of London
In 1896, 1897 and 1898, the Royal Society of London carried out drilling on Funafuti atoll in Tuvalu for the purpose of investigating the formation of coral reefs. They wanted to determine whether traces of shallow water organisms could be found at depth in the coral of Pacific atolls. This investigation followed the work on the structure and distribution of coral reefs conducted by Charles Darwin in the Pacific.
The first expedition in 1896 was led by Professor William Johnson Sollas of the University of Oxford. Geologists included Walter George Woolnough and Edgeworth David of the University of Sydney. Professor Edgeworth David led the expedition in 1897. The third expedition in 1898 was led by Alfred Edmund Finckh.
See also
Baratal limestone, sometimes described as the oldest known atoll
Coral island
References
Inline citations
Sources
Dobbs, David (2005). Reef Madness: Charles Darwin, Alexander Agassiz, and the Meaning of Coral. Pantheon.
Fairbridge, R. W. (July 1950). "Recent and Pleistocene Coral Reefs of Australia". J. Geol., 58(4: Reef Issue): 330–401.
McNeil, F. S. (July 1954). "Organic Reefs and Banks and Associated Detrital Sediments". Amer. J. Sci., 252(7): 385–401.
External links
Formation of Bermuda reefs
Darwin's Volcano – A short video discussing Darwin and Agassiz' coral reef formation debate
NOAA National Ocean Service Education – Coral Atoll Animation
NOAA National Ocean Service – What are the three main types of coral reefs?
Research Article: Predicting Coral Recruitment in Palau's Complex Reef Archipelago;
World Atolls, Goldberg 2016: A global map containing all atolls
Biogeomorphology
Coastal and oceanic landforms
Islands by type
Oceanographical terminology | Atoll | [
"Biology"
] | 2,414 | [
"Biogeomorphology"
] |
47,487 | https://en.wikipedia.org/wiki/Azimuth | An azimuth (; from ) is the horizontal angle from a cardinal direction, most commonly north, in a local or observer-centric spherical coordinate system.
Mathematically, the relative position vector from an observer (origin) to a point of interest is projected perpendicularly onto a reference plane (the horizontal plane); the angle between the projected vector and a reference vector on the reference plane is called the azimuth.
When used as a celestial coordinate, the azimuth is the horizontal direction of a star or other astronomical object in the sky. The star is the point of interest, the reference plane is the local area (e.g. a circular area with a 5 km radius at sea level) around an observer on Earth's surface, and the reference vector points to true north. The azimuth is the angle between the north vector and the star's vector on the horizontal plane.
Azimuth is usually measured in degrees (°), in the positive range 0° to 360° or in the signed range -180° to +180°. The concept is used in navigation, astronomy, engineering, mapping, mining, and ballistics.
Etymology
The word azimuth is used in all European languages today. It originates from medieval Arabic السموت (al-sumūt, pronounced as-sumūt), meaning "the directions" (plural of Arabic السمت al-samt = "the direction"). The Arabic word entered late medieval Latin in an astronomy context and in particular in the use of the Arabic version of the astrolabe astronomy instrument. Its first recorded use in English is in the 1390s in Geoffrey Chaucer's Treatise on the Astrolabe. The first known record in any Western language is in Spanish in the 1270s in an astronomy book that was largely derived from Arabic sources, the Libros del saber de astronomía commissioned by King Alfonso X of Castile.
In astronomy
In the horizontal coordinate system, used in celestial navigation, azimuth is one of the two coordinates. The other is altitude, sometimes called elevation above the horizon.
It is also used for satellite dish installation (see also: sat finder).
In modern astronomy azimuth is nearly always measured from the north.
In navigation
In land navigation, azimuth is usually denoted alpha, α, and defined as a horizontal angle measured clockwise from a north base line or meridian. Azimuth has also been more generally defined as a horizontal angle measured clockwise from any fixed reference plane or easily established base direction line.
Today, the reference direction for an azimuth is typically true north, measured as a 0° azimuth; the angle is usually stated in degrees, though other angular units (grad, mil) can be used. Moving clockwise on a 360 degree circle, east has azimuth 90°, south 180°, and west 270°. There are exceptions: some navigation systems use south as the reference vector. Any direction can be the reference vector, as long as it is clearly defined.
Quite commonly, azimuths or compass bearings are stated in a system in which either north or south can be the zero, and the angle may be measured clockwise or anticlockwise from the zero. For example, a bearing might be described as "(from) south, (turn) thirty degrees (toward the) east" (the words in brackets are usually omitted), abbreviated "S30°E", which is the bearing 30 degrees in the eastward direction from south, i.e. the bearing 150 degrees clockwise from north. The reference direction, stated first, is always north or south, and the turning direction, stated last, is east or west. The directions are chosen so that the angle, stated between them, is positive, between zero and 90 degrees. If the bearing happens to be exactly in the direction of one of the cardinal points, a different notation, e.g. "due east", is used instead.
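A minimal Python sketch of this quadrant-bearing convention, converting a bearing such as S30°E into a 0–360° azimuth; the function name and interface are illustrative assumptions, not any standard library API:

```python
def quadrant_bearing_to_azimuth(ref: str, angle: float, turn: str) -> float:
    """Convert a quadrant bearing (e.g. ref='S', angle=30, turn='E',
    i.e. "S30E") into an azimuth in degrees clockwise from north."""
    ref, turn = ref.upper(), turn.upper()
    if not 0.0 <= angle <= 90.0:
        raise ValueError("quadrant bearings use angles between 0 and 90 degrees")
    if ref == "N":
        azimuth = angle if turn == "E" else 360.0 - angle
    elif ref == "S":
        azimuth = 180.0 - angle if turn == "E" else 180.0 + angle
    else:
        raise ValueError("reference direction must be N or S")
    return azimuth % 360.0

# "S30E" is the bearing 150 degrees clockwise from north, as in the text
assert quadrant_bearing_to_azimuth("S", 30, "E") == 150.0
```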
In geodesy
We are standing at latitude φ1, longitude zero; we want to find the azimuth from our viewpoint to Point 2 at latitude φ2, longitude L (positive eastward). We can get a fair approximation by assuming the Earth is a sphere, in which case the azimuth α is given by

tan α = sin L / (cos φ1 tan φ2 − sin φ1 cos L)
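A short Python sketch of this spherical approximation, using atan2, which is equivalent to the tangent formula above but resolves the correct quadrant automatically; the function name is an illustrative assumption:

```python
from math import atan2, cos, degrees, radians, sin, tan

def spherical_azimuth(phi1_deg: float, phi2_deg: float, L_deg: float) -> float:
    """Azimuth (degrees clockwise from north) from a viewpoint at latitude
    phi1, longitude zero, to Point 2 at latitude phi2, longitude L
    (positive eastward), assuming a spherical Earth."""
    phi1, phi2, L = radians(phi1_deg), radians(phi2_deg), radians(L_deg)
    # tan(alpha) = sin L / (cos(phi1) tan(phi2) - sin(phi1) cos L)
    alpha = atan2(sin(L), cos(phi1) * tan(phi2) - sin(phi1) * cos(L))
    return degrees(alpha) % 360.0  # normalize to [0, 360)

# From the equator toward a point due east on the equator: azimuth 90 degrees
assert round(spherical_azimuth(0.0, 0.0, 90.0), 6) == 90.0
```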
A better approximation assumes the Earth is a slightly-squashed sphere (an oblate spheroid); azimuth then has at least two very slightly different meanings. Normal-section azimuth is the angle measured at our viewpoint by a theodolite whose axis is perpendicular to the surface of the spheroid; geodetic azimuth (or geodesic azimuth) is the angle between north and the ellipsoidal geodesic (the shortest path on the surface of the spheroid from our viewpoint to Point 2). The difference is usually negligible: less than 0.03 arc second for distances less than 100 km.
Normal-section azimuth can be calculated as follows:
where f is the flattening and e the eccentricity for the chosen spheroid (e.g., for WGS84).
If φ1 = 0 then

tan α = sin L / ((1 − e²) tan φ2)
To calculate the azimuth of the Sun or a star given its declination and hour angle at a specific location, modify the formula for a spherical Earth. Replace φ2 with declination and longitude difference with hour angle, and change the sign (since the hour angle is positive westward instead of east).
In cartography
The cartographical azimuth or grid azimuth (in decimal degrees) can be calculated when the coordinates of 2 points are known in a flat plane (cartographical coordinates):

α = (180/π) atan2(X2 − X1, Y2 − Y1)
Remark that the reference axes are swapped relative to the (counterclockwise) mathematical polar coordinate system and that the azimuth is clockwise relative to north. This is the reason why the X and Y axes in the above formula are swapped.
If the azimuth becomes negative, one can always add 360°.
The formula in radians would be slightly easier:

α = atan2(X2 − X1, Y2 − Y1)
Note the swapped (X, Y) input order, in contrast to the normal atan2(y, x) convention.
The opposite problem occurs when the coordinates (X1, Y1) of one point, the distance D, and the azimuth α to another point (X2, Y2) are known; one can then calculate its coordinates:

X2 = X1 + D sin α
Y2 = Y1 + D cos α
This is typically used in triangulation and azimuth identification (AzID), especially in radar applications.
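Both directions of this calculation in a brief Python sketch (the names are illustrative; note the swapped atan2 argument order discussed above):

```python
from math import atan2, cos, degrees, radians, sin

def grid_azimuth(x1: float, y1: float, x2: float, y2: float) -> float:
    """Grid azimuth in decimal degrees, clockwise from grid north (+Y).
    The arguments to atan2 are (delta X, delta Y), swapped relative to
    the usual mathematical atan2(y, x) convention, as noted in the text."""
    return degrees(atan2(x2 - x1, y2 - y1)) % 360.0

def point_from_azimuth(x1: float, y1: float, distance: float,
                       azimuth_deg: float) -> tuple:
    """The opposite problem: coordinates of the point lying at the given
    distance and azimuth from (x1, y1)."""
    a = radians(azimuth_deg)
    return x1 + distance * sin(a), y1 + distance * cos(a)

# A point to the north-east has azimuth 45 degrees; due west is 270 degrees
assert grid_azimuth(0, 0, 1, 1) == 45.0
assert grid_azimuth(0, 0, -1, 0) == 270.0
```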
Map projections
There is a wide variety of azimuthal map projections. They all have the property that directions (the azimuths) from a central point are preserved. Some navigation systems use south as the reference direction. However, any direction can serve as the reference, as long as it is clearly defined for everyone using that system.
Related coordinates
Right ascension
If, instead of measuring from and along the horizon, the angles are measured from and along the celestial equator, the angles are called right ascension if referenced to the Vernal Equinox, or hour angle if referenced to the celestial meridian.
Polar coordinate
In mathematics, the azimuth angle of a point in cylindrical coordinates or spherical coordinates is the anticlockwise angle between the positive x-axis and the projection of the vector onto the xy-plane. A special case of an azimuth angle is the angle in polar coordinates of the component of the vector in the xy-plane, although this angle is normally measured in radians rather than degrees and denoted by θ rather than φ.
Other uses
For magnetic tape drives, azimuth refers to the angle between the tape head(s) and tape.
In sound localization experiments and literature, the azimuth refers to the angle the sound source makes compared to the imaginary straight line that is drawn from within the head through the area between the eyes.
An azimuth thruster in shipbuilding is a propeller that can be rotated horizontally.
See also
Altitude (astronomy)
Angular displacement
Azimuthal quantum number
Azimuthal equidistant projection
Azimuth recording
Bearing (navigation)
Clock position
Course (navigation)
Inclination
Longitude
Latitude
Magnetic declination
Panning (camera)
Relative bearing
Sextant
Solar azimuth angle
Sound Localization
Zenith
References
Further reading
Rutstrum, Carl, The Wilderness Route Finder, University of Minnesota Press (2000),
External links
Angle
Navigation
Surveying
Horizontal coordinate system
Geodesy
Cartography | Azimuth | [
"Physics",
"Astronomy",
"Mathematics",
"Engineering"
] | 1,733 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Applied mathematics",
"Horizontal coordinate system",
"Astronomical coordinate systems",
"Surveying",
"Civil engineering",
"Wikipedia categories named after physical quantities",
"Angle",
"Geodesy"
] |
47,488 | https://en.wikipedia.org/wiki/Barometer | A barometer is a scientific instrument that is used to measure air pressure in a certain environment. Pressure tendency can forecast short term changes in the weather. Many measurements of air pressure are used within surface weather analysis to help find surface troughs, pressure systems and frontal boundaries.
Barometers and pressure altimeters (the most basic and common type of altimeter) are essentially the same instrument, but used for different purposes. An altimeter is intended to be used at different levels matching the corresponding atmospheric pressure to the altitude, while a barometer is kept at the same level and measures subtle pressure changes caused by weather and elements of weather. The average atmospheric pressure on the Earth's surface varies between 940 and 1040 hPa (mbar). The average atmospheric pressure at sea level is 1013 hPa (mbar).
Etymology
The word barometer is derived from the Ancient Greek (), meaning "weight", and (), meaning "measure".
History
Evangelista Torricelli is usually credited with inventing the barometer in 1643, although the historian W. E. Knowles Middleton suggests the more likely date is 1644 (when Torricelli first reported his experiments; the 1643 date was only suggested after his death). Gasparo Berti, an Italian mathematician and astronomer, also built a rudimentary water barometer sometime between 1640 and 1644, but it was not a true barometer as it was not intended to move and record variable air pressure. French scientist and philosopher René Descartes described the design of an experiment to determine atmospheric pressure as early as 1631, but there is no evidence that he built a working barometer at that time.
Baliani's siphon experiment
On 27 July 1630, Giovanni Battista Baliani wrote a letter to Galileo Galilei explaining an experiment he had made in which a siphon, led over a hill about 21 m high, failed to work. When the end of the siphon was opened in a reservoir, the water level in that limb would sink to about 10 m above the reservoir. Galileo responded with an explanation of the phenomenon: he proposed that it was the power of a vacuum that held the water up, and at a certain height the amount of water simply became too much and the force could not hold any more, like a cord that can support only so much weight. This was a restatement of the theory of horror vacui ("nature abhors a vacuum"), which dates to Aristotle, and which Galileo restated as resistenza del vacuo.
Berti's vacuum experiment
Galileo's ideas, presented in his Discorsi (Two New Sciences), reached Rome in December 1638. Physicists Gasparo Berti and father Raffaello Magiotti were excited by these ideas, and decided to seek a better way to attempt to produce a vacuum other than with a siphon. Magiotti devised such an experiment. Four accounts of the experiment exist, all written some years later. No exact date was given, but since Two New Sciences reached Rome in December 1638, and Berti died before January 2, 1644, science historian W. E. Knowles Middleton places the event to sometime between 1639 and 1643. Present were Berti, Magiotti, Jesuit polymath Athanasius Kircher, and Jesuit physicist Niccolò Zucchi.
In brief, Berti's experiment consisted of filling with water a long tube that had both ends plugged, then standing the tube in a basin of water. The bottom end of the tube was opened, and water that had been inside of it poured out into the basin. However, only part of the water in the tube flowed out, and the level of the water inside the tube stayed at an exact level, which happened to be the same height limit Baliani had observed in the siphon. What was most important about this experiment was that the lowering water had left a space above it in the tube which had no intermediate contact with air to fill it up. This seemed to suggest the possibility of a vacuum existing in the space above the water.
Evangelista Torricelli
Evangelista Torricelli, a friend and student of Galileo, interpreted the results of the experiments in a novel way. He proposed that the weight of the atmosphere, not an attracting force of the vacuum, held the water in the tube. In a letter to Michelangelo Ricci in 1644 concerning the experiments, he wrote:
Many have said that a vacuum does not exist, others that it does exist in spite of the repugnance of nature and with difficulty; I know of no one who has said that it exists without difficulty and without a resistance from nature. I argued thus: If there can be found a manifest cause from which the resistance can be derived which is felt if we try to make a vacuum, it seems to me foolish to try to attribute to vacuum those operations which follow evidently from some other cause; and so by making some very easy calculations, I found that the cause assigned by me (that is, the weight of the atmosphere) ought by itself alone to offer a greater resistance than it does when we try to produce a vacuum.
It was traditionally thought, especially by the Aristotelians, that the air did not have weight; that is, that the kilometers of air above the surface of the Earth did not exert any weight on the bodies below it. Even Galileo had accepted the weightlessness of air as a simple truth. Torricelli proposed that rather than an attractive force of the vacuum sucking up water, air did indeed have weight, which pushed on the water, holding up a column of it. He argued that the level that the water stayed at—c. 10.3 m above the water surface below—was reflective of the force of the air's weight pushing on the water in the basin, setting a limit for how far down the water level could sink in a tall, closed, water-filled tube. He viewed the barometer as a balance—an instrument for measurement—as opposed to merely an instrument for creating a vacuum, and since he was the first to view it this way, he is traditionally considered the inventor of the barometer, in the sense in which we now use the term.
Torricelli's mercury barometer
Because of rumors circulating in Torricelli's gossipy Italian neighbourhood, which included that he was engaged in some form of sorcery or witchcraft, Torricelli realized he had to keep his experiment secret to avoid the risk of being arrested. He needed to use a liquid that was heavier than water, and from his previous association and suggestions by Galileo, he deduced that by using mercury, a shorter tube could be used. With mercury, which is about 14 times denser than water, a tube only about 80 cm long was now needed, rather than 10.5 m.
Blaise Pascal
In 1646, Blaise Pascal along with Pierre Petit, had repeated and perfected Torricelli's experiment after hearing about it from Marin Mersenne, who himself had been shown the experiment by Torricelli toward the end of 1644. Pascal further devised an experiment to test the Aristotelian proposition that it was vapours from the liquid that filled the space in a barometer. His experiment compared water with wine, and since the latter was considered more "spiritous", the Aristotelians expected the wine to stand lower (since more vapours would mean more pushing down on the liquid column). Pascal performed the experiment publicly, inviting the Aristotelians to predict the outcome beforehand. The Aristotelians predicted the wine would stand lower. It did not.
First atmospheric pressure vs. altitude experiment
However, Pascal went even further to test the mechanical theory. If, as suspected by mechanical philosophers like Torricelli and Pascal, air had weight, the pressure would be less at higher altitudes. Therefore, Pascal wrote to his brother-in-law, Florin Perier, who lived near a mountain called the Puy de Dôme, asking him to perform a crucial experiment. Perier was to take a barometer up the Puy de Dôme and make measurements along the way of the height of the column of mercury. He was then to compare it to measurements taken at the foot of the mountain to see if those measurements taken higher up were in fact smaller. In September 1648, Perier carefully and meticulously carried out the experiment, and found that Pascal's predictions had been correct. The column of mercury stood lower as the barometer was carried to a higher altitude.
Types
Water barometers
The concept that decreasing atmospheric pressure predicts stormy weather, postulated by Lucien Vidi, provides the theoretical basis for a weather prediction device called a "weather glass" or a "Goethe barometer" (named for Johann Wolfgang von Goethe, the renowned German writer and polymath who developed a simple but effective weather ball barometer using the principles developed by Torricelli). The French name, le baromètre Liègeois, is used by some English speakers. This name reflects the origins of many early weather glasses – the glass blowers of Liège, Belgium.
The weather ball barometer consists of a glass container with a sealed body, half filled with water. A narrow spout connects to the body below the water level and rises above the water level. The narrow spout is open to the atmosphere. When the air pressure is lower than it was at the time the body was sealed, the water level in the spout will rise above the water level in the body; when the air pressure is higher, the water level in the spout will drop below the water level in the body. A variation of this type of barometer can be easily made at home.
Mercury barometers
A mercury barometer is an instrument used to measure atmospheric pressure in a certain location and has a vertical glass tube closed at the top sitting in an open mercury-filled basin at the bottom. Mercury in the tube adjusts until its weight balances the atmospheric force exerted on the reservoir. High atmospheric pressure places more force on the reservoir, forcing mercury higher in the column. Low pressure allows the mercury to drop to a lower level in the column by lowering the force placed on the reservoir. Since higher temperature levels around the instrument will reduce the density of the mercury, the scale for reading the height of the mercury is adjusted to compensate for this effect. The tube has to be at least as long as the depth dipping into the mercury, plus the head space, plus the maximum length of the column.
Torricelli documented that the height of the mercury in a barometer changed slightly each day and concluded that this was due to the changing pressure in the atmosphere. He wrote: "We live submerged at the bottom of an ocean of elementary air, which is known by incontestable experiments to have weight". Inspired by Torricelli, Otto von Guericke on 5 December 1660 found that air pressure was unusually low and predicted a storm, which occurred the next day.
The mercury barometer's design gives rise to the expression of atmospheric pressure in inches or millimeters of mercury (mmHg). A torr was originally defined as 1 mmHg. The pressure is quoted as the level of the mercury's height in the vertical column. Typically, atmospheric pressure is measured between and of Hg. One atmosphere (1 atm) is equivalent to of mercury.
Design changes to make the instrument more sensitive, simpler to read, and easier to transport resulted in variations such as the basin, siphon, wheel, cistern, Fortin, multiple folded, stereometric, and balance barometers.
In 2007, a European Union directive was enacted to restrict the use of mercury in new measuring instruments intended for the general public, effectively ending the production of new mercury barometers in Europe. The repair and trade of antiques (produced before late 1957) remained unrestricted.
Fitzroy barometer
Fitzroy barometers combine the standard mercury barometer with a thermometer, as well as a guide of how to interpret pressure changes.
Fortin barometer
Fortin barometers use a variable displacement mercury cistern, usually constructed with a thumbscrew pressing on a leather diaphragm bottom (V in the diagram). This compensates for displacement of mercury in the column with varying pressure. To use a Fortin barometer, the level of mercury is set to zero by using the thumbscrew to make an ivory pointer (O in the diagram) just touch the surface of the mercury. The pressure is then read on the column by adjusting the vernier scale so that the mercury just touches the sightline at Z. Some models also employ a valve for closing the cistern, enabling the mercury column to be forced to the top of the column for transport. This prevents water-hammer damage to the column in transit.
Sympiesometer
A sympiesometer is a compact and lightweight barometer that was widely used on ships in the early 19th century. The sensitivity of this barometer was also used to measure altitude.
Sympiesometers have two parts. One is a traditional mercury thermometer that is needed to calculate the expansion or contraction of the fluid in the barometer. The other is the barometer, consisting of a J-shaped tube open at the lower end and closed at the top, with small reservoirs at both ends of the tube.
Wheel barometers
A wheel barometer uses a "J" tube sealed at the top of the longer limb. The shorter limb is open to the atmosphere, and floating on top of the mercury there is a small glass float. A fine silken thread is attached to the float which passes up over a wheel and then back down to a counterweight (usually protected in another tube). The wheel turns the point on the front of the barometer. As atmospheric pressure increases, mercury moves from the short to the long limb, the float falls, and the pointer moves. When pressure falls, the mercury moves back, lifting the float and turning the dial the other way.
Around 1810 the wheel barometer, which could be read from a great distance, became the first practical and commercial instrument favoured by farmers and the educated classes in the UK. The face of the barometer was circular with a simple dial pointing to an easily readable scale: "Rain - Change - Dry" with the "Change" at the top centre of the dial. Later models added a barometric scale with finer graduations: "Stormy (28 inches of mercury), Much Rain (28.5), Rain (29), Change (29.5), Fair (30), Set Fair (30.5), Very Dry (31)".
Natalo Aiano is recognised as one of the finest makers of wheel barometers, an early pioneer in a wave of artisanal Italian instrument and barometer makers who were encouraged to emigrate to the UK. He is listed as working in Holborn, London, until 1805. From 1770 onwards, a large number of Italians came to England because they were accomplished glass blowers or instrument makers. By 1840 it was fair to say that the Italians dominated the industry in England.
Vacuum pump oil barometer
Using vacuum pump oil as the working fluid in a barometer has led to the creation of the new "World's Tallest Barometer" in February 2013. The barometer at Portland State University (PSU) uses doubly distilled vacuum pump oil and has a nominal height of about 12.4 m for the oil column height; expected excursions are in the range of ±0.4 m over the course of a year. Vacuum pump oil has very low vapour pressure and is available in a range of densities; the lowest density vacuum oil was chosen for the PSU barometer to maximize the oil column height.
Aneroid barometers
An aneroid barometer is an instrument used for measuring air pressure via a method that does not involve liquid. Invented in 1844 by French scientist Lucien Vidi, the aneroid barometer uses a small, flexible metal box called an aneroid cell (capsule), which is made from an alloy of beryllium and copper. The evacuated capsule (or usually several capsules, stacked to add up their movements) is prevented from collapsing by a strong spring. Small changes in external air pressure cause the cell to expand or contract. This expansion and contraction drives mechanical levers such that the tiny movements of the capsule are amplified and displayed on the face of the aneroid barometer. Many models include a manually set needle which is used to mark the current measurement so that a relative change can be seen. This type of barometer is common in homes and in recreational boats. It is also used in meteorology, mostly in barographs, and as a pressure instrument in radiosondes.
Barographs
A barograph is a recording aneroid barometer where the changes in atmospheric pressure are recorded on a paper chart.
The principle of the barograph is the same as that of the aneroid barometer. Whereas the barometer displays the pressure on a dial, the barograph uses a system of levers to transmit the small movements of the box to a recording arm that has at its extreme end either a scribe or a pen. A scribe records on smoked foil while a pen records on paper using ink, held in a nib. The recording material is mounted on a cylindrical drum which is rotated slowly by a clock. Commonly, the drum makes one revolution per day, per week, or per month, and the rotation rate can often be selected by the user.
MEMS barometers
Microelectromechanical systems (or MEMS) barometers are extremely small devices between 1 and 100 micrometres in size (0.001 to 0.1 mm). They are created via photolithography or photochemical machining. Typical applications include miniaturized weather stations, electronic barometers and altimeters.
A barometer can also be found in smartphones such as the Samsung Galaxy Nexus, Samsung Galaxy S3-S6, Motorola Xoom, Apple iPhone 6 and newer iPhones, and Timex Expedition WS4 smartwatch, based on MEMS and piezoresistive pressure-sensing technologies. Inclusion of barometers on smartphones was originally intended to provide a faster GPS lock. However, third party researchers were unable to confirm additional GPS accuracy or lock speed due to barometric readings. The researchers suggest that the inclusion of barometers in smartphones may provide a solution for determining a user's elevation, but also suggest that several pitfalls must first be overcome.
More unusual barometers
There are many other more unusual types of barometer. From variations on the storm barometer, such as the Collins Patent Table Barometer, to more traditional-looking designs such as Hooke's Otheometer and the Ross Sympiesometer. Some, such as the Shark Oil barometer, work only in a certain temperature range, achieved in warmer climates.
Applications
Barometric pressure and the pressure tendency (the change of pressure over time) have been used in weather forecasting since the late 19th century. When used in combination with wind observations, reasonably accurate short-term forecasts can be made. Simultaneous barometric readings from across a network of weather stations allow maps of air pressure to be produced, which were the first form of the modern weather map when created in the 19th century. Isobars, lines of equal pressure, when drawn on such a map, give a contour map showing areas of high and low pressure. Localized high atmospheric pressure acts as a barrier to approaching weather systems, diverting their course. Atmospheric lift caused by low-level wind convergence into a surface low brings clouds and sometimes precipitation. The larger the change in pressure, especially if more than 3.5 hPa (0.1 inHg), the greater the change in weather that can be expected. If the pressure drop is rapid, a low pressure system is approaching, and there is a greater chance of rain. Rapid pressure rises, such as in the wake of a cold front, are associated with improving weather conditions, such as clearing skies.
With falling air pressure, gases trapped within the coal in deep mines can escape more freely. Thus low pressure increases the risk of firedamp accumulating. Collieries therefore keep track of the pressure. In the case of the Trimdon Grange colliery disaster of 1882 the mines inspector drew attention to the records and in the report stated "the conditions of atmosphere and temperature may be taken to have reached a dangerous point".
Aneroid barometers are used in scuba diving. A submersible pressure gauge is used to keep track of the contents of the diver's air tank. Another gauge is used to measure the hydrostatic pressure, usually expressed as a depth of sea water. Either or both gauges may be replaced with electronic variants or a dive computer.
Compensations
Temperature
The density of mercury will change with increase or decrease in temperature, so a reading must be adjusted for the temperature of the instrument. For this purpose a mercury thermometer is usually mounted on the instrument. Temperature compensation of an aneroid barometer is accomplished by including a bi-metal element in the mechanical linkages. Aneroid barometers sold for domestic use typically have no compensation under the assumption that they will be used within a controlled room temperature range.
Altitude
As the air pressure decreases at altitudes above sea level (and increases below sea level) the uncorrected reading of the barometer will depend on its location. The reading is then adjusted to an equivalent sea-level pressure for purposes of reporting. For example, if a barometer located at sea level and under fair weather conditions is moved to an altitude of 1,000 feet (305 m), about 1 inch of mercury (~35 hPa) must be added on to the reading. The barometer readings at the two locations should be the same if there are negligible changes in time, horizontal distance, and temperature. If this were not done, there would be a false indication of an approaching storm at the higher elevation.
Aneroid barometers have a mechanical adjustment that allows the equivalent sea level pressure to be read directly and without further adjustment if the instrument is not moved to a different altitude. Setting an aneroid barometer is similar to resetting an analog clock that is not at the correct time. Its dial is rotated so that the current atmospheric pressure from a known accurate and nearby barometer (such as the local weather station) is displayed. No calculation is needed, as the source barometer reading has already been converted to equivalent sea-level pressure, and this is transferred to the barometer being set—regardless of its altitude. Though somewhat rare, a few aneroid barometers intended for monitoring the weather are calibrated to manually adjust for altitude. In this case, knowing either the altitude or the current atmospheric pressure would be sufficient for future accurate readings.
Corrected barometer readings for three locations in the city of San Francisco, California, are identical when based on equivalent sea-level pressure (assuming a temperature of 15 °C).
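As a rough numeric illustration of this correction, here is a Python sketch using the standard-atmosphere lapse-rate formula. This is a simplified reduction under assumed standard conditions, not the full procedure used by weather services, and the function name is illustrative:

```python
def sea_level_pressure(station_hpa: float, altitude_m: float,
                       temp_c: float = 15.0) -> float:
    """Reduce a station pressure reading to equivalent sea-level pressure
    using the standard-atmosphere formula (lapse rate 0.0065 K/m,
    exponent of about 5.257). An approximation only."""
    t0 = temp_c + 273.15
    return station_hpa * (1.0 - 0.0065 * altitude_m /
                          (t0 + 0.0065 * altitude_m)) ** -5.257

# About 979 hPa measured at 305 m (1,000 ft) reduces to roughly 1015 hPa,
# consistent with the ~1 inHg (~35 hPa) correction mentioned above.
print(round(sea_level_pressure(979.0, 305.0), 1))
```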
In 1787, during a scientific expedition on Mont Blanc, De Saussure undertook research and executed physical experiments on the boiling point of water at different heights. He calculated the height at each of his experiments by measuring how long it took an alcohol burner to boil an amount of water, and by these means he determined the height of the mountain to be 4775 metres. (This later turned out to be 32 metres less than the actual height of 4807 metres). For these experiments De Saussure brought specific scientific equipment, such as a barometer and thermometer. His calculated boiling temperature of water at the top of the mountain was fairly accurate, only off by 0.1 kelvin.
Based on his findings, the altimeter could be developed as a specific application of the barometer. In the mid-19th century, this method was used by explorers.
Equation
When atmospheric pressure is measured by a barometer, the pressure is also referred to as the "barometric pressure". Assume a barometer with a cross-sectional area A, a height h, filled with mercury from the bottom at Point B to the top at Point C. The pressure at the bottom of the barometer, Point B, is equal to the atmospheric pressure. The pressure at the very top, Point C, can be taken as zero because there is only mercury vapour above this point and its pressure is very low relative to the atmospheric pressure. Therefore, one can find the atmospheric pressure using the barometer and this equation:
Patm = ρgh
where ρ is the density of mercury, g is the gravitational acceleration, and h is the height of the mercury column above the free surface area. The physical dimensions (length of tube and cross-sectional area of the tube) of the barometer itself have no effect on the height of the fluid column in the tube.
In thermodynamic calculations, a commonly used pressure unit is the "standard atmosphere". This is the pressure resulting from a column of mercury of 760 mm in height at 0 °C. For the density of mercury, use ρHg = 13,595 kg/m3 and for gravitational acceleration use g = 9.807 m/s2.
If water were used (instead of mercury) to meet the standard atmospheric pressure, a water column of roughly 10.3 m (33.8 ft) would be needed.
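A minimal numeric check of this relation in Python, using the constants quoted above (the variable names are illustrative):

```python
RHO_MERCURY = 13_595.0  # kg/m^3, density of mercury at 0 deg C (from the text)
RHO_WATER = 1_000.0     # kg/m^3, approximate density of water
G = 9.807               # m/s^2, gravitational acceleration (from the text)

def column_pressure(rho: float, height_m: float) -> float:
    """Pressure at the base of a fluid column: P = rho * g * h, in pascals."""
    return rho * G * height_m

p_atm = column_pressure(RHO_MERCURY, 0.760)  # a 760 mm mercury column
print(round(p_atm))  # ~101,328 Pa, i.e. about one standard atmosphere

# Height of a water column balancing the same pressure: ~10.3 m, as stated
print(round(p_atm / (RHO_WATER * G), 1))
```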
Standard atmospheric pressure decreases as a function of elevation.
Note: 1 torr = 133.3 Pa = 0.03937 inHg
See also
Altimeter
Anemoscope
Automated airport weather station
Barograph
Barometer question
Bert Bolle Barometer
Microbarometer
Storm glass
Surface weather analysis
Tempest prognosticator
Units of pressure
Pressure sensor
Weather forecasting
Zambretti Forecaster
References
Further reading
Burch, David F. The Barometer Handbook: A Modern Look at Barometers and Applications of Barometric Pressure. Seattle: Starpath Publications (2009), .
Middleton, W. E. Knowles. (1964). The History of the Barometer. Baltimore: Johns Hopkins Press. New edition (2002), .
Patents
: C. J. Ulrich: "Barometric instrument"
: H. J. Frank: "Barometric altimeter"
: D. C. W. T. Sharp: "Aneroid barometer"
: H. A. Klumb: "Motion amplifying mechanism for pressure responsive instrument movement"
: F. Lissau: "Fluid displacement pressure gauges"
: O. S. Sormunen: "Pressure measuring instrument"
: H. Dostmann: "Barometer"
: T. Fijimoto: "Weather forecasting device"
External links
1643 in science
17th-century inventions
Glass applications
Italian inventions
Meteorological instrumentation and equipment
Pressure gauges
Atmospheric pressure | Barometer | [
"Physics",
"Technology",
"Engineering"
] | 5,507 | [
"Meteorological instrumentation and equipment",
"Physical quantities",
"Measuring instruments",
"Meteorological quantities",
"Atmospheric pressure",
"Pressure gauges"
] |
47,490 | https://en.wikipedia.org/wiki/Biodegradation | Biodegradation is the breakdown of organic matter by microorganisms, such as bacteria and fungi. It is generally assumed to be a natural process, which differentiates it from composting. Composting is a human-driven process in which biodegradation occurs under a specific set of circumstances.
The process of biodegradation is threefold: first an object undergoes biodeterioration, which is the mechanical weakening of its structure; then follows biofragmentation, which is the breakdown of materials by microorganisms; and finally assimilation, which is the incorporation of the old material into new cells.
In practice, almost all chemical compounds and materials are subject to biodegradation, the key element being time. Things like vegetables may degrade within days, while glass and some plastics take many millennia to decompose. A standard for biodegradability used by the European Union is that greater than 90% of the original material must be converted into CO2, water and minerals by biological processes within 6 months.
Mechanisms
The process of biodegradation can be divided into three stages: biodeterioration, biofragmentation, and assimilation. Biodeterioration is sometimes described as a surface-level degradation that modifies the mechanical, physical and chemical properties of the material. This stage occurs when the material is exposed to abiotic factors in the outdoor environment and allows for further degradation by weakening the material's structure. Some abiotic factors that influence these initial changes are compression (mechanical), light, temperature and chemicals in the environment. While biodeterioration typically occurs as the first stage of biodegradation, it can in some cases be parallel to biofragmentation. Hueck, however, defined biodeterioration as the undesirable action of living organisms on Man's materials, covering such things as the breakdown of stone facades of buildings, the corrosion of metals by microorganisms, or merely the esthetic changes induced on man-made structures by the growth of living organisms.
Biofragmentation of a polymer is the lytic process in which bonds within a polymer are cleaved, generating oligomers and monomers in its place. The steps taken to fragment these materials also differ based on the presence of oxygen in the system. The breakdown of materials by microorganisms when oxygen is present is aerobic digestion, and the breakdown of materials when oxygen is not present is anaerobic digestion. The main difference between these processes is that anaerobic reactions produce methane, while aerobic reactions do not (however, both reactions produce carbon dioxide, water, some type of residue, and a new biomass). In addition, aerobic digestion typically occurs more rapidly than anaerobic digestion, while anaerobic digestion does a better job reducing the volume and mass of the material. Due to anaerobic digestion's ability to reduce the volume and mass of waste materials and produce a natural gas, anaerobic digestion technology is widely used for waste management systems and as a source of local, renewable energy.
In the assimilation stage, the resulting products from biofragmentation are then integrated into microbial cells. Some of the products from fragmentation are easily transported within the cell by membrane carriers. However, others still have to undergo biotransformation reactions to yield products that can then be transported inside the cell. Once inside the cell, the products enter catabolic pathways that either lead to the production of adenosine triphosphate (ATP) or elements of the cell's structure.
Aerobic biodegradation equation
C_polymer + O2 → C_residue + C_biomass + CO2 + H2O
Anaerobic biodegradation equation
C_polymer → C_residue + C_biomass + CO2 + CH4 + H2O
Factors affecting biodegradation rate
In practice, almost all chemical compounds and materials are subject to biodegradation processes. The significance, however, is in the relative rates of such processes, such as days, weeks, years or centuries. A number of factors determine the rate at which this degradation of organic compounds occurs. Factors include light, water, oxygen and temperature. The degradation rate of many organic compounds is limited by their bioavailability, which is the rate at which a substance is absorbed into a system or made available at the site of physiological activity, as compounds must be released into solution before organisms can degrade them. The rate of biodegradation can be measured in a number of ways. Respirometry tests can be used for aerobic microbes. First one places a solid waste sample in a container with microorganisms and soil, and then aerates the mixture. Over the course of several days, microorganisms digest the sample bit by bit and produce carbon dioxide – the resulting amount of CO2 serves as an indicator of degradation. Biodegradability can also be measured using anaerobic microbes, via the amount of methane that they are able to produce.
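A sketch of how such a respirometry measurement is commonly reduced to a percentage, by comparing the net CO2 evolved against the theoretical maximum yield of the sample's carbon content; this mirrors the approach of standardized aerobic tests, and the names and numbers here are illustrative assumptions:

```python
def percent_biodegradation(co2_test_g: float, co2_blank_g: float,
                           carbon_in_sample_g: float) -> float:
    """Aerobic biodegradation as a percentage: net CO2 evolved by the test
    material divided by its theoretical CO2 yield (ThCO2)."""
    # One gram of carbon can yield at most 44/12 grams of CO2
    th_co2 = carbon_in_sample_g * (44.0 / 12.0)
    return 100.0 * (co2_test_g - co2_blank_g) / th_co2

# Example: 3.0 g CO2 from the test vessel, 0.5 g from the blank control,
# for a sample containing 1.0 g of carbon -> about 68% biodegradation
print(round(percent_biodegradation(3.0, 0.5, 1.0), 1))
```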
It is important to note factors that affect biodegradation rates during product testing to ensure that the results produced are accurate and reliable. Several materials will test as being biodegradable under optimal conditions in a lab, but these results may not reflect real-world outcomes where factors are more variable. For example, a material that tested as biodegrading at a high rate in the lab may not degrade at a high rate in a landfill, because landfills often lack the light, water, and microbial activity that are necessary for degradation to occur. Thus, it is very important that there are standards for biodegradable plastic products, which have a large impact on the environment. The development and use of accurate standard test methods can help ensure that all plastics that are being produced and commercialized will actually biodegrade in natural environments. One test that has been developed for this purpose is DIN V 54900.
Plastics
The term Biodegradable Plastics refers to materials that maintain their mechanical strength during practical use but break down into low-weight compounds and non-toxic byproducts after their use. This breakdown is made possible through an attack of microorganisms on the material, which is typically a non-water-soluble polymer. Such materials can be obtained through chemical synthesis, fermentation by microorganisms, and from chemically modified natural products.
Plastics biodegrade at highly variable rates. PVC-based plumbing is selected for handling sewage because PVC resists biodegradation. Some packaging materials, on the other hand, are being developed that degrade readily upon exposure to the environment. Examples of synthetic polymers that biodegrade quickly include polycaprolactone, other polyesters and aromatic-aliphatic esters, due to their ester bonds being susceptible to attack by water. Prominent examples are poly-3-hydroxybutyrate and the renewably derived polylactic acid. Others are the cellulose-based cellulose acetate and celluloid (cellulose nitrate).
Under low oxygen conditions plastics break down more slowly. The breakdown process can be accelerated in a specially designed compost heap. Starch-based plastics will degrade within two to four months in a home compost bin, while polylactic acid is largely undecomposed, requiring higher temperatures. Polycaprolactone and polycaprolactone-starch composites decompose more slowly, but the starch content accelerates decomposition by leaving behind a porous, high surface area polycaprolactone. Nevertheless, it takes many months.
In 2016, a bacterium named Ideonella sakaiensis was found to biodegrade PET. In 2020, the PET degrading enzyme of the bacterium, PETase, has been genetically modified and combined with MHETase to break down PET faster, and also degrade PEF. In 2021, researchers reported that a mix of microorganisms from cow stomachs could break down three types of plastics.
Many plastic producers have gone so far even to say that their plastics are compostable, typically listing corn starch as an ingredient. However, these claims are questionable because the plastics industry operates under its own definition of compostable:
"that which is capable of undergoing biological decomposition in a compost site such that the material is not visually distinguishable and breaks down into carbon dioxide, water, inorganic compounds and biomass at a rate consistent with known compostable materials." (Ref: ASTM D 6002)
The term "composting" is often used informally to describe the biodegradation of packaging materials. Legal definitions exist for compostability, the process that leads to compost. Four criteria are offered by the European Union:
Chemical composition: volatile matter and heavy metals as well as fluorine should be limited.
Biodegradability: the conversion of >90% of the original material into CO2, water and minerals by biological processes within 6 months.
Disintegrability: at least 90% of the original mass should be decomposed into particles that are able to pass through a 2x2 mm sieve.
Quality: absence of toxic substances and other substances that impede composting.
Biodegradable technology
Biodegradable technology is established technology with some applications in product packaging, production, and medicine. The chief barrier to widespread implementation is the trade-off between biodegradability and performance. For example, lactide-based plastics have inferior packaging properties in comparison to traditional materials.
Oxo-biodegradation is defined by CEN (the European Standards Organisation) as "degradation resulting from oxidative and cell-mediated phenomena, either simultaneously or successively." While sometimes described as "oxo-fragmentable," and "oxo-degradable" these terms describe only the first or oxidative phase and should not be used for material which degrades by the process of oxo-biodegradation defined by CEN: the correct description is "oxo-biodegradable." Oxo-biodegradable formulations accelerate the biodegradation process but it takes considerable skill and experience to balance the ingredients within the formulations so as to provide the product with a useful life for a set period, followed by degradation and biodegradation.
Biodegradable technology is especially utilized by the bio-medical community. Biodegradable polymers are classified into three groups: medical, ecological, and dual application, while in terms of origin they are divided into two groups: natural and synthetic. The Clean Technology Group is exploiting the use of supercritical carbon dioxide, which under high pressure at room temperature is a solvent that can use biodegradable plastics to make polymer drug coatings. The polymer (meaning a material composed of molecules with repeating structural units that form a long chain) is used to encapsulate a drug prior to injection in the body and is based on lactic acid, a compound normally produced in the body, and is thus able to be excreted naturally. The coating is designed for controlled release over a period of time, reducing the number of injections required and maximizing the therapeutic benefit. Professor Steve Howdle states that biodegradable polymers are particularly attractive for use in drug delivery, as once introduced into the body they require no retrieval or further manipulation and are degraded into soluble, non-toxic by-products. Different polymers degrade at different rates within the body and therefore polymer selection can be tailored to achieve desired release rates.
Other biomedical applications include the use of biodegradable, elastic shape-memory polymers. Biodegradable implant materials can now be used for minimally invasive surgical procedures through degradable thermoplastic polymers. These polymers are now able to change their shape with increase of temperature, causing shape memory capabilities as well as easily degradable sutures. As a result, implants can now fit through small incisions, doctors can easily perform complex deformations, and sutures and other material aides can naturally biodegrade after a completed surgery.
Biodegradation vs. composting
There is no universal definition for biodegradation and there are various definitions of composting, which has led to much confusion between the terms. They are often lumped together; however, they do not have the same meaning. Biodegradation is the naturally-occurring breakdown of materials by microorganisms such as bacteria and fungi or other biological activity. Composting is a human-driven process in which biodegradation occurs under a specific set of circumstances. The predominant difference between the two is that one process is naturally-occurring and one is human-driven.
Biodegradable material is capable of decomposing without an oxygen source (anaerobically) into carbon dioxide, water, and biomass, but the timeline is not very specifically defined. Similarly, compostable material breaks down into carbon dioxide, water, and biomass; however, compostable material also breaks down into inorganic compounds. The process for composting is more specifically defined, as it is controlled by humans. Essentially, composting is an accelerated biodegradation process due to optimized circumstances. Additionally, the end product of composting not only returns to its previous state, but also adds beneficial microorganisms and nutrient-rich organic matter, known as humus, to the soil. This organic matter can be used in gardens and on farms to help grow healthier plants in the future. Composting more consistently occurs within a shorter time frame since it is a more defined process and is expedited by human intervention. Biodegradation can occur in different time frames under different circumstances, but is meant to occur naturally without human intervention.
Even within composting, there are different circumstances under which this can occur. The two main types of composting are at-home versus commercial. Both produce healthy soil to be reused – the main difference lies in what materials are able to go into the process. At-home composting is mostly used for food scraps and excess garden materials, such as weeds. Commercial composting is capable of breaking down more complex plant-based products, such as corn-based plastics and larger pieces of material, like tree branches. Commercial composting begins with a manual breakdown of the materials using a grinder or other machine to initiate the process. Because at-home composting usually occurs on a smaller scale and does not involve large machinery, these materials would not fully decompose in at-home composting. Furthermore, one study has compared and contrasted home and industrial composting, concluding that there are advantages and disadvantages to both.
The following studies provide examples in which composting has been defined as a subset of biodegradation in a scientific context. The first study, "Assessment of Biodegradability of Plastics Under Simulated Composting Conditions in a Laboratory Test Setting," clearly examines composting as a set of circumstances that falls under the category of degradation. Additionally, this next study looked at the biodegradation and composting effects of chemically and physically crosslinked polylactic acid. Notably discussing composting and biodegrading as two distinct terms. The third and final study reviews European standardization of biodegradable and compostable material in the packaging industry, again using the terms separately.
The distinction between these terms is crucial because waste management confusion leads to improper disposal of materials by people on a daily basis. Biodegradation technology has led to massive improvements in how we dispose of waste; there now exist trash, recycling, and compost bins in order to optimize the disposal process. However, if these waste streams are commonly and frequently confused, then the disposal process is not at all optimized. Biodegradable and compostable materials have been developed to ensure more of human waste is able to breakdown and return to its previous state, or in the case of composting even add nutrients to the ground. When a compostable product is thrown out as opposed to composted and sent to a landfill, these inventions and efforts are wasted. Therefore, it is important for citizens to understand the difference between these terms so that materials can be disposed of properly and efficiently.
Environmental and social effects
Plastic pollution from illegal dumping poses health risks to wildlife. Animals often mistake plastics for food, resulting in intestinal entanglement. Slow-degrading chemicals, like polychlorinated biphenyls (PCBs), nonylphenol (NP), and pesticides also found in plastics, can release into environments and subsequently also be ingested by wildlife.
These chemicals also play a role in human health, as consumption of tainted food (in processes called biomagnification and bioaccumulation) has been linked to issues such as cancers, neurological dysfunction, and hormonal changes. A well-known example of biomagnification impacting health in recent times is the increased exposure to dangerously high levels of mercury in fish, which can affect sex hormones in humans.
In efforts to remediate the damages done by slow-degrading plastics, detergents, metals, and other pollutants created by humans, economic costs have become a concern. Marine litter in particular is notably difficult to quantify and review. Researchers at the World Trade Institute estimate that cleanup initiatives' cost (specifically in ocean ecosystems) has hit close to thirteen billion dollars a year. The main concern stems from marine environments, with the biggest cleanup efforts centering around garbage patches in the ocean. The Great Pacific Garbage Patch, a garbage patch the size of Mexico, is located in the Pacific Ocean. It is estimated to be upwards of a million square miles in size. While the patch contains more obvious examples of litter (plastic bottles, cans, and bags), tiny microplastics are nearly impossible to clean up. National Geographic reports that even more non-biodegradable materials are finding their way into vulnerable environments – nearly thirty-eight million pieces a year.
Materials that have not degraded can also serve as shelter for invasive species, such as tube worms and barnacles. When the ecosystem changes in response to the invasive species, resident species, the natural balance of resources, genetic diversity, and species richness are altered. These factors may support local economies by way of hunting and aquaculture, which suffer in response to the change. Similarly, coastal communities that rely heavily on ecotourism lose revenue as pollution builds up, since their beaches or shores are no longer desirable to travelers. The World Trade Institute also notes that the communities that feel most of the effects of poor biodegradation are often in poorer countries without the means to pay for cleanup. In a positive feedback loop, these countries in turn have trouble controlling their own pollution sources.
Etymology of "biodegradable"
The first known use of biodegradable in a biological context was in 1959 when it was employed to describe the breakdown of material into innocuous components by microorganisms. Now biodegradable is commonly associated with environmentally friendly products that are part of the earth's innate cycles like the carbon cycle and capable of decomposing back into natural elements.
See also
Notes
References
Standards by ASTM International
D5210- Standard Test Method for Determining the Anaerobic Biodegradation of Plastic Materials in the Presence of Municipal Sewage Sludge
D5526- Standard Test Method for Determining Anaerobic Biodegradation of Plastic Materials Under Accelerated Landfill Conditions
D5338- Standard Test Method for Determining Aerobic Biodegradation of Plastic Materials Under Controlled Composting Conditions, Incorporating Thermophilic Temperatures
D5511- Standard Test Method for Determining Anaerobic Biodegradation of Plastic Materials Under High-Solids Anaerobic-Digestion Conditions
D5864- Standard Test Method for Determining Aerobic Aquatic Biodegradation of Lubricants or Their Components
D5988- Standard Test Method for Determining Aerobic Biodegradation of Plastic Materials in Soil
D6139- Standard Test Method for Determining the Aerobic Aquatic Biodegradation of Lubricants or Their Components Using the Gledhill Shake Flask
D6006- Standard Guide for Assessing Biodegradability of Hydraulic Fluids
D6340- Standard Test Methods for Determining Aerobic Biodegradation of Radiolabeled Plastic Materials in an Aqueous or Compost Environment
D6691- Standard Test Method for Determining Aerobic Biodegradation of Plastic Materials in the Marine Environment by a Defined Microbial Consortium or Natural Sea Water Inoculum
D6731- Standard Test Method for Determining the Aerobic, Aquatic Biodegradability of Lubricants or Lubricant Components in a Closed Respirometer
D6954- Standard Guide for Exposing and Testing Plastics that Degrade in the Environment by a Combination of Oxidation and Biodegradation
D7044- Standard Specification for Biodegradable Fire Resistant Hydraulic Fluids
D7373- Standard Test Method for Predicting Biodegradability of Lubricants Using a Bio-kinetic Model
D7475- Standard Test Method for Determining the Aerobic Degradation and Anaerobic Biodegradation of Plastic Materials under Accelerated Bioreactor Landfill Conditions
D7665- Standard Guide for Evaluation of Biodegradable Heat Transfer Fluids
External links
European Bioplastics Association
The Science of Biodegradable Plastics: The Reality Behind Biodegradable Plastic Packaging Material
Biodegradable Plastic Definition
Anaerobic digestion
Biodegradable waste management | Biodegradation | [
"Chemistry",
"Engineering"
] | 4,421 | [
"Biodegradable waste management",
"Biodegradation",
"Anaerobic digestion",
"Environmental engineering",
"Water technology"
] |
47,492 | https://en.wikipedia.org/wiki/Biomass%20%28ecology%29 | Biomass is the mass of living biological organisms in a given area or ecosystem at a given time. Biomass can refer to species biomass, which is the mass of one or more species, or to community biomass, which is the mass of all species in the community. It can include microorganisms, plants or animals. The mass can be expressed as the average mass per unit area, or as the total mass in the community.
How biomass is measured depends on why it is being measured. Sometimes, the biomass is regarded as the natural mass of organisms in situ, just as they are. For example, in a salmon fishery, the salmon biomass might be regarded as the total wet weight the salmon would have if they were taken out of the water. In other contexts, biomass can be measured in terms of the dried organic mass, so perhaps only 30% of the actual weight might count, the rest being water. For other purposes, only biological tissues count, and teeth, bones and shells are excluded. In some applications, biomass is measured as the mass of organically bound carbon (C) that is present.
In 2018, Bar-On et al. estimated the total live biomass on Earth at about 550 billion (5.5×10^11) tonnes C, most of it in plants. In 1998, Field et al. estimated the total annual net primary production of biomass at just over 100 billion tonnes C/yr. The total live biomass of bacteria was once thought to be about the same as plants, but recent studies suggest it is significantly less. The total DNA on Earth, whose base-pair count has been proposed as an approximation of global biodiversity, is estimated to weigh some 50 billion tonnes. Anthropogenic mass (human-made material) is expected to exceed all living biomass on Earth at around the year 2020.
Ecological pyramids
An ecological pyramid is a graphical representation that shows, for a given ecosystem, the relationship between biomass or biological productivity and trophic levels.
A biomass pyramid shows the amount of biomass at each trophic level.
A productivity pyramid shows the production or turn-over in biomass at each trophic level.
An ecological pyramid provides a snapshot in time of an ecological community.
The bottom of the pyramid represents the primary producers (autotrophs). The primary producers take energy from the environment in the form of sunlight or inorganic chemicals and use it to create energy-rich molecules such as carbohydrates. This mechanism is called primary production. The pyramid then proceeds through the various trophic levels to the apex predators at the top.
When energy is transferred from one trophic level to the next, typically only ten percent is used to build new biomass. The remaining ninety percent goes to metabolic processes or is dissipated as heat. This energy loss means that productivity pyramids are never inverted, and generally limits food chains to about six levels. However, in oceans, biomass pyramids can be wholly or partially inverted, with more biomass at higher levels.
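The arithmetic behind this limit is worth making concrete. The sketch below applies the ten-percent rule from the paragraph above; the 10,000 kcal starting value is an illustrative assumption, while the 10% efficiency and the six levels come from the text.

```python
# Minimal sketch of the ten-percent rule for a productivity pyramid.
# The 10,000 kcal of primary production is an illustrative assumption;
# the 10% transfer efficiency and six levels follow the text above.

TRANSFER_EFFICIENCY = 0.10  # fraction of energy passed up per level

def productivity_pyramid(primary_production: float, levels: int) -> list[float]:
    """Energy available at each trophic level, level 1 = producers."""
    energy = [primary_production]
    for _ in range(levels - 1):
        energy.append(energy[-1] * TRANSFER_EFFICIENCY)
    return energy

for level, e in enumerate(productivity_pyramid(10_000.0, 6), start=1):
    print(f"Trophic level {level}: {e:>10,.3f} kcal")
# Level 6 retains only 0.1 kcal of the original 10,000, illustrating
# why food chains rarely extend beyond about six levels.
```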
Terrestrial biomass
Terrestrial biomass generally decreases markedly at each higher trophic level (plants, herbivores, carnivores). Examples of terrestrial producers are grasses, trees and shrubs. These have a much higher biomass than the animals that consume them, such as deer, zebras and insects. The level with the least biomass are the highest predators in the food chain, such as foxes and eagles.
In a temperate grassland, grasses and other plants are the primary producers at the bottom of the pyramid. Then come the primary consumers, such as grasshoppers, voles and bison, followed by the secondary consumers, shrews, hawks and small cats. Finally the tertiary consumers, large cats and wolves. The biomass pyramid decreases markedly at each higher level.
Changes in plant species in the terrestrial ecosystem can result in changes in the biomass of soil decomposer communities. Biomass in C3 and C4 plant species can change in response to altered concentrations of CO2. C3 plant species have been observed to increase in biomass in response to increasing concentrations of CO2 of up to 900 ppm.
Ocean biomass
Ocean or marine biomass, in a reversal of terrestrial biomass, can increase at higher trophic levels. In the ocean, the food chain typically starts with phytoplankton, and follows the course:
Phytoplankton → zooplankton → predatory zooplankton → filter feeders → predatory fish
Phytoplankton are the main primary producers at the bottom of the marine food chain. Phytoplankton use photosynthesis to convert inorganic carbon into protoplasm. They are then consumed by zooplankton that range in size from a few micrometers in diameter in the case of protistan microzooplankton to macroscopic gelatinous and crustacean zooplankton.
Zooplankton comprise the second level in the food chain, and includes small crustaceans, such as copepods and krill, and the larva of fish, squid, lobsters and crabs.
In turn, small zooplankton are consumed by both larger predatory zooplankters, such as krill, and by forage fish, which are small, schooling, filter-feeding fish. This makes up the third level in the food chain.
A fourth trophic level can consist of predatory fish, marine mammals and seabirds that consume forage fish. Examples are swordfish, seals and gannets.
Apex predators, such as orcas, which can consume seals, and shortfin mako sharks, which can consume swordfish, make up a fifth trophic level. Baleen whales can consume zooplankton and krill directly, leading to a food chain with only three or four trophic levels.
Marine environments can have inverted biomass pyramids. In particular, the biomass of consumers (copepods, krill, shrimp, forage fish) is larger than the biomass of primary producers. This happens because the ocean's primary producers are tiny phytoplankton which are r-strategists that grow and reproduce rapidly, so a small mass can have a fast rate of primary production. In contrast, terrestrial primary producers, such as forests, are K-strategists that grow and reproduce slowly, so a much larger mass is needed to achieve the same rate of primary production.
Among the phytoplankton at the base of the marine food web are members from a phylum of bacteria called cyanobacteria. Marine cyanobacteria include the smallest known photosynthetic organisms. The smallest of all, Prochlorococcus, is just 0.5 to 0.8 micrometres across. In terms of individual numbers, Prochlorococcus is possibly the most plentiful species on Earth: a single millilitre of surface seawater can contain 100,000 cells or more. Worldwide, there are estimated to be several octillion (10^27) individuals. Prochlorococcus is ubiquitous between 40°N and 40°S and dominates in the oligotrophic (nutrient poor) regions of the oceans. The bacterium accounts for an estimated 20% of the oxygen in the Earth's atmosphere, and forms part of the base of the ocean food chain.
Bacterial biomass
Bacteria and archaea are both classified as prokaryotes, and their biomass is commonly estimated together. The global biomass of prokaryotes is estimated at 30 billion tonnes C, dominated by bacteria.
The estimates for the global biomass of prokaryotes had changed significantly over recent decades, as more data became available. A much-cited study from 1998 collected data on abundances (number of cells) of bacteria and archaea in different natural environments, and estimated their total biomass at 350 to 550 billion tonnes C. This vast amount is similar to the biomass of carbon in all plants. The vast majority of bacteria and archaea were estimated to be in sediments deep below the seafloor or in the deep terrestrial biosphere (in deep continental aquifers). However, updated measurements reported in a 2012 study reduced the calculated prokaryotic biomass in deep subseafloor sediments from the original ≈300 billion tonnes C to ≈4 billion tonnes C (range 1.5–22 billion tonnes). This update originates from much lower estimates of both the prokaryotic abundance and their average weight.
A census published in PNAS in May 2018 estimated global bacterial biomass at ≈70 billion tonnes C, of which ≈60 billion tonnes are in the terrestrial deep subsurface. It also estimated the global biomass of archaea at ≈7 billion tonnes C. A later study by the Deep Carbon Observatory published in 2018 reported a much larger dataset of measurements, and updated the total biomass estimate in the deep terrestrial biosphere. It used this new knowledge and previous estimates to update the global biomass of bacteria and archaea to 23–31 billion tonnes C. Roughly 70% of the global biomass was estimated to be found in the deep subsurface. The estimated number of prokaryotic cells globally was estimated to be 11–15 × 10^29. With this information, the authors of the May 2018 PNAS article revised their estimate for the global biomass of prokaryotes to ≈30 billion tonnes C, similar to the Deep Carbon Observatory estimate.
These estimates convert global abundance of prokaryotes into global biomass using average cellular biomass figures that are based on limited data. Recent estimates used an average cellular biomass of about 20–30 femtogram carbon (fgC) per cell in the subsurface and terrestrial habitats.
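These per-cell figures combine with the cell counts above in a straightforward unit conversion; the sketch below re-derives the biomass bracket using only the ranges quoted in the text.

```python
# Re-deriving global prokaryotic biomass from the ranges quoted above:
# 11-15 x 10^29 cells globally, at 20-30 femtograms of carbon per cell.
# Both ranges come straight from the text; only the unit conversion
# (1 tonne = 10^21 femtograms) is added here.

FG_PER_TONNE = 1e21

def biomass_tonnes_c(cells: float, fg_c_per_cell: float) -> float:
    """Global biomass in tonnes of carbon."""
    return cells * fg_c_per_cell / FG_PER_TONNE

low = biomass_tonnes_c(11e29, 20)    # ~22 billion tonnes C
high = biomass_tonnes_c(15e29, 30)   # ~45 billion tonnes C
print(f"{low / 1e9:.0f}-{high / 1e9:.0f} billion tonnes C")
# The 23-31 billion tonne estimate quoted above sits inside this bracket.
```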
Global biomass
The total global biomass has been estimated at 550 billion tonnes C. A breakdown of the global biomass is given by kingdom in the table below, based on a 2018 study by Bar-On et al.
Animals represent less than 0.5% of the total biomass on Earth, with about 2 billion tonnes C in total. Most animal biomass is found in the oceans, where arthropods, such as copepods, account for about 1 billion tonnes C and fish for another 0.7 billion tonnes C. Roughly half of the biomass of fish in the world are mesopelagic, such as lanternfish, spending most of the day in the deep, dark waters. Marine mammals such as whales and dolphins account for about 0.006 billion tonnes C.
Land animals account for about 500 million tonnes C, or about 20% of the biomass of animals on Earth. Terrestrial arthropods account for about 150 million tonnes C, most of which is found in the topsoil. Land mammals account for about 180 million tonnes C, most of which are humans (about 80 million tonnes C) and domesticated mammals (about 90 million tonnes C). Wild terrestrial mammals account for only about 3 million tonnes C, less than 2% of the total mammalian biomass on land.
Most of the global biomass is found on land, with only 5 to 10 billion tonnes C found in the oceans. On land, there is about 1,000 times more plant biomass (phytomass) than animal biomass (zoomass). About 18% of this plant biomass is eaten by the land animals. However, marine animals eat most of the marine autotrophs, and the biomass of marine animals is greater than that of marine autotrophs.
According to a 2020 study published in Nature, human-made materials, or anthropogenic mass, outweigh all living biomass on earth, with plastic alone exceeding the mass of all land and marine animals combined.
Global rate of production
Net primary production is the rate at which new biomass is generated, mainly due to photosynthesis. Global primary production can be estimated from satellite observations. Satellites scan the normalised difference vegetation index (NDVI) over terrestrial habitats, and scan sea-surface chlorophyll levels over oceans. This results in 56.4 billion tonnes C/yr (53.8%) for terrestrial primary production, and 48.5 billion tonnes C/yr for oceanic primary production. Thus, the total photoautotrophic primary production for the Earth is about 104.9 billion tonnes C/yr. This translates to about 426 gC/m2/yr for land production (excluding areas with permanent ice cover), and 140 gC/m2/yr for the oceans.
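As a consistency check, the per-area rates follow from dividing each production total by the corresponding surface area. The sketch below backs out the areas the quoted rates imply; the reference areas in the comments are standard geographic values, not figures from the text.

```python
# Back-of-envelope check on the per-area rates quoted above, using only
# numbers given in the text: totals of 56.4 and 48.5 billion tonnes C/yr
# and mean rates of 426 and 140 gC/m^2/yr.

G_PER_GT = 1e15  # grams per billion tonnes

def implied_area_m2(total_gt_c: float, rate_g_per_m2: float) -> float:
    """Surface area implied by a production total and a mean rate."""
    return total_gt_c * G_PER_GT / rate_g_per_m2

land = implied_area_m2(56.4, 426)    # ~1.3e14 m^2
ocean = implied_area_m2(48.5, 140)   # ~3.5e14 m^2
print(f"implied land area:  {land:.2e} m^2")
print(f"implied ocean area: {ocean:.2e} m^2")
# These match the Earth's ice-free land (~1.3e8 km^2) and global ocean
# (~3.6e8 km^2) areas to within rounding, so the figures are consistent.
```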
However, there is a much more significant difference in standing stocks—while accounting for almost half of total annual production, oceanic autotrophs account for only about 0.2% of the total biomass.
Terrestrial freshwater ecosystems generate about 1.5% of the global net primary production.
See also
Biomass partitioning
Slash-and-burn
Stubble burning
References
Further reading
External links
Biocubes: a visualization of biomass and technomass
The mass of all life on Earth is staggering — until you consider how much we've lost
Counting bacteria
Trophic levels
Biomass distributions for high trophic-level fishes in the North Atlantic, 1900–2000
Ecology terminology
Environmental terminology
Ecological metrics
Ecosystems
Fisheries science | Biomass (ecology) | [
"Mathematics",
"Biology"
] | 2,692 | [
"Ecology terminology",
"Symbiosis",
"Metrics",
"Ecological metrics",
"Quantity",
"Ecosystems"
] |
47,501 | https://en.wikipedia.org/wiki/Brightness%20temperature | Brightness temperature or radiance temperature is a measure of the intensity of electromagnetic energy coming from a source. In particular, it is the temperature at which a black body would have to be in order to duplicate the observed intensity of a grey body object at a frequency $\nu$.
This concept is used in radio astronomy, planetary science, materials science and climatology.
The brightness temperature provides "a more physically recognizable way to describe intensity".
When the electromagnetic radiation observed is thermal radiation emitted by an object simply by virtue of its temperature, then the actual temperature of the object will always be equal to or higher than the brightness temperature. Since the emissivity can be no greater than 1, the brightness temperature is a lower bound of the object's actual temperature.
For radiation emitted by a non-thermal source such as a pulsar, synchrotron, maser, or a laser, the brightness temperature may be far higher than the actual temperature of the source. In this case, the brightness temperature is simply a measure of the intensity of the radiation as it would be measured at the origin of that radiation.
In some applications, the brightness temperature of a surface is determined by an optical measurement, for example using a pyrometer, with the intention of determining the real temperature. As detailed below, the real temperature of a surface can in some cases be calculated by dividing the brightness temperature by the emissivity of the surface. Since the emissivity is a value between 0 and 1, the real temperature will be greater than or equal to the brightness temperature. At high frequencies (short wavelengths) and low temperatures, the conversion must proceed through Planck's law.
The brightness temperature is not a temperature as ordinarily understood. It characterizes radiation, and depending on the mechanism of radiation it can differ considerably from the physical temperature of the radiating body (though it is theoretically possible to construct a device that, heated by a source of radiation with some brightness temperature, reaches an actual temperature equal to that brightness temperature).
Nonthermal sources can have very high brightness temperatures. In pulsars the brightness temperature can reach $10^{30}$ K. For the radiation of a helium–neon laser with a power of 1 mW, a frequency spread Δf = 1 GHz, an output aperture of 1 mm, and a beam dispersion half-angle of 0.56 mrad, the brightness temperature would be on the order of $10^{10}$ K (see the sketch below).
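That figure can be reproduced from the stated parameters. Below is a minimal sketch in Python that inverts Planck's law (introduced in the next section) for the quoted power, bandwidth, aperture, and beam angle; the 633 nm He–Ne wavelength is an assumption, since the text does not state it.

```python
import math

# Brightness temperature of the He-Ne laser example above, from the
# stated parameters: P = 1 mW, bandwidth 1 GHz, 1 mm output aperture,
# 0.56 mrad beam half-angle. The 633 nm He-Ne wavelength is assumed.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

P = 1e-3            # emitted power, W
d_nu = 1e9          # frequency spread, Hz
radius = 0.5e-3     # m (1 mm aperture diameter)
theta = 0.56e-3     # beam dispersion half-angle, rad
nu = c / 633e-9     # optical frequency, Hz

area = math.pi * radius**2         # emitting area, m^2
omega = math.pi * theta**2         # beam solid angle, sr
I_nu = P / (area * omega * d_nu)   # spectral radiance, W m^-2 sr^-1 Hz^-1

# Invert Planck's law for the brightness temperature:
T_b = (h * nu / k) / math.log(1 + 2 * h * nu**3 / (I_nu * c**2))
print(f"T_b = {T_b:.2e} K")  # roughly 2e10 K
```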
For a black body, Planck's law gives:

$$I_\nu = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h\nu / kT} - 1}$$

where $I_\nu$ (the intensity or brightness) is the amount of energy emitted per unit surface area per unit time per unit solid angle and in the frequency range between $\nu$ and $\nu + d\nu$; $T$ is the temperature of the black body; $h$ is the Planck constant; $\nu$ is frequency; $c$ is the speed of light; and $k$ is the Boltzmann constant.
For a grey body the spectral radiance is a portion of the black body radiance, determined by the emissivity $\epsilon$:

$$I_\nu = \epsilon \, \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h\nu / kT} - 1}$$

That makes the reciprocal of the brightness temperature:

$$T_b^{-1} = \frac{k}{h\nu} \, \ln\!\left[ 1 + \frac{e^{h\nu / kT} - 1}{\epsilon} \right]$$
At low frequencies and high temperatures, when $h\nu \ll kT$, we can use the Rayleigh–Jeans law:

$$I_\nu = \frac{2 \nu^2 k T}{c^2}$$

so that the brightness temperature can be simply written as:

$$T_b = \epsilon T$$
In general, the brightness temperature is a function of $\nu$, and only in the case of blackbody radiation is it the same at all frequencies. The brightness temperature can be used to calculate the spectral index of a body, in the case of non-thermal radiation.
Calculating by frequency
The brightness temperature of a source with known spectral radiance $I_\nu$ can be expressed as:

$$T_b = \frac{h\nu}{k} \, \ln^{-1}\!\left( 1 + \frac{2 h \nu^3}{I_\nu c^2} \right)$$

When $h\nu \ll kT_b$ we can use the Rayleigh–Jeans law:

$$T_b = \frac{I_\nu c^2}{2 k \nu^2}$$

For narrowband radiation with very low relative spectral linewidth $\Delta\nu \ll \nu$ and known radiance $I$, we can calculate the brightness temperature as:

$$T_b = \frac{I c^2}{2 k \nu^2 \, \Delta\nu}$$
Calculating by wavelength
Spectral radiance of black-body radiation is expressed by wavelength as:

$$I_\lambda = \frac{2 h c^2}{\lambda^5} \, \frac{1}{e^{hc / \lambda k T} - 1}$$

So, the brightness temperature can be calculated as:

$$T_b = \frac{hc}{k \lambda} \, \ln^{-1}\!\left( 1 + \frac{2 h c^2}{I_\lambda \lambda^5} \right)$$

For long-wave radiation, when $hc / \lambda \ll kT$, the brightness temperature is:

$$T_b = \frac{I_\lambda \lambda^4}{2 k c}$$

For almost monochromatic radiation, the brightness temperature can be expressed by the radiance $I$ and the coherence length $L_c$:

$$T_b = \frac{\pi I \lambda^2 L_c}{4 k c \, \ln 2}$$
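A minimal sketch of the wavelength-domain inversion, in the spirit of the pyrometer use case mentioned earlier; the radiance, wavelength, and emissivity below are made-up illustrative inputs, not values from the text.

```python
import math

# Wavelength-domain brightness temperature and emissivity correction,
# as a pyrometer might apply them. The measured radiance, wavelength,
# and surface emissivity below are illustrative inputs only.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def t_brightness(I_lambda: float, lam: float) -> float:
    """Invert the per-wavelength Planck law for T_b."""
    return (h * c / (k * lam)) / math.log(1 + 2 * h * c**2 / (I_lambda * lam**5))

def t_real(I_lambda: float, lam: float, emissivity: float) -> float:
    """Real temperature of a grey body: undo the emissivity factor first."""
    return t_brightness(I_lambda / emissivity, lam)

lam = 650e-9   # pyrometer wavelength, m
I = 1.0e9      # measured spectral radiance, W m^-3 sr^-1
print(f"T_b    = {t_brightness(I, lam):.0f} K")   # ~1600 K
print(f"T_real = {t_real(I, lam, 0.7):.0f} K")    # real T exceeds T_b
```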
In oceanography
In oceanography, the microwave brightness temperature, as measured by satellites looking at the ocean surface, depends on salinity as well as on the temperature and roughness (e.g. from wind-driven waves) of the water.
References
Temperature
Radio astronomy
Planetary science | Brightness temperature | [
"Physics",
"Chemistry",
"Astronomy"
] | 849 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Radio astronomy",
"Thermodynamics",
"Planetary science",
"Wikipedia categories named after physical quantities",
"Astronomical sub-disciplines"
] |
47,502 | https://en.wikipedia.org/wiki/Calibration | In measurement technology and metrology, calibration is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy. Such a standard could be another measurement device of known accuracy, a device generating the quantity to be measured such as a voltage, a sound tone, or a physical artifact, such as a meter ruler.
The outcome of the comparison can result in one of the following:
no significant error being noted on the device under test
a significant error being noted but no adjustment made
an adjustment made to correct the error to an acceptable level
Strictly speaking, the term "calibration" means just the act of comparison and does not include any subsequent adjustment.
The calibration standard is normally traceable to a national or international standard held by a metrology body.
BIPM Definition
The formal definition of calibration by the International Bureau of Weights and Measures (BIPM) is the following: "Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication."
This definition states that the calibration process is purely a comparison, but introduces the concept of measurement uncertainty in relating the accuracies of the device under test and the standard.
Modern calibration processes
The increasing need for known accuracy and uncertainty and the need to have consistent and comparable standards internationally has led to the establishment of national laboratories. In many countries a National Metrology Institute (NMI) will exist which will maintain primary standards of measurement (the main SI units plus a number of derived units) which will be used to provide traceability to customer's instruments by calibration.
The NMI supports the metrological infrastructure in that country (and often others) by establishing an unbroken chain, from the top level of standards to an instrument used for measurement. Examples of National Metrology Institutes are NPL in the UK, NIST in the United States, PTB in Germany and many others. Since the Mutual Recognition Agreement was signed it is now straightforward to take traceability from any participating NMI and it is no longer necessary for a company to obtain traceability for measurements from the NMI of the country in which it is situated, such as the National Physical Laboratory in the UK.
Quality of calibration
To improve the quality of the calibration and have the results accepted by outside organizations it is desirable for the calibration and subsequent measurements to be "traceable" to the internationally defined measurement units. Establishing traceability is accomplished by a formal comparison to a standard which is directly or indirectly related to national standards (such as NIST in the USA), international standards, or certified reference materials. This may be done by national standards laboratories operated by the government or by private firms offering metrology services.
Quality management systems call for an effective metrology system which includes formal, periodic, and documented calibration of all measuring instruments. ISO 9000 and ISO 17025 standards require that these traceable actions are to a high level and set out how they can be quantified.
To communicate the quality of a calibration the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty analysis.
Sometimes a DFS (Departure From Spec) is required to operate machinery in a degraded state. Whenever this happens, it must be documented in writing and authorized by a manager with the technical assistance of a calibration technician.
Measuring devices and instruments are categorized according to the physical quantities they are designed to measure. These vary internationally, e.g., NIST 150-2G in the U.S. and NABL-141 in India. Together, these standards cover instruments that measure various physical quantities such as electromagnetic radiation (RF probes), sound (sound level meter or noise dosimeter), time and frequency (intervalometer), ionizing radiation (Geiger counter), light (light meter), mechanical quantities (limit switch, pressure gauge, pressure switch), and thermodynamic or thermal properties (thermometer, temperature controller). The standard instrument for each test device varies accordingly, e.g., a dead weight tester for pressure gauge calibration and a dry block temperature tester for temperature gauge calibration.
Instrument calibration prompts
Calibration may be required for the following reasons:
a new instrument
after an instrument has been repaired or modified
moving from one location to another location
when a specified time period has elapsed
when a specified usage (operating hours) has elapsed
before and/or after a critical measurement
after an event, for example
after an instrument has been exposed to a shock, vibration, or physical damage, which might potentially have compromised the integrity of its calibration
sudden changes in weather
whenever observations appear questionable or instrument indications do not match the output of surrogate instruments
as specified by a requirement, e.g., customer specification, instrument manufacturer recommendation.
In general use, calibration is often regarded as including the process of adjusting the output or indication on a measurement instrument to agree with value of the applied standard, within a specified accuracy. For example, a thermometer could be calibrated so the error of indication or the correction is determined, and adjusted (e.g. via calibration constants) so that it shows the true temperature in Celsius at specific points on the scale. This is the perception of the instrument's end-user. However, very few instruments can be adjusted to exactly match the standards they are compared to. For the vast majority of calibrations, the calibration process is actually the comparison of an unknown to a known and recording the results.
Basic calibration process
Purpose and scope
The calibration process begins with the design of the measuring instrument that needs to be calibrated. The design has to be able to "hold a calibration" through its calibration interval. In other words, the design has to be capable of measurements that are "within engineering tolerance" when used within the stated environmental conditions over some reasonable period of time. Having a design with these characteristics increases the likelihood of the actual measuring instruments performing as expected.
Basically, the purpose of calibration is to maintain the quality of measurement and to ensure the proper working of a particular instrument.
Frequency
The exact mechanism for assigning tolerance values varies by country and by industry. The manufacturer of the measuring equipment generally assigns the measurement tolerance, suggests a calibration interval (CI), and specifies the environmental range of use and storage. The using organization generally assigns the actual calibration interval, which depends on the specific measuring equipment's likely level of usage. The assignment of calibration intervals can be a formal process based on the results of previous calibrations. The standards themselves are not clear on recommended CI values:
ISO 17025
"A calibration certificate (or calibration label) shall not contain any recommendation on the calibration interval except where this has been agreed with the customer. This requirement may be superseded by legal regulations.”
ANSI/NCSL Z540
"...shall be calibrated or verified at periodic intervals established and maintained to assure acceptable reliability..."
ISO-9001
"Where necessary to ensure valid results, measuring equipment shall...be calibrated or verified at specified intervals, or prior to use...”
MIL-STD-45662A
"... shall be calibrated at periodic intervals established and maintained to assure acceptable accuracy and reliability...Intervals shall be shortened or may be lengthened, by the contractor, when the results of previous calibrations indicate that such action is appropriate to maintain acceptable reliability."
Standards required and accuracy
The next step is defining the calibration process. The selection of a standard or standards is the most visible part of the calibration process. Ideally, the standard has less than 1/4 of the measurement uncertainty of the device being calibrated. When this goal is met, the accumulated measurement uncertainty of all of the standards involved is considered to be insignificant when the final measurement is also made with the 4:1 ratio. This ratio was probably first formalized in Handbook 52 that accompanied MIL-STD-45662A, an early US Department of Defense metrology program specification. It was 10:1 from its inception in the 1950s until the 1970s, when advancing technology made 10:1 impossible for most electronic measurements.
Maintaining a 4:1 accuracy ratio with modern equipment is difficult. The test equipment being calibrated can be just as accurate as the working standard. If the accuracy ratio is less than 4:1, then the calibration tolerance can be reduced to compensate. When 1:1 is reached, only an exact match between the standard and the device being calibrated is a completely correct calibration. Another common method for dealing with this capability mismatch is to reduce the accuracy of the device being calibrated.
For example, a gauge with 3% manufacturer-stated accuracy can be changed to 4% so that a 1% accuracy standard can be used at 4:1. If the gauge is used in an application requiring 16% accuracy, having the gauge accuracy reduced to 4% will not affect the accuracy of the final measurements. This is called a limited calibration. But if the final measurement requires 10% accuracy, then the 3% gauge never can be better than 3.3:1. Then perhaps adjusting the calibration tolerance for the gauge would be a better solution. If the calibration is performed at 100 units, the 1% standard would actually be anywhere between 99 and 101 units. The acceptable values of calibrations where the test equipment is at the 4:1 ratio would be 96 to 104 units, inclusive. Changing the acceptable range to 97 to 103 units would remove the potential contribution of all of the standards and preserve a 3.3:1 ratio. Continuing, a further change to the acceptable range to 98 to 102 restores more than a 4:1 final ratio.
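The guard-banding logic in this example can be stated compactly in code. This is a minimal sketch of the idea under the same numbers, not a prescribed metrology procedure; real practice would follow a documented uncertainty analysis.

```python
# Guard-banding sketch for the example above: shrink the acceptance
# range so the standard's own tolerance cannot pass a bad gauge.
# Values follow the text's example (which the text itself notes can
# be challenged); illustrative only.

def acceptance_limits(nominal: float, gauge_tol_pct: float,
                      guard_units: float = 0.0) -> tuple[float, float]:
    """Acceptance range for a gauge, optionally tightened by a guard band."""
    half_width = nominal * gauge_tol_pct / 100 - guard_units
    return nominal - half_width, nominal + half_width

nominal = 100.0                              # calibration point, units
print(acceptance_limits(nominal, 4.0))       # (96.0, 104.0): plain 4% limits
print(acceptance_limits(nominal, 4.0, 1.0))  # (97.0, 103.0): 1-unit guard band
print(acceptance_limits(nominal, 4.0, 2.0))  # (98.0, 102.0): tighter still
```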
This is a simplified example. The mathematics of the example can be challenged. It is important that whatever thinking guided this process in an actual calibration be recorded and accessible. Informality contributes to tolerance stacks and other difficult to diagnose post calibration problems.
Also in the example above, ideally the calibration value of 100 units would be the best point in the gauge's range to perform a single-point calibration. It may be the manufacturer's recommendation or it may be the way similar devices are already being calibrated. Multiple point calibrations are also used. Depending on the device, a zero unit state, the absence of the phenomenon being measured, may also be a calibration point. Or zero may be resettable by the user-there are several variations possible. Again, the points to use during calibration should be recorded.
There may be specific connection techniques between the standard and the device being calibrated that may influence the calibration. For example, in electronic calibrations involving analog phenomena, the impedance of the cable connections can directly influence the result.
Manual and automatic calibrations
Calibration methods for modern devices can be manual or automatic.
As an example, a manual process may be used for calibration of a pressure gauge. The procedure requires multiple steps, to connect the gauge under test to a reference master gauge and an adjustable pressure source, to apply fluid pressure to both reference and test gauges at definite points over the span of the gauge, and to compare the readings of the two. The gauge under test may be adjusted to ensure its zero point and response to pressure comply as closely as possible to the intended accuracy. Each step of the process requires manual record keeping.
An automatic pressure calibrator is a device that combines an electronic control unit, a pressure intensifier used to compress a gas such as nitrogen, a pressure transducer used to detect desired levels in a hydraulic accumulator, and accessories such as liquid traps and gauge fittings. An automatic system may also include data collection facilities to automate the gathering of data for record keeping.
Process description and documentation
All of the information above is collected in a calibration procedure, which is a specific test method. These procedures capture all of the steps needed to perform a successful calibration. The manufacturer may provide one or the organization may prepare one that also captures all of the organization's other requirements. There are clearinghouses for calibration procedures such as the Government-Industry Data Exchange Program (GIDEP) in the United States.
This exact process is repeated for each of the standards used until transfer standards, certified reference materials and/or natural physical constants, the measurement standards with the least uncertainty in the laboratory, are reached. This establishes the traceability of the calibration.
See Metrology for other factors that are considered during calibration process development.
After all of this, individual instruments of the specific type discussed above can finally be calibrated. The process generally begins with a basic damage check. Some organizations such as nuclear power plants collect "as-found" calibration data before any routine maintenance is performed. After routine maintenance and deficiencies detected during calibration are addressed, an "as-left" calibration is performed.
More commonly, a calibration technician is entrusted with the entire process and signs the calibration certificate, which documents the completion of a successful calibration.
The basic process outlined above is a difficult and expensive challenge. The cost for ordinary equipment support is generally about 10% of the original purchase price on a yearly basis, as a commonly accepted rule-of-thumb. Exotic devices such as scanning electron microscopes, gas chromatograph systems and laser interferometer devices can be even more costly to maintain.
The 'single measurement' device used in the basic calibration process description above does exist. But, depending on the organization, the majority of the devices that need calibration can have several ranges and many functionalities in a single instrument. A good example is a common modern oscilloscope. There easily could be 200,000 combinations of settings to completely calibrate and limitations on how much of an all-inclusive calibration can be automated.
To prevent unauthorized access to an instrument, tamper-proof seals are usually applied after calibration. The picture of the oscilloscope rack shows these seals; intact seals prove that the instrument's adjusting elements have not been tampered with since it was last calibrated, since breaking them would indicate possible unauthorized access. There are also labels showing the date of the last calibration and, as the calibration interval dictates, when the next one is needed. Some organizations also assign a unique identification to each instrument to standardize record keeping and to keep track of accessories that are integral to a specific calibration condition.
When the instruments being calibrated are integrated with computers, the integrated computer programs and any calibration corrections are also under control.
Historical development
Origins
The words "calibrate" and "calibration" entered the English language as recently as the American Civil War, in descriptions of artillery, thought to be derived from a measurement of the calibre of a gun.
Some of the earliest known systems of measurement and calibration seem to have been created between the ancient civilizations of Egypt, Mesopotamia and the Indus Valley, with excavations revealing the use of angular gradations for construction. The term "calibration" was likely first associated with the precise division of linear distance and angles using a dividing engine and the measurement of gravitational mass using a weighing scale. These two forms of measurement alone and their direct derivatives supported nearly all commerce and technology development from the earliest civilizations until about AD 1800.
Calibration of weights and distances
Early measurement devices were direct, i.e. they had the same units as the quantity being measured. Examples include length using a yardstick and mass using a weighing scale. At the beginning of the twelfth century, during the reign of Henry I (1100–1135), it was decreed that a yard be "the distance from the tip of the King's nose to the end of his outstretched thumb." However, it was not until the reign of Richard I (1197) that documented evidence appears.
Assize of Measures
"Throughout the realm there shall be the same yard of the same size and it should be of iron."
Other standardization attempts followed, such as the Magna Carta (1225) for liquid measures, until the Mètre des Archives from France and the establishment of the Metric system.
The early calibration of pressure instruments
One of the earliest pressure measurement devices was the mercury barometer, credited to Torricelli (1643), which read atmospheric pressure as the height of a column of mercury. Soon after, water-filled manometers were designed. All these would have linear calibrations based on gravimetric principles, where the difference in levels was proportional to pressure. The normal units of measure would be the convenient inches of mercury or water.
In the direct reading hydrostatic manometer design on the right, applied pressure Pa pushes the liquid down the right side of the manometer U-tube, while a length scale next to the tube measures the difference of levels. The resulting height difference "H" is a direct measurement of the pressure or vacuum with respect to atmospheric pressure. In the absence of differential pressure both levels would be equal, and this would be used as the zero point.
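The manometer's calibration is linear because the pressure follows the hydrostatic relation P = ρgH. A minimal sketch, with standard fluid densities and a made-up example height:

```python
# Hydrostatic manometer reading: the level difference H maps to
# pressure via P = rho * g * H. Densities are standard reference
# values; the example height is made up for illustration.

G = 9.80665           # standard gravity, m/s^2
DENSITY = {           # kg/m^3 at roughly room temperature
    "water": 1000.0,
    "mercury": 13595.1,
}

def manometer_pressure(height_m: float, fluid: str = "water") -> float:
    """Differential pressure (Pa) for a level difference of height_m."""
    return DENSITY[fluid] * G * height_m

h = 0.254  # 10 inches of level difference, in metres
print(f"{h} m water:   {manometer_pressure(h, 'water'):.0f} Pa")
print(f"{h} m mercury: {manometer_pressure(h, 'mercury'):.0f} Pa")
```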
The Industrial Revolution saw the adoption of "indirect" pressure measuring devices, which were more practical than the manometer.
An example is in high pressure (up to 50 psi) steam engines, where mercury was used to reduce the scale length to about 60 inches, but such a manometer was expensive and prone to damage. This stimulated the development of indirect reading instruments, of which the Bourdon tube invented by Eugène Bourdon is a notable example.
In the front and back views of a Bourdon gauge on the right, applied pressure at the bottom fitting reduces the curl on the flattened pipe proportionally to pressure. This moves the free end of the tube which is linked to the pointer. The instrument would be calibrated against a manometer, which would be the calibration standard. For measurement of indirect quantities of pressure per unit area, the calibration uncertainty would be dependent on the density of the manometer fluid, and the means of measuring the height difference. From this other units such as pounds per square inch could be inferred and marked on the scale.
See also
Calibration curve
Calibrated geometry
Calibration (statistics)
Color calibration – used to calibrate a computer monitor or display.
Deadweight tester
EURAMET Association of European NMIs
Measurement Microphone Calibration
Measurement uncertainty
Musical tuning – tuning, in music, means calibrating musical instruments into playing the right pitch.
Precision measurement equipment laboratory
Scale test car – a device used to calibrate weighing scales that weigh railroad cars.
Systems of measurement
References
Sources
Crouch, Stanley & Skoog, Douglas A. (2007). Principles of Instrumental Analysis. Pacific Grove: Brooks Cole. .
Accuracy and precision
Standards
Measurement
Metrology | Calibration | [
"Physics",
"Mathematics"
] | 3,980 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
47,503 | https://en.wikipedia.org/wiki/Carbon%20cycle | The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many rocks such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks.
To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast cycle is also referred to as the biological carbon cycle. Fast cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
Humans have disturbed the carbon cycle for many centuries. They have done so by modifying land use and by mining and burning carbon from ancient organic remains (coal, petroleum and gas). Carbon dioxide in the atmosphere has increased nearly 52% over pre-industrial levels by 2020, resulting in global warming. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. Carbon dioxide is critical for photosynthesis.
Main compartments of the Carbon Cycle
The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The global carbon cycle is now usually divided into the following major reservoirs of carbon (also called carbon pools) interconnected by pathways of exchange:
Atmosphere
Terrestrial biosphere
Ocean, including dissolved inorganic carbon and living and non-living marine biota
Sediments, including fossil fuels, freshwater systems, and non-living organic material.
Earth's interior (mantle and crust). These carbon stores interact with the other components through geological processes.
The carbon exchanges between reservoirs occur as the result of various chemical, physical, geological, and biological processes. The ocean contains the largest active pool of carbon near the surface of the Earth.
The natural flows of carbon between the atmosphere, ocean, terrestrial ecosystems, and sediments are fairly balanced; so carbon levels would be roughly stable without human influence.
Atmosphere
Carbon in the Earth's atmosphere exists in two main forms: carbon dioxide and methane. Both of these gases absorb and retain heat in the atmosphere and are partially responsible for the greenhouse effect. Methane produces a larger greenhouse effect per volume as compared to carbon dioxide, but it exists in much lower concentrations and is more short-lived than carbon dioxide. Thus, carbon dioxide contributes more to the global greenhouse effect than methane.
Carbon dioxide is removed from the atmosphere primarily through photosynthesis and enters the terrestrial and oceanic biospheres. Carbon dioxide also dissolves directly from the atmosphere into bodies of water (ocean, lakes, etc.), as well as dissolving in precipitation as raindrops fall through the atmosphere. When dissolved in water, carbon dioxide reacts with water molecules and forms carbonic acid, which contributes to ocean acidity. It can then be absorbed by rocks through weathering. It also can acidify other surfaces it touches or be washed into the ocean.
Human activities over the past two centuries have increased the amount of carbon in the atmosphere by nearly 50% as of year 2020, mainly in the form of carbon dioxide, both by modifying ecosystems' ability to extract carbon dioxide from the atmosphere and by emitting it directly, e.g., by burning fossil fuels and manufacturing concrete.
In the far future (2 to 3 billion years), the rate at which carbon dioxide is absorbed into the soil via the carbonate–silicate cycle will likely increase due to expected changes in the Sun as it ages. The expected increase in the Sun's luminosity will likely speed up the rate of surface weathering. This will eventually cause most of the carbon dioxide in the atmosphere to be locked up in the Earth's crust as carbonate. Once the concentration of carbon dioxide in the atmosphere falls below approximately 50 parts per million (tolerances vary among species), C3 photosynthesis will no longer be possible. This has been predicted to occur 600 million years from the present, though models vary.
Once the oceans on the Earth evaporate in about 1.1 billion years from now, plate tectonics will very likely stop due to the lack of water to lubricate them. The lack of volcanoes pumping out carbon dioxide will cause the carbon cycle to end between 1 billion and 2 billion years into the future.
Terrestrial biosphere
The terrestrial biosphere includes the organic carbon in all land-living organisms, both alive and dead, as well as carbon stored in soils. About 500 gigatons of carbon are stored above ground in plants and other living organisms, while soil holds approximately 1,500 gigatons of carbon. Most carbon in the terrestrial biosphere is organic carbon, while about a third of soil carbon is stored in inorganic forms, such as calcium carbonate. Organic carbon is a major component of all organisms living on Earth. Autotrophs extract it from the air in the form of carbon dioxide, converting it to organic carbon, while heterotrophs receive carbon by consuming other organisms.
Because carbon uptake in the terrestrial biosphere is dependent on biotic factors, it follows a diurnal and seasonal cycle. In CO2 measurements, this feature is apparent in the Keeling curve. It is strongest in the northern hemisphere because this hemisphere has more land mass than the southern hemisphere and thus more room for ecosystems to absorb and emit carbon.
Carbon leaves the terrestrial biosphere in several ways and on different time scales. The combustion or respiration of organic carbon releases it rapidly into the atmosphere. It can also be exported into the ocean through rivers or remain sequestered in soils in the form of inert carbon. Carbon stored in soil can remain there for up to thousands of years before being washed into rivers by erosion or released into the atmosphere through soil respiration. Between 1989 and 2008 soil respiration increased by about 0.1% per year. In 2008, the global total of CO2 released by soil respiration was roughly 98 billion tonnes, about 3 times more carbon than humans are now putting into the atmosphere each year by burning fossil fuel (this does not represent a net transfer of carbon from soil to atmosphere, as the respiration is largely offset by inputs to soil carbon). There are a few plausible explanations for this trend, but the most likely explanation is that increasing temperatures have increased rates of decomposition of soil organic matter, which has increased the flow of CO2. The length of carbon sequestering in soil is dependent on local climatic conditions and thus changes in the course of climate change.
Ocean
The ocean can be conceptually divided into a surface layer within which water makes frequent (daily to annual) contact with the atmosphere, and a deep layer below the typical mixed layer depth of a few hundred meters or less, within which the time between consecutive contacts may be centuries. The dissolved inorganic carbon (DIC) in the surface layer is exchanged rapidly with the atmosphere, maintaining equilibrium. Partly because its concentration of DIC is about 15% higher but mainly due to its larger volume, the deep ocean contains far more carbon—it is the largest pool of actively cycled carbon in the world, containing 50 times more than the atmosphere—but the timescale to reach equilibrium with the atmosphere is hundreds of years: the exchange of carbon between the two layers, driven by thermohaline circulation, is slow.
Carbon enters the ocean mainly through the dissolution of atmospheric carbon dioxide, a small fraction of which is converted into carbonate. It can also enter the ocean through rivers as dissolved organic carbon. It is converted by organisms into organic carbon through photosynthesis and can either be exchanged throughout the food chain or precipitated into the oceans' deeper, more carbon-rich layers as dead soft tissue or in shells as calcium carbonate. It circulates in this layer for long periods of time before either being deposited as sediment or, eventually, returned to the surface waters through thermohaline circulation.
Oceans are basic (with a current pH value of 8.1 to 8.2). The increase in atmospheric CO2 shifts the pH of the ocean towards neutral in a process called ocean acidification. Oceanic absorption of CO2 is one of the most important forms of carbon sequestering. The projected rate of pH reduction could slow the biological precipitation of calcium carbonates, thus decreasing the ocean's capacity to absorb CO2.
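Because pH is logarithmic, a seemingly small shift implies a large relative change in acidity; the sketch below quantifies a 0.1-unit drop across the range quoted above (the specific drop is illustrative).

```python
# pH is logarithmic, so a small pH drop is a large relative change in
# hydrogen-ion concentration. The 0.1-unit drop used here (8.2 -> 8.1,
# the range quoted above) is illustrative.

def h_ion(pH: float) -> float:
    """Hydrogen-ion concentration in mol/L."""
    return 10.0 ** (-pH)

increase = h_ion(8.1) / h_ion(8.2) - 1.0
print(f"A drop from pH 8.2 to 8.1 means {increase:.0%} more H+ ions")
# -> about 26% more hydrogen ions for a 0.1-unit pH decrease
```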
Geosphere
The geologic component of the carbon cycle operates slowly in comparison to the other parts of the global carbon cycle. It is one of the most important determinants of the amount of carbon in the atmosphere, and thus of global temperatures.
Most of the Earth's carbon is stored inertly in the Earth's lithosphere. Much of the carbon stored in the Earth's mantle was stored there when the Earth formed. Some of it was deposited in the form of organic carbon from the biosphere. Of the carbon stored in the geosphere, about 80% is limestone and its derivatives, which form from the sedimentation of calcium carbonate stored in the shells of marine organisms. The remaining 20% is stored as kerogens formed through the sedimentation and burial of terrestrial organisms under high heat and pressure. Organic carbon stored in the geosphere can remain there for millions of years.
Carbon can leave the geosphere in several ways. Carbon dioxide is released during the metamorphism of carbonate rocks when they are subducted into the Earth's mantle. This carbon dioxide can be released into the atmosphere and ocean through volcanoes and hotspots. It can also be removed by humans through the direct extraction of kerogens in the form of fossil fuels. After extraction, fossil fuels are burned to release energy and emit the carbon they store into the atmosphere.
Types of dynamic
There is a fast and a slow carbon cycle. The fast cycle operates in the biosphere and the slow cycle operates in rocks. The fast or biological cycle can complete within years, moving carbon from atmosphere to biosphere, then back to the atmosphere. The slow or geological cycle may extend deep into the mantle and can take millions of years to complete, moving carbon through the Earth's crust between rocks, soil, ocean and atmosphere.
The fast carbon cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere (see diagram at start of article). It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change.
The slow (or deep) carbon cycle involves medium to long-term geochemical processes belonging to the rock cycle (see diagram on the right). The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels.
Processes within fast carbon cycle
Terrestrial carbon in the water cycle
The movement of terrestrial carbon in the water cycle is shown in the diagram on the right and explained below:
Atmospheric particles act as cloud condensation nuclei, promoting cloud formation.
Raindrops absorb organic and inorganic carbon through particle scavenging and adsorption of organic vapors while falling toward Earth.
Burning and volcanic eruptions produce highly condensed polycyclic aromatic molecules (i.e. black carbon) that are returned to the atmosphere along with greenhouse gases such as CO2.
Terrestrial plants fix atmospheric CO2 through photosynthesis, returning a fraction back to the atmosphere through respiration. Lignin and celluloses represent as much as 80% of the organic carbon in forests and 60% in pastures.
Litterfall and root organic carbon mix with sedimentary material to form organic soils where plant-derived and petrogenic organic carbon is both stored and transformed by microbial and fungal activity.
Water absorbs plant and settled aerosol-derived dissolved organic carbon (DOC) and dissolved inorganic carbon (DIC) as it passes over forest canopies (i.e. throughfall) and along plant trunks/stems (i.e. stemflow). Biogeochemical transformations take place as water soaks into soil solution and groundwater reservoirs and overland flow occurs when soils are completely saturated, or rainfall occurs more rapidly than saturation into soils.
Organic carbon derived from the terrestrial biosphere and in situ primary production is decomposed by microbial communities in rivers and streams along with physical decomposition (i.e. photo-oxidation), resulting in a flux of CO2 from rivers to the atmosphere that are the same order of magnitude as the amount of carbon sequestered annually by the terrestrial biosphere. Terrestrially-derived macromolecules such as lignin and black carbon are decomposed into smaller components and monomers, ultimately being converted to CO2, metabolic intermediates, or biomass.
Lakes, reservoirs, and floodplains typically store large amounts of organic carbon and sediments, but also experience net heterotrophy in the water column, resulting in a net flux of CO2 to the atmosphere that is roughly one order of magnitude less than rivers. Methane production is also typically high in the anoxic sediments of floodplains, lakes, and reservoirs.
Primary production is typically enhanced in river plumes due to the export of fluvial nutrients. Nevertheless, estuarine waters are a source of CO2 to the atmosphere, globally.
Coastal marshes both store and export blue carbon. Marshes and wetlands are suggested to have an equivalent flux of CO2 to the atmosphere as rivers, globally.
Continental shelves and the open ocean typically absorb CO2 from the atmosphere.
The marine biological pump sequesters a small but significant fraction of the absorbed CO2 as organic carbon in marine sediments (see below).
Terrestrial runoff to the ocean
Terrestrial and marine ecosystems are chiefly connected through riverine transport, which acts as the main channel through which erosive terrestrially derived substances enter into oceanic systems. Material and energy exchanges between the terrestrial biosphere and the lithosphere as well as organic carbon fixation and oxidation processes together regulate ecosystem carbon and dioxygen (O2) pools.
Riverine transport, being the main connective channel of these pools, will act to transport net primary productivity (primarily in the form of dissolved organic carbon (DOC) and particulate organic carbon (POC)) from terrestrial to oceanic systems. During transport, part of the DOC will rapidly return to the atmosphere through redox reactions, causing "carbon degassing" to occur between land-atmosphere storage layers. The remaining DOC and dissolved inorganic carbon (DIC) are also exported to the ocean. In 2015, inorganic and organic carbon export fluxes from global rivers were assessed as 0.50–0.70 Pg C y−1 and 0.15–0.35 Pg C y−1 respectively. On the other hand, POC can remain buried in sediment over an extensive period, and the annual global terrestrial to oceanic POC flux has been estimated at 0.20 (+0.13, −0.07) Pg C y−1.
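Summing the flux ranges quoted above gives a rough total for riverine carbon export to the ocean. This minimal sketch assumes the three estimates are simply additive, which may double-count any overlap between the organic-carbon and POC figures.

```python
# Total riverine carbon export, summing the quoted ranges (Pg C per year).
dic = (0.50, 0.70)                 # dissolved inorganic carbon
org = (0.15, 0.35)                 # organic carbon (largely DOC)
poc = (0.20 - 0.07, 0.20 + 0.13)   # particulate organic carbon, asymmetric range

low = dic[0] + org[0] + poc[0]
high = dic[1] + org[1] + poc[1]
print(f"Total riverine export: ~{low:.2f}-{high:.2f} Pg C/yr")  # ~0.78-1.38
```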
Biological pump in the ocean
The ocean biological pump is the ocean's biologically driven sequestration of carbon from the atmosphere and land runoff to the deep ocean interior and seafloor sediments. The biological pump is not so much the result of a single process, but rather the sum of a number of processes each of which can influence biological pumping. The pump transfers about 11 billion tonnes of carbon every year into the ocean's interior. An ocean without the biological pump would result in atmospheric CO2 levels about 400 ppm higher than the present day.
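To put the "about 400 ppm higher" counterfactual in mass terms, one can use the widely cited conversion of roughly 2.13 Gt of carbon per ppm of atmospheric CO2; this factor is a standard external value, not given in this article.

```python
# Converting the biological pump's ~400 ppm counterfactual into a carbon mass.
ppm_difference = 400    # ppm of CO2 attributed to the pump (from the text above)
gtc_per_ppm = 2.13      # Gt C per ppm of atmospheric CO2 (assumed standard value)

carbon_kept_out = ppm_difference * gtc_per_ppm
print(f"~{carbon_kept_out:.0f} Gt C kept out of the atmosphere")  # ~850 Gt C
```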
Most carbon incorporated in organic and inorganic biological matter is formed at the sea surface, where it can then start sinking to the ocean floor. The deep ocean gets most of its nutrients from the upper water column when they sink down in the form of marine snow. This is made up of dead or dying animals and microbes, fecal matter, sand and other inorganic material.
The biological pump is responsible for transforming dissolved inorganic carbon (DIC) into organic biomass and pumping it in particulate or dissolved form into the deep ocean. Inorganic nutrients and carbon dioxide are fixed during photosynthesis by phytoplankton, which both release dissolved organic matter (DOM) and are consumed by herbivorous zooplankton. Larger zooplankton, such as copepods, egest fecal pellets which can be reingested, and sink or collect with other organic detritus into larger, more rapidly sinking aggregates. DOM is partially consumed by bacteria and respired; the remaining refractory DOM is advected and mixed into the deep sea. DOM and aggregates exported into the deep water are consumed and respired, thus returning organic carbon into the enormous deep ocean reservoir of DIC.
A single phytoplankton cell has a sinking rate around one metre per day. Given that the average depth of the ocean is about four kilometres, it can take over ten years for these cells to reach the ocean floor. However, through processes such as coagulation and expulsion in predator fecal pellets, these cells form aggregates. These aggregates have sinking rates orders of magnitude greater than individual cells and complete their journey to the deep in a matter of days.
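The transit-time arithmetic above can be made explicit in a few lines. The aggregate sinking rate of 100 m per day used here is an illustrative assumption standing in for "orders of magnitude greater"; it is not a value from this article.

```python
# Transit time from the surface to the seafloor for single cells vs aggregates.
ocean_depth_m = 4000      # average ocean depth, ~4 km (from the text above)
cell_rate = 1             # m/day, single phytoplankton cell
aggregate_rate = 100      # m/day, assumed illustrative aggregate rate

print(f"Single cell: {ocean_depth_m / cell_rate / 365:.1f} years")  # ~11 years
print(f"Aggregate:   {ocean_depth_m / aggregate_rate:.0f} days")    # ~40 days
```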
About 1% of the particles leaving the surface ocean reach the seabed and are consumed, respired, or buried in the sediments. The net effect of these processes is to remove carbon in organic form from the surface and return it to DIC at greater depths, maintaining a surface-to-deep ocean gradient of DIC. Thermohaline circulation returns deep-ocean DIC to the atmosphere on millennial timescales. The carbon buried in the sediments can be subducted into the earth's mantle and stored for millions of years as part of the slow carbon cycle (see next section).
Viruses as regulators
Viruses act as "regulators" of the fast carbon cycle because they impact the material cycles and energy flows of food webs and the microbial loop. The average contribution of viruses to the Earth ecosystem carbon cycle is 8.6%; their contribution to marine ecosystems (1.4%) is smaller than their contributions to terrestrial (6.7%) and freshwater (17.8%) ecosystems. Over the past 2,000 years, anthropogenic activities and climate change have gradually altered the regulatory role of viruses in ecosystem carbon cycling processes. This has been particularly conspicuous over the past 200 years due to rapid industrialization and the attendant population growth.
Processes within slow carbon cycle
Slow or deep carbon cycling is an important process, though it is not as well-understood as the relatively fast carbon movement through the atmosphere, terrestrial biosphere, ocean, and geosphere. The deep carbon cycle is intimately connected to the movement of carbon in the Earth's surface and atmosphere. If the process did not exist, carbon would remain in the atmosphere, where it would accumulate to extremely high levels over long periods of time. Therefore, by allowing carbon to return to the Earth, the deep carbon cycle plays a critical role in maintaining the terrestrial conditions necessary for life to exist.
Furthermore, the process is also significant simply due to the massive quantities of carbon it transports through the planet. In fact, studying the composition of basaltic magma and measuring carbon dioxide flux out of volcanoes reveals that the amount of carbon in the mantle is actually greater than that on the Earth's surface by a factor of one thousand. Drilling down and physically observing deep-Earth carbon processes is evidently extremely difficult, as the lower mantle and core extend from 660 to 2,891 km and 2,891 to 6,371 km deep into the Earth respectively. Accordingly, not much is conclusively known regarding the role of carbon in the deep Earth. Nonetheless, several pieces of evidence—many of which come from laboratory simulations of deep Earth conditions—have indicated mechanisms for the element's movement down into the lower mantle, as well as the forms that carbon takes at the extreme temperatures and pressures of said layer. Furthermore, techniques like seismology have led to a greater understanding of the potential presence of carbon in the Earth's core.
Carbon in the lower mantle
Carbon principally enters the mantle in the form of carbonate-rich sediments on tectonic plates of ocean crust, which pull the carbon into the mantle upon undergoing subduction. Not much is known about carbon circulation in the mantle, especially in the deep Earth, but many studies have attempted to augment our understanding of the element's movement and forms within the region. For instance, a 2011 study demonstrated that carbon cycling extends all the way to the lower mantle. The study analyzed rare, super-deep diamonds at a site in Juina, Brazil, determining that the bulk composition of some of the diamonds' inclusions matched the expected result of basalt melting and crystallisation under lower mantle temperatures and pressures. Thus, the investigation's findings indicate that pieces of basaltic oceanic lithosphere act as the principal transport mechanism for carbon to Earth's deep interior. These subducted carbonates can interact with lower mantle silicates, eventually forming super-deep diamonds like the ones found.
However, carbonates descending to the lower mantle encounter other fates in addition to forming diamonds. In 2011, carbonates were subjected to an environment similar to conditions 1800 km deep into the Earth, well within the lower mantle. Doing so resulted in the formation of magnesite, siderite, and numerous varieties of graphite. Other experiments, as well as petrologic observations, support this claim, indicating that magnesite is actually the most stable carbonate phase throughout most of the mantle. This is largely a result of its higher melting temperature. Consequently, scientists have concluded that carbonates undergo reduction as they descend into the mantle before being stabilised at depth by low oxygen fugacity environments. Magnesium, iron, and other metallic compounds act as buffers throughout the process. The presence of reduced, elemental forms of carbon like graphite would indicate that carbon compounds are reduced as they descend into the mantle.
Polymorphism alters carbonate compounds' stability at different depths within the Earth. To illustrate, laboratory simulations and density functional theory calculations suggest that tetrahedrally coordinated carbonates are most stable at depths approaching the core–mantle boundary. A 2015 study indicates that the lower mantle's high pressure causes carbon bonds to transition from sp2 to sp3 hybridised orbitals, resulting in carbon bonding tetrahedrally to oxygen. CO3 trigonal groups cannot form polymerisable networks, while tetrahedral CO4 can, signifying an increase in carbon's coordination number and therefore drastic changes in carbonate compounds' properties in the lower mantle. As an example, preliminary theoretical studies suggest that high pressure causes carbonate melt viscosity to increase; the melts' lower mobility, a result of this increased viscosity, leads to large deposits of carbon deep in the mantle.
Accordingly, carbon can remain in the lower mantle for long periods of time, but large concentrations of carbon frequently find their way back to the lithosphere. This process, called carbon outgassing, is the result of carbonated mantle undergoing decompression melting, as well as mantle plumes carrying carbon compounds up towards the crust. Carbon is oxidised upon its ascent towards volcanic hotspots, where it is then released as CO2, so that the carbon matches the oxidation state of the basalts erupting in such areas.
Carbon in the core
Although the presence of carbon in the Earth's core is not well-constrained, recent studies suggest large inventories of carbon could be stored in this region. Shear (S) waves moving through the inner core travel at about fifty percent of the velocity expected for most iron-rich alloys. Because the core's composition is believed to be an alloy of crystalline iron and a small amount of nickel, this seismic anomaly indicates the presence of light elements, including carbon, in the core. In fact, studies using diamond anvil cells to replicate the conditions in the Earth's core indicate that iron carbide (Fe7C3) matches the inner core's wave speed and density. Therefore, the iron carbide model could serve as evidence that the core holds as much as 67% of the Earth's carbon. Furthermore, another study found that in the pressure and temperature conditions of the Earth's inner core, carbon dissolved in iron and formed a stable phase with the same Fe7C3 composition, albeit with a different structure from the one previously mentioned. In summary, although the amount of carbon potentially stored in the Earth's core is not known, recent studies indicate that the presence of iron carbides can explain some of the geophysical observations.
Human influence on fast carbon cycle
Since the Industrial Revolution, and especially since the end of WWII, human activity has substantially disturbed the global carbon cycle by redistributing massive amounts of carbon from the geosphere. Humans have also continued to shift the natural component functions of the terrestrial biosphere with changes to vegetation and other land use. Man-made (synthetic) carbon compounds have been designed and mass-manufactured that will persist for decades to millennia in air, water, and sediments as pollutants. Climate change is amplifying and forcing further indirect human changes to the carbon cycle as a consequence of various positive and negative feedbacks.
Climate change
Current trends in climate change lead to higher ocean temperatures and acidity, thus modifying marine ecosystems. Also, acid rain and polluted runoff from agriculture and industry change the ocean's chemical composition. Such changes can have dramatic effects on highly sensitive ecosystems such as coral reefs, thus limiting the ocean's ability to absorb carbon from the atmosphere on a regional scale and reducing oceanic biodiversity globally.
The exchanges of carbon between the atmosphere and other components of the Earth system, collectively known as the carbon cycle, currently constitute important negative (dampening) feedbacks on the effect of anthropogenic carbon emissions on climate change. Carbon sinks in the land and the ocean each currently take up about one-quarter of anthropogenic carbon emissions each year.
These feedbacks are expected to weaken in the future, amplifying the effect of anthropogenic carbon emissions on climate change. The degree to which they will weaken, however, is highly uncertain, with Earth system models predicting a wide range of land and ocean carbon uptakes even under identical atmospheric concentration or emission scenarios. Arctic methane emissions indirectly caused by anthropogenic global warming also affect the carbon cycle and contribute to further warming.
Fossil carbon extraction and burning
The largest and one of the fastest growing human impacts on the carbon cycle and biosphere is the extraction and burning of fossil fuels, which directly transfer carbon from the geosphere into the atmosphere. Carbon dioxide is also produced and released during the calcination of limestone for clinker production. Clinker is an industrial precursor of cement.
To date, about 450 gigatons of fossil carbon have been extracted in total; an amount approaching the carbon contained in all of Earth's living terrestrial biomass. Recent rates of global emissions directly into the atmosphere have exceeded the uptake by vegetation and the oceans. These sinks have been expected and observed to remove about half of the added atmospheric carbon within about a century. Nevertheless, sinks like the ocean have evolving saturation properties, and a substantial fraction (20–35%, based on coupled models) of the added carbon is projected to remain in the atmosphere for centuries to millennia.
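Applying the quoted 20–35% long-term airborne fraction to the cumulative extraction figure gives a rough sense of scale. This is only illustrative: not all extracted carbon is emitted to the atmosphere, so the result is an upper-end sketch.

```python
# Fraction of cumulative fossil carbon projected to persist in the atmosphere.
extracted_gtc = 450                  # Gt C extracted to date (from the text above)
long_term_fraction = (0.20, 0.35)    # coupled-model range quoted above

low = extracted_gtc * long_term_fraction[0]
high = extracted_gtc * long_term_fraction[1]
print(f"~{low:.0f}-{high:.0f} Gt C could remain airborne for centuries or more")
# -> ~90-158 Gt C
```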
Halocarbons
Halocarbons are compounds produced in far smaller quantities, developed for diverse uses throughout industry; for example as solvents and refrigerants. Nevertheless, the buildup of relatively small concentrations (parts per trillion) of chlorofluorocarbon, hydrofluorocarbon, and perfluorocarbon gases in the atmosphere is responsible for about 10% of the total direct radiative forcing from all long-lived greenhouse gases (as of 2019), which includes forcing from the much larger concentrations of carbon dioxide and methane. Chlorofluorocarbons also cause stratospheric ozone depletion. International efforts are ongoing under the Montreal Protocol and Kyoto Protocol to control rapid growth in the industrial manufacturing and use of these environmentally potent gases. For some applications more benign alternatives such as hydrofluoroolefins have been developed and are being gradually introduced.
Land use changes
Since the invention of agriculture, humans have directly and gradually influenced the carbon cycle over century-long timescales by modifying the mixture of vegetation in the terrestrial biosphere. Over the past several centuries, direct and indirect human-caused land use and land cover change (LUCC) has led to the loss of biodiversity, which lowers ecosystems' resilience to environmental stresses and decreases their ability to remove carbon from the atmosphere. More directly, it often leads to the release of carbon from terrestrial ecosystems into the atmosphere.
Deforestation for agricultural purposes removes forests, which hold large amounts of carbon, and replaces them, generally with agricultural or urban areas. Both of these replacement land cover types store comparatively small amounts of carbon so that the net result of the transition is that more carbon stays in the atmosphere. However, the effects on the atmosphere and overall carbon cycle can be intentionally and/or naturally reversed with reforestation.
See also
References
External links
Carbon Cycle Science Program – an interagency partnership.
NOAA's Carbon Cycle Greenhouse Gases Group
Global Carbon Project – initiative of the Earth System Science Partnership
UNEP – The present carbon cycle – Climate Change carbon levels and flows
Chemical oceanography
Photosynthesis
Soil biology
Soil chemistry
Numerical climate and weather models
Effects of climate change | Carbon cycle | [
"Chemistry",
"Biology"
] | 6,277 | [
"Photosynthesis",
"Chemical oceanography",
"Soil chemistry",
"Soil biology",
"Biochemistry"
] |
47,512 | https://en.wikipedia.org/wiki/Climate%20variability%20and%20change | Climate variability includes all the variations in the climate that last longer than individual weather events, whereas the term climate change only refers to those variations that persist for a longer period of time, typically decades or more. Climate change may refer to any time in Earth's history, but the term is now commonly used to describe contemporary climate change, often popularly referred to as global warming. Since the Industrial Revolution, the climate has increasingly been affected by human activities.
The climate system receives nearly all of its energy from the sun and radiates energy to outer space. The balance of incoming and outgoing energy and the passage of the energy through the climate system is Earth's energy budget. When the incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling.
The energy moving through Earth's climate system finds expression in weather, varying on geographic scales and time. Long-term averages and variability of weather in a region constitute the region's climate. Such changes can be the result of "internal variability", when natural processes inherent to the various parts of the climate system alter the distribution of energy. Examples include variability in ocean basins such as the Pacific decadal oscillation and Atlantic multidecadal oscillation. Climate variability can also result from external forcing, when events outside of the climate system's components produce changes within the system. Examples include changes in solar output and volcanism.
Climate variability has consequences for sea level changes, plant life, and mass extinctions; it also affects human societies.
Terminology
Climate variability is the term to describe variations in the mean state and other characteristics of climate (such as the likelihood of extreme weather) "on all spatial and temporal scales beyond that of individual weather events." Some of the variability does not appear to be caused by known systems and occurs at seemingly random times. Such variability is called random variability or noise. On the other hand, periodic variability occurs relatively regularly and in distinct modes of variability or climate patterns.
The term climate change is often used to refer specifically to anthropogenic climate change. Anthropogenic climate change is caused by human activity, as opposed to changes in climate that may have resulted as part of Earth's natural processes. Global warming became the dominant popular term in 1988, but within scientific journals global warming refers to surface temperature increases while climate change includes global warming and everything else that increasing greenhouse gas levels affect.
A related term, climatic change, was proposed by the World Meteorological Organization (WMO) in 1966 to encompass all forms of climatic variability on time-scales longer than 10 years, but regardless of cause. During the 1970s, the term climate change replaced climatic change to focus on anthropogenic causes, as it became clear that human activities had a potential to drastically alter the climate. Climate change was incorporated in the title of the Intergovernmental Panel on Climate Change (IPCC) and the UN Framework Convention on Climate Change (UNFCCC). Climate change is now used as both a technical description of the process, as well as a noun used to describe the problem.
Causes
On the broadest scale, the rate at which energy is received from the Sun and the rate at which it is lost to space determine the equilibrium temperature and climate of Earth. This energy is distributed around the globe by winds, ocean currents, and other mechanisms to affect the climates of different regions.
Factors that can shape climate are called climate forcings or "forcing mechanisms". These include processes such as variations in solar radiation, variations in the Earth's orbit, variations in the albedo or reflectivity of the continents, atmosphere, and oceans, mountain-building and continental drift and changes in greenhouse gas concentrations. External forcing can be either anthropogenic (e.g. increased emissions of greenhouse gases and dust) or natural (e.g., changes in solar output, the Earth's orbit, volcano eruptions). There are a variety of climate change feedbacks that can either amplify or diminish the initial forcing. There are also key thresholds which when exceeded can produce rapid or irreversible change.
Some parts of the climate system, such as the oceans and ice caps, respond more slowly in reaction to climate forcings, while others respond more quickly. An example of fast change is the atmospheric cooling after a volcanic eruption, when volcanic ash reflects sunlight. Thermal expansion of ocean water after atmospheric warming is slow, and can take thousands of years. A combination is also possible, e.g., sudden loss of albedo in the Arctic Ocean as sea ice melts, followed by more gradual thermal expansion of the water.
Climate variability can also occur due to internal processes. Internal unforced processes often involve changes in the distribution of energy in the ocean and atmosphere, for instance, changes in the thermohaline circulation.
Internal variability
Climatic changes due to internal variability sometimes occur in cycles or oscillations. For other types of natural climatic change, we cannot predict when they will happen; such change is called random or stochastic. From a climate perspective, the weather can be considered random. If there are few clouds in a particular year, there is an energy imbalance and extra heat can be absorbed by the oceans. Due to climate inertia, this signal can be 'stored' in the ocean and be expressed as variability on longer time scales than the original weather disturbances. If the weather disturbances are completely random, occurring as white noise, the inertia of glaciers or oceans can transform this into climate changes where longer-duration oscillations are also larger oscillations, a phenomenon called red noise. Many climate changes have a random aspect and a cyclical aspect. This behavior is dubbed stochastic resonance. Half of the 2021 Nobel Prize in Physics was awarded to Klaus Hasselmann for this work, jointly with Syukuro Manabe for related work on climate modelling; the other half was awarded to Giorgio Parisi, who with collaborators introduced the concept of stochastic resonance, though mainly for work in theoretical physics.
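The transformation of white-noise weather into red-noise climate variability described above can be illustrated with a minimal simulation of a Hasselmann-type stochastic model; the 5-year ocean damping timescale is an arbitrary illustrative choice, not a value from this article.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 12      # time step: one month, in years
tau = 5.0          # assumed ocean damping timescale, years
n = 12 * 200       # 200 years of monthly steps

T = np.zeros(n)                   # temperature anomaly of the slow reservoir
weather = rng.normal(size=n)      # white-noise "weather" forcing
for i in range(1, n):
    # dT/dt = -T/tau + noise, discretised; this is an AR(1) process
    T[i] = T[i - 1] - dt * T[i - 1] / tau + np.sqrt(dt) * weather[i]

# The spectrum of T rises toward low frequencies (red noise) even though the
# forcing itself has a flat (white) spectrum.
```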
Ocean-atmosphere variability
The ocean and atmosphere can work together to spontaneously generate internal climate variability that can persist for years to decades at a time. These variations can affect global average surface temperature by redistributing heat between the deep ocean and the atmosphere and/or by altering the cloud/water vapor/sea ice distribution which can affect the total energy budget of the Earth.
Oscillations and cycles
A climate oscillation or climate cycle is any recurring cyclical oscillation within global or regional climate. They are quasiperiodic (not perfectly periodic), so a Fourier analysis of the data does not have sharp peaks in the spectrum. Many oscillations on different time-scales have been found or hypothesized:
the El Niño–Southern Oscillation (ENSO) – A large scale pattern of warmer (El Niño) and colder (La Niña) tropical sea surface temperatures in the Pacific Ocean with worldwide effects. It is a self-sustaining oscillation, whose mechanisms are well-studied. ENSO is the most prominent known source of inter-annual variability in weather and climate around the world. The cycle occurs every two to seven years, with El Niño lasting nine months to two years within the longer term cycle. The cold tongue of the equatorial Pacific Ocean is not warming as fast as the rest of the ocean, due to increased upwelling of cold waters off the west coast of South America.
the Madden–Julian oscillation (MJO) – An eastward moving pattern of increased rainfall over the tropics with a period of 30 to 60 days, observed mainly over the Indian and Pacific Oceans.
the North Atlantic oscillation (NAO) – Indices of the NAO are based on the difference of normalized sea-level pressure (SLP) between Ponta Delgada, Azores and Stykkishólmur/Reykjavík, Iceland. Positive values of the index indicate stronger-than-average westerlies over the middle latitudes (a sketch of this index construction appears after this list).
the Quasi-biennial oscillation – a well-understood oscillation in wind patterns in the stratosphere around the equator. Over a period of 28 months the dominant wind changes from easterly to westerly and back.
Pacific Centennial Oscillation - a climate oscillation predicted by some climate models
the Pacific decadal oscillation – The dominant pattern of sea surface variability in the North Pacific on a decadal scale. During a "warm", or "positive", phase, the west Pacific becomes cool and part of the eastern ocean warms; during a "cool" or "negative" phase, the opposite pattern occurs. It is thought to be not a single phenomenon, but rather a combination of different physical processes.
the Interdecadal Pacific oscillation (IPO) – Basin wide variability in the Pacific Ocean with a period between 20 and 30 years.
the Atlantic multidecadal oscillation – A pattern of variability in the North Atlantic of about 55 to 70 years, with effects on rainfall, droughts and hurricane frequency and intensity.
North African climate cycles – climate variation driven by the North African Monsoon, with a period of tens of thousands of years.
the Arctic oscillation (AO) and Antarctic oscillation (AAO) – The annular modes are naturally occurring, hemisphere-wide patterns of climate variability. On timescales of weeks to months they explain 20–30% of the variability in their respective hemispheres: the Northern Annular Mode or Arctic oscillation (AO) in the Northern Hemisphere, and the Southern Annular Mode or Antarctic oscillation (AAO) in the Southern Hemisphere. The annular modes have a strong influence on the temperature and precipitation of mid-to-high latitude land masses, such as Europe and Australia, by altering the average paths of storms. The NAO can be considered a regional index of the AO/NAM. They are defined as the first EOF of sea level pressure or geopotential height from 20°N to 90°N (NAM) or 20°S to 90°S (SAM).
Dansgaard–Oeschger cycles – occurring on roughly 1,500-year cycles during the Last Glacial Maximum
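As noted in the NAO entry above, that index is built from the difference of standardised station pressures. The sketch below uses synthetic pressure series as stand-ins for real observations.

```python
import numpy as np

rng = np.random.default_rng(1)
slp_azores = 1020 + rng.normal(0, 3, size=120)    # hPa, synthetic monthly SLP
slp_iceland = 1005 + rng.normal(0, 5, size=120)   # hPa, synthetic monthly SLP

def standardise(x):
    """Remove the mean and divide by the standard deviation."""
    return (x - x.mean()) / x.std()

# NAO-style index: normalised Azores SLP minus normalised Iceland SLP.
nao_index = standardise(slp_azores) - standardise(slp_iceland)
# Positive values correspond to stronger-than-average mid-latitude westerlies.
```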
Ocean current changes
The oceanic aspects of climate variability can generate variability on centennial timescales due to the ocean having hundreds of times more mass than the atmosphere, and thus very high thermal inertia. For example, alterations to ocean processes such as thermohaline circulation play a key role in redistributing heat in the world's oceans.
Ocean currents transport a lot of energy from the warm tropical regions to the colder polar regions. Changes occurring around the last ice age (in technical terms, the last glacial period) show that the circulation in the North Atlantic can change suddenly and substantially, leading to global climate changes, even though the total amount of energy coming into the climate system did not change much. These large changes may have come from so-called Heinrich events, where internal instability of ice sheets caused huge icebergs to be released into the ocean. When the ice sheet melts, the resulting water is very low in salt and cold, driving changes in circulation.
Life
Life affects climate through its role in the carbon and water cycles and through such mechanisms as albedo, evapotranspiration, cloud formation, and weathering. Examples of how life may have affected past climate include:
glaciation 2.3 billion years ago triggered by the evolution of oxygenic photosynthesis, which depleted the atmosphere of the greenhouse gas carbon dioxide and introduced free oxygen
another glaciation 300 million years ago ushered in by long-term burial of decomposition-resistant detritus of vascular land-plants (creating a carbon sink and forming coal)
termination of the Paleocene–Eocene Thermal Maximum 55 million years ago by flourishing marine phytoplankton
reversal of global warming 49 million years ago by 800,000 years of arctic azolla blooms
global cooling over the past 40 million years driven by the expansion of grass-grazer ecosystems
External climate forcing
Greenhouse gases
Whereas greenhouse gases released by the biosphere are often seen as a feedback or internal climate process, greenhouse gases emitted from volcanoes are typically classified as external by climatologists. Greenhouse gases, such as carbon dioxide, methane and nitrous oxide, heat the climate system by trapping infrared light. Volcanoes are also part of the extended carbon cycle. Over very long (geological) time periods, they release carbon dioxide from the Earth's crust and mantle, counteracting the uptake by sedimentary rocks and other geological carbon dioxide sinks.
Since the Industrial Revolution, humanity has been adding to greenhouse gases by emitting CO2 from fossil fuel combustion, changing land use through deforestation, and has further altered the climate with aerosols (particulate matter in the atmosphere), release of trace gases (e.g. nitrogen oxides, carbon monoxide, or methane). Other factors, including land use, ozone depletion, animal husbandry (ruminant animals such as cattle produce methane), and deforestation, also play a role.
The US Geological Survey estimates are that volcanic emissions are at a much lower level than the effects of current human activities, which generate 100–300 times the amount of carbon dioxide emitted by volcanoes. The annual amount put out by human activities may be greater than the amount released by supereruptions, the most recent of which was the Toba eruption in Indonesia 74,000 years ago.
Orbital variations
Slight variations in Earth's motion lead to changes in the seasonal distribution of sunlight reaching the Earth's surface and how it is distributed across the globe. There is very little change to the area-averaged annually averaged sunshine; but there can be strong changes in the geographical and seasonal distribution. The three types of kinematic change are variations in Earth's eccentricity, changes in the tilt angle of Earth's axis of rotation, and precession of Earth's axis. Combined, these produce Milankovitch cycles which affect climate and are notable for their correlation to glacial and interglacial periods, their correlation with the advance and retreat of the Sahara, and for their appearance in the stratigraphic record.
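The combined effect of the cycles can be illustrated by superposing sinusoids at their approximate textbook periods (about 100,000 years for eccentricity, 41,000 for obliquity and 23,000 for precession); these periods are standard external values, and the relative amplitudes below are arbitrary.

```python
import numpy as np

t = np.linspace(0, 500, 5001)                     # time in thousands of years
eccentricity = 1.0 * np.sin(2 * np.pi * t / 100)  # ~100 kyr cycle
obliquity = 0.7 * np.sin(2 * np.pi * t / 41)      # ~41 kyr cycle
precession = 0.5 * np.sin(2 * np.pi * t / 23)     # ~23 kyr cycle

# The beating of the three cycles modulates the seasonal and geographic
# distribution of insolation, which correlates with glacial-interglacial timing.
combined_forcing = eccentricity + obliquity + precession
```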
During the glacial cycles, there was a high correlation between CO2 concentrations and temperatures. Early studies indicated that CO2 concentrations lagged temperatures, but it has become clear that this is not always the case. When ocean temperatures increase, the solubility of CO2 decreases so that it is released from the ocean. The exchange of CO2 between the air and the ocean can also be impacted by further aspects of climatic change. These and other self-reinforcing processes allow small changes in Earth's motion to have a large effect on climate.
Solar output
The Sun is the predominant source of energy input to the Earth's climate system. Other sources include geothermal energy from the Earth's core, tidal energy from the Moon and heat from the decay of radioactive compounds. Both long- and short-term variations in solar intensity are known to affect global climate. Solar output varies on shorter time scales, including the 11-year solar cycle and longer-term modulations. The correlation between sunspots and climate is tenuous at best.
Three to four billion years ago, the Sun emitted only 75% as much power as it does today. If the atmospheric composition had been the same as today, liquid water should not have existed on the Earth's surface. However, there is evidence for the presence of water on the early Earth, in the Hadean and Archean eons, leading to what is known as the faint young Sun paradox. Hypothesized solutions to this paradox include a vastly different atmosphere, with much higher concentrations of greenhouse gases than currently exist. Over the following approximately 4 billion years, the energy output of the Sun increased. Over the next five billion years, the Sun's ultimate death as it becomes a red giant and then a white dwarf will have large effects on climate, with the red giant phase possibly ending any life on Earth that survives until that time.
Volcanism
The volcanic eruptions considered to be large enough to affect the Earth's climate on a scale of more than 1 year are the ones that inject over 100,000 tons of SO2 into the stratosphere. This is due to the optical properties of SO2 and sulfate aerosols, which strongly absorb or scatter solar radiation, creating a global layer of sulfuric acid haze. On average, such eruptions occur several times per century, and cause cooling (by partially blocking the transmission of solar radiation to the Earth's surface) for a period of several years. Although volcanoes are technically part of the lithosphere, which itself is part of the climate system, the IPCC explicitly defines volcanism as an external forcing agent.
Notable eruptions in the historical records are the 1991 eruption of Mount Pinatubo which lowered global temperatures by about 0.5 °C (0.9 °F) for up to three years, and the 1815 eruption of Mount Tambora causing the Year Without a Summer.
At a larger scale—a few times every 50 million to 100 million years—the eruption of large igneous provinces brings large quantities of igneous rock from the mantle and lithosphere to the Earth's surface. Carbon dioxide in the rock is then released into the atmosphere.
Small eruptions, with injections of less than 0.1 Mt of sulfur dioxide into the stratosphere, affect the atmosphere only subtly, as temperature changes are comparable with natural variability. However, because smaller eruptions occur at a much higher frequency, they too significantly affect Earth's atmosphere.
Plate tectonics
Over the course of millions of years, the motion of tectonic plates reconfigures global land and ocean areas and generates topography. This can affect both global and local patterns of climate and atmosphere-ocean circulation.
The position of the continents determines the geometry of the oceans and therefore influences patterns of ocean circulation. The locations of the seas are important in controlling the transfer of heat and moisture across the globe, and therefore, in determining global climate. A recent example of tectonic control on ocean circulation is the formation of the Isthmus of Panama about 5 million years ago, which shut off direct mixing between the Atlantic and Pacific Oceans. This strongly affected the ocean dynamics of what is now the Gulf Stream and may have led to Northern Hemisphere ice cover. During the Carboniferous period, about 300 to 360 million years ago, plate tectonics may have triggered large-scale storage of carbon and increased glaciation. Geologic evidence points to a "megamonsoonal" circulation pattern during the time of the supercontinent Pangaea, and climate modeling suggests that the existence of the supercontinent was conducive to the establishment of monsoons.
The size of continents is also important. Because of the stabilizing effect of the oceans on temperature, yearly temperature variations are generally lower in coastal areas than they are inland. A larger supercontinent will therefore have more area in which climate is strongly seasonal than will several smaller continents or islands.
Other mechanisms
It has been postulated that ionized particles known as cosmic rays could impact cloud cover and thereby the climate. As the sun shields the Earth from these particles, changes in solar activity were hypothesized to influence climate indirectly as well. To test the hypothesis, CERN designed the CLOUD experiment, which showed the effect of cosmic rays is too weak to influence climate noticeably.
Evidence exists that the Chicxulub asteroid impact some 66 million years ago had severely affected the Earth's climate. Large quantities of sulfate aerosols were kicked up into the atmosphere, decreasing global temperatures by up to 26 °C and producing sub-freezing temperatures for a period of 3–16 years. The recovery time for this event took more than 30 years. The large-scale use of nuclear weapons has also been investigated for its impact on the climate. The hypothesis is that soot released by large-scale fires blocks a significant fraction of sunlight for as much as a year, leading to a sharp drop in temperatures for a few years. This possible event is described as nuclear winter.
Humans' use of land impacts how much sunlight the surface reflects and the concentration of dust. Cloud formation is influenced not only by how much water is in the air and the temperature, but also by the amount of aerosols in the air such as dust. Globally, more dust is available if there are many regions with dry soils, little vegetation and strong winds.
Evidence and measurement of climate changes
Paleoclimatology is the study of changes in climate through the entire history of Earth. It uses a variety of proxy methods from the Earth and life sciences to obtain data preserved within things such as rocks, sediments, ice sheets, tree rings, corals, shells, and microfossils. It then uses the records to determine the past states of the Earth's various climate regions and its atmospheric system. Direct measurements give a more complete overview of climate variability.
Direct measurements
Climate changes that occurred after the widespread deployment of measuring devices can be observed directly. Reasonably complete global records of surface temperature are available beginning from the mid-late 19th century. Further observations are derived indirectly from historical documents. Satellite cloud and precipitation data has been available since the 1970s.
Historical climatology is the study of historical changes in climate and their effect on human history and development. The primary sources include written records such as sagas, chronicles, maps and local history literature as well as pictorial representations such as paintings, drawings and even rock art. Climate variability in the recent past may be derived from changes in settlement and agricultural patterns. Archaeological evidence, oral history and historical documents can offer insights into past changes in the climate. Changes in climate have been linked to the rise and the collapse of various civilizations.
Proxy measurements
Various archives of past climate are present in rocks, trees and fossils. From these archives, indirect measures of climate, so-called proxies, can be derived. Quantification of climatological variation of precipitation in prior centuries and epochs is less complete but approximated using proxies such as marine sediments, ice cores, cave stalagmites, and tree rings. Stress, such as from too little precipitation or unsuitable temperatures, can alter the growth rate of trees, which allows scientists to infer climate trends by analyzing the growth rate of tree rings. The branch of science that studies this is called dendroclimatology. Glaciers leave behind moraines that contain a wealth of material, including organic matter, quartz, and potassium that may be dated, recording the periods in which a glacier advanced and retreated.
Analysis of ice in cores drilled from an ice sheet such as the Antarctic ice sheet, can be used to show a link between temperature and global sea level variations. The air trapped in bubbles in the ice can also reveal the CO2 variations of the atmosphere from the distant past, well before modern environmental influences. The study of these ice cores has been a significant indicator of the changes in CO2 over many millennia, and continues to provide valuable information about the differences between ancient and modern atmospheric conditions. The 18O/16O ratio in calcite and ice core samples used to deduce ocean temperature in the distant past is an example of a temperature proxy method.
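The oxygen-isotope proxy mentioned above is conventionally expressed in delta notation relative to a reference standard; the defining formula below is the standard textbook definition rather than something given in this article.

```latex
\delta^{18}\mathrm{O} =
\left( \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
            {(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1 \right)
\times 1000 \quad \text{(in per mille, ‰)}
```

Higher δ18O in marine calcite generally reflects colder water and/or larger continental ice sheets, which is what allows the ratio to serve as a temperature and ice-volume proxy.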
The remnants of plants, and specifically pollen, are also used to study climatic change. Plant distributions vary under different climate conditions. Different groups of plants have pollen with distinctive shapes and surface textures, and since the outer surface of pollen is composed of a very resilient material, they resist decay. Changes in the type of pollen found in different layers of sediment indicate changes in plant communities. These changes are often a sign of a changing climate. As an example, pollen studies have been used to track changing vegetation patterns throughout the Quaternary glaciations and especially since the last glacial maximum. Remains of beetles are common in freshwater and land sediments. Different species of beetles tend to be found under different climatic conditions. Given the extensive lineage of beetles whose genetic makeup has not altered significantly over the millennia, knowledge of the present climatic range of the different species, and the age of the sediments in which remains are found, past climatic conditions may be inferred.
Analysis and uncertainties
One difficulty in detecting climate cycles is that the Earth's climate has been changing in non-cyclic ways over most paleoclimatological timescales. Currently we are in a period of anthropogenic global warming. In a larger timeframe, the Earth is emerging from the latest ice age, cooling from the Holocene climatic optimum and warming from the "Little Ice Age", which means that climate has been constantly changing over the last 15,000 years or so. During warm periods, temperature fluctuations are often of a lesser amplitude. The Pleistocene period, dominated by repeated glaciations, developed out of more stable conditions in the Miocene and Pliocene climate. Holocene climate has been relatively stable. All of these changes complicate the task of looking for cyclical behavior in the climate.
Positive feedback, negative feedback, and ecological inertia from the land-ocean-atmosphere system often attenuate or reverse smaller effects, whether from orbital forcings, solar variations or changes in concentrations of greenhouse gases. Certain feedbacks involving processes such as clouds are also uncertain; for contrails, natural cirrus clouds, oceanic dimethyl sulfide and a land-based equivalent, competing theories exist concerning effects on climatic temperatures, for example contrasting the Iris hypothesis and CLAW hypothesis.
Impacts
Life
Vegetation
A change in the type, distribution and coverage of vegetation may occur given a change in the climate. Some changes in climate may result in increased precipitation and warmth, resulting in improved plant growth and the subsequent sequestration of airborne CO2. Though an increase in CO2 may benefit plants, other factors can diminish this benefit. If there is an environmental stress such as drought, increased CO2 concentrations will not benefit the plant. So even though climate change increases atmospheric CO2, plants often cannot take advantage of it because other environmental stresses put pressure on them. However, sequestration of CO2 is expected to affect the rate of many natural cycles, such as plant litter decomposition rates. A gradual increase in warmth in a region will lead to earlier flowering and fruiting times, driving a change in the timing of life cycles of dependent organisms. Conversely, cold will cause plant bio-cycles to lag.
Larger, faster or more radical changes, however, may result in vegetation stress, rapid plant loss and desertification in certain circumstances. An example of this occurred during the Carboniferous Rainforest Collapse (CRC), an extinction event 300 million years ago. At this time vast rainforests covered the equatorial region of Europe and America. Climate change devastated these tropical rainforests, abruptly fragmenting the habitat into isolated 'islands' and causing the extinction of many plant and animal species.
Wildlife
One of the most important ways animals can deal with climatic change is migration to warmer or colder regions. On a longer timescale, evolution makes ecosystems including animals better adapted to a new climate. Rapid or large climate change can cause mass extinctions when creatures are stretched too far to be able to adapt.
Humanity
Collapses of past civilizations such as the Maya may be related to cycles of precipitation, especially drought, that in this example also correlates to the Western Hemisphere Warm Pool. Around 70,000 years ago the Toba supervolcano eruption created an especially cold period during the ice age, leading to a possible genetic bottleneck in human populations.
Changes in the cryosphere
Glaciers and ice sheets
Glaciers are considered among the most sensitive indicators of a changing climate. Their size is determined by a mass balance between snow input and melt output. As temperatures increase, glaciers retreat unless snow precipitation increases to make up for the additional melt. Glaciers grow and shrink due both to natural variability and external forcings. Variability in temperature, precipitation and hydrology can strongly determine the evolution of a glacier in a particular season.
The most significant climate processes since the middle to late Pliocene (approximately 3 million years ago) are the glacial and interglacial cycles. The present interglacial period (the Holocene) has lasted about 11,700 years. Shaped by orbital variations, responses such as the rise and fall of continental ice sheets and significant sea-level changes helped create the climate. Other changes, including Heinrich events, Dansgaard–Oeschger events and the Younger Dryas, however, illustrate how glacial variations may also influence climate without the orbital forcing.
Sea level change
During the Last Glacial Maximum, some 25,000 years ago, sea levels were roughly 130 m lower than today. The deglaciation afterwards was characterized by rapid sea level change. In the early Pliocene, global temperatures were 1–2˚C warmer than the present temperature, yet sea level was 15–25 meters higher than today.
Sea ice
Sea ice plays an important role in Earth's climate as it affects the total amount of sunlight that is reflected away from the Earth. In the past, the Earth's oceans have been almost entirely covered by sea ice on a number of occasions, when the Earth was in a so-called Snowball Earth state, and completely ice-free in periods of warm climate. When there is a lot of sea ice present globally, especially in the tropics and subtropics, the climate is more sensitive to forcings as the ice–albedo feedback is very strong.
Climate history
Various climate forcings are typically in flux throughout geologic time, and some processes of the Earth's temperature may be self-regulating. For example, during the Snowball Earth period, large glacial ice sheets extended to Earth's equator, covering nearly its entire surface, and very high albedo created extremely low temperatures, while the accumulation of snow and ice likely removed carbon dioxide through atmospheric deposition. However, the absence of plant cover to absorb atmospheric CO2 emitted by volcanoes meant that the greenhouse gas could accumulate in the atmosphere. There was also an absence of exposed silicate rocks, which use CO2 when they undergo weathering. This created a warming that later melted the ice and brought Earth's temperature back up.
Paleocene–Eocene Thermal Maximum
The Paleocene–Eocene Thermal Maximum (PETM) was a time period with a 5–8 °C global average temperature rise across the event. This climate event occurred at the time boundary of the Paleocene and Eocene geological epochs. During the event, large amounts of methane, a potent greenhouse gas, were released. The PETM represents a "case study" for modern climate change, as the greenhouse gases were released in a geologically relatively short amount of time. During the PETM, a mass extinction of organisms in the deep ocean took place.
The Cenozoic
Throughout the Cenozoic, multiple climate forcings led to warming and cooling of the atmosphere, which led to the early formation of the Antarctic ice sheet, subsequent melting, and its later reglaciation. The temperature changes occurred somewhat suddenly, at carbon dioxide concentrations of about 600–760 ppm and temperatures approximately 4 °C warmer than today. During the Pleistocene, cycles of glaciations and interglacials occurred on cycles of roughly 100,000 years, but may stay longer within an interglacial when orbital eccentricity approaches zero, as during the current interglacial. Previous interglacials such as the Eemian phase created temperatures higher than today, higher sea levels, and some partial melting of the West Antarctic ice sheet.
Climatological temperatures substantially affect cloud cover and precipitation. At lower temperatures, air can hold less water vapour, which can lead to decreased precipitation. During the Last Glacial Maximum of 18,000 years ago, thermal-driven evaporation from the oceans onto continental landmasses was low, causing large areas of extreme desert, including polar deserts (cold but with low rates of cloud cover and precipitation). In contrast, the world's climate was cloudier and wetter than today near the start of the warm Atlantic Period of 8000 years ago.
The Holocene
The Holocene is characterized by a long-term cooling starting after the Holocene Optimum, when temperatures were probably only just below current temperatures (second decade of the 21st century), and a strong African Monsoon created grassland conditions in the Sahara during the Neolithic Subpluvial. Since that time, several cooling events have occurred, including:
the Piora Oscillation
the Middle Bronze Age Cold Epoch
the Iron Age Cold Epoch
the Little Ice Age
the phase of cooling c. 1940–1970, which led to the global cooling hypothesis
In contrast, several warm periods have also taken place, and they include but are not limited to:
a warm period during the apex of the Minoan civilization
the Roman Warm Period
the Medieval Warm Period
Modern warming during the 20th century
Certain effects have occurred during these cycles. For example, during the Medieval Warm Period, the American Midwest was in drought, including the Sand Hills of Nebraska which were active sand dunes. The Black Death, the plague of Yersinia pestis, also occurred during Medieval temperature fluctuations, and may be related to changing climates.
Solar activity may have contributed to part of the modern warming that peaked in the 1930s. However, solar cycles fail to account for warming observed since the 1980s to the present day. Events such as the opening of the Northwest Passage and recent record low ice minima of the modern Arctic shrinkage have not taken place for at least several centuries, as early explorers were all unable to make an Arctic crossing, even in summer. Shifts in biomes and habitat ranges are also unprecedented, occurring at rates that do not coincide with known climate oscillations.
Modern climate change and global warming
As a consequence of humans emitting greenhouse gases, global surface temperatures have started rising. Global warming is an aspect of modern climate change, a term that also includes the observed changes in precipitation, storm tracks and cloudiness. As a consequence, glaciers worldwide have been found to be shrinking significantly. Land ice sheets in both Antarctica and Greenland have been losing mass since 2002 and have seen an acceleration of ice mass loss since 2009. Global sea levels have been rising as a consequence of thermal expansion and ice melt. The decline in Arctic sea ice, both in extent and thickness, over the last several decades is further evidence for rapid climate change.
Variability between regions
In addition to global climate variability and global climate change over time, numerous climatic variations occur contemporaneously across different physical regions.
The oceans' absorption of about 90% of excess heat has helped to cause land surface temperatures to grow more rapidly than sea surface temperatures. The Northern Hemisphere, having a larger landmass-to-ocean ratio than the Southern Hemisphere, shows greater average temperature increases. Variations across different latitude bands also reflect this divergence in average temperature increase, with the temperature increase of northern extratropics exceeding that of the tropics, which in turn exceeds that of the southern extratropics.
Upper regions of the atmosphere have been cooling contemporaneously with a warming in the lower atmosphere, confirming the action of the greenhouse effect and ozone depletion.
Observed regional climatic variations confirm predictions concerning ongoing changes, for example, by contrasting (smoother) year-to-year global variations with (more volatile) year-to-year variations in localized regions. Conversely, comparing different regions' warming patterns to their respective historical variabilities, allows the raw magnitudes of temperature changes to be placed in the perspective of what is normal variability for each region.
Regional variability observations permit study of regionalized climate tipping points such as rainforest loss, ice sheet and sea ice melt, and permafrost thawing. Such distinctions underlie research into a possible global cascade of tipping points.
See also
Climatological normal
Anthropocene
Notes
References
External links
Global Climate Change from NASA (US)
Intergovernmental Panel on Climate Change (IPCC)
Climate Variability – NASA Science
Climate Change and Variability, National Centers for Environmental Information
Climate and weather statistics
History of climate variability and change
Articles containing video clips
Climatology | Climate variability and change | [
"Physics"
] | 7,387 | [
"Weather",
"Physical phenomena",
"Climate and weather statistics"
] |
47,520 | https://en.wikipedia.org/wiki/Coccolithophore | Coccolithophores, or coccolithophorids, are single-celled organisms which are part of the phytoplankton, the autotrophic (self-feeding) component of the plankton community. They form a group of about 200 species, and belong either to the kingdom Protista, according to Robert Whittaker's five-kingdom system, or clade Hacrobia, according to a newer biological classification system. Within the Hacrobia, the coccolithophores are in the phylum or division Haptophyta, class Prymnesiophyceae (or Coccolithophyceae). Coccolithophores are almost exclusively marine, are photosynthetic and mixotrophic, and exist in large numbers throughout the sunlight zone of the ocean.
Coccolithophores are the most productive calcifying organisms on the planet, covering themselves with a calcium carbonate shell called a coccosphere. However, the reasons they calcify remain elusive. One key function may be that the coccosphere offers protection against microzooplankton predation, which is one of the main causes of phytoplankton death in the ocean.
Coccolithophores are ecologically important, and biogeochemically they play significant roles in the marine biological pump and the carbon cycle. Depending on habitat, they can produce up to 40 percent of the local marine primary production. They are of particular interest to those studying global climate change because, as ocean acidity increases, their coccoliths may become even more important as a carbon sink. Management strategies are being employed to prevent eutrophication-related coccolithophore blooms, as these blooms lead to a decrease in nutrient flow to lower levels of the ocean.
The most abundant species of coccolithophore, Emiliania huxleyi, belongs to the order Isochrysidales and family Noëlaerhabdaceae. It is found in temperate, subtropical, and tropical oceans. This makes E. huxleyi an important part of the planktonic base of a large proportion of marine food webs. It is also the fastest-growing coccolithophore in laboratory cultures. It is studied for the extensive blooms it forms in nutrient-depleted waters after the reformation of the summer thermocline, and for its production of molecules known as alkenones that are commonly used by earth scientists as a means to estimate past sea surface temperatures.
Overview
Coccolithophores (or coccolithophorids) form a group of about 200 phytoplankton species. They belong either to the kingdom Protista, according to Robert Whittaker's five-kingdom classification, or clade Hacrobia, according to the newer biological classification system. Within the Hacrobia, the coccolithophores are in the phylum or division Haptophyta, class Prymnesiophyceae (or Coccolithophyceae). Coccolithophores are distinguished by special calcium carbonate plates (or scales) of uncertain function called coccoliths, which are also important microfossils. However, there are Prymnesiophyceae species lacking coccoliths (e.g. in genus Prymnesium), so not every member of Prymnesiophyceae is a coccolithophore.
Coccolithophores are single-celled phytoplankton that produce small calcium carbonate (CaCO3) scales (coccoliths) which cover the cell surface in the form of a spherical coating, called a coccosphere. Many species are also mixotrophs, and are able to photosynthesise as well as ingest prey.
Coccolithophores have been an integral part of marine plankton communities since the Jurassic. Today, coccolithophores contribute ~1–10% to inorganic carbon fixation (calcification) to total carbon fixation (calcification plus photosynthesis) in the surface ocean and ~50% to pelagic CaCO3 sediments. Their calcareous shell increases the sinking velocity of photosynthetically fixed CO2 into the deep ocean by ballasting organic matter. At the same time, the biogenic precipitation of calcium carbonate during coccolith formation reduces the total alkalinity of seawater and releases CO2. Thus, coccolithophores play an important role in the marine carbon cycle by influencing the efficiency of the biological carbon pump and the oceanic uptake of atmospheric CO2.
As of 2021, it is not known why coccolithophores calcify and how their ability to produce coccoliths is associated with their ecological success. The most plausible benefit of having a coccosphere seems to be a protection against predators or viruses. Viral infection is an important cause of phytoplankton death in the oceans, and it has recently been shown that calcification can influence the interaction between a coccolithophore and its virus. The major predators of marine phytoplankton are microzooplankton like ciliates and dinoflagellates. These are estimated to consume about two-thirds of the primary production in the ocean and microzooplankton can exert a strong grazing pressure on coccolithophore populations. Although calcification does not prevent predation, it has been argued that the coccosphere reduces the grazing efficiency by making it more difficult for the predator to utilise the organic content of coccolithophores. Heterotrophic protists are able to selectively choose prey on the basis of its size or shape and through chemical signals and may thus favor other prey that is available and not protected by coccoliths.
Structure
Coccolithophores are spherical cells about 5–100 micrometres across, enclosed by calcareous plates called coccoliths, which are about 2–25 micrometres across. Each cell contains two brown chloroplasts which surround the nucleus.
Enclosed in each coccosphere is a single cell with membrane bound organelles. Two large chloroplasts with brown pigment are located on either side of the cell and surround the nucleus, mitochondria, golgi apparatus, endoplasmic reticulum, and other organelles. Each cell also has two flagellar structures, which are involved not only in motility, but also in mitosis and formation of the cytoskeleton. In some species, a functional or vestigial haptonema is also present. This structure, which is unique to haptophytes, coils and uncoils in response to environmental stimuli. Although poorly understood, it has been proposed to be involved in prey capture.
Ecology
Life history strategy
The complex life cycle of coccolithophores is known as a haplodiplontic life cycle, and is characterized by an alternation of both asexual and sexual phases. The asexual phase is known as the haploid phase, while the sexual phase is known as the diploid phase. During the haploid phase, coccolithophores produce haploid cells through mitosis. These haploid cells can then divide further through mitosis or undergo sexual reproduction with other haploid cells. The resulting diploid cell goes through meiosis to produce haploid cells again, starting the cycle over. With coccolithophores, asexual reproduction by mitosis is possible in both phases of the life cycle, which is a contrast with most other organisms that have alternating life cycles. Both abiotic and biotic factors may affect the frequency with which each phase occurs.
Coccolithophores reproduce asexually through binary fission. In this process the coccoliths from the parent cell are divided between the two daughter cells. The diploid stages of coccolithophores have prompted suggestions that a sexual reproduction process also occurs, but this process has never been directly observed.
K- or r-selected strategies of coccolithophores depend on their life cycle stage. When coccolithophores are diploid, they are r-selected. In this phase they tolerate a wider range of nutrient compositions. When they are haploid they are K-selected and are often more competitive in stable low-nutrient environments. Most coccolithophores are K-strategists and are usually found in nutrient-poor surface waters. They are poor competitors when compared to other phytoplankton and thrive in habitats where other phytoplankton would not survive. These two stages in the life cycle of coccolithophores occur seasonally, where more nutrition is available in warmer seasons and less is available in cooler seasons. This type of life cycle is known as a complex heteromorphic life cycle.
Global distribution
Coccolithophores occur throughout the world's oceans. Their distribution varies vertically by stratified layers in the ocean and geographically by different temporal zones. While most modern coccolithophores can be located in their associated stratified oligotrophic conditions, the areas where coccolithophores are most abundant and species diversity is highest lie in subtropical zones with a temperate climate. While water temperature and the amount of light intensity entering the water's surface are the more influential factors in determining where species are located, ocean currents can also determine the location where certain species of coccolithophores are found.
Although motility and colony formation vary according to the life cycle of different coccolithophore species, there is often alternation between a motile, haploid phase, and a non-motile diploid phase. In both phases, the organism's dispersal is largely due to ocean currents and circulation patterns.
Within the Pacific Ocean, approximately 90 species have been identified, with six separate zones relating to different Pacific currents that contain unique groupings of different species of coccolithophores. The highest diversity of coccolithophores in the Pacific Ocean was in an area of the ocean considered the Central North Zone, which is an area between 30°N and 5°N, composed of the North Equatorial Current and the Equatorial Countercurrent. These two currents move in opposite directions, east and west, allowing for a strong mixing of waters and allowing a large variety of species to populate the area.
In the Atlantic Ocean, the most abundant species are E. huxleyi and Florisphaera profunda with smaller concentrations of the species Umbellosphaera irregularis, Umbellosphaera tenuis and different species of Gephyrocapsa. Deep-dwelling coccolithophore species abundance is greatly affected by nutricline and thermocline depths. These coccolithophores increase in abundance when the nutricline and thermocline are deep and decrease when they are shallow.
The complete distribution of coccolithophores is currently not known and some regions, such as the Indian Ocean, are not as well studied as other locations in the Pacific and Atlantic Oceans. It is also very hard to explain distributions due to multiple constantly changing factors involving the ocean's properties, such as coastal and equatorial upwelling, frontal systems, benthic environments, unique oceanic topography, and pockets of isolated high or low water temperatures.
The upper photic zone is low in nutrient concentration, high in light intensity and penetration, and usually higher in temperature. The lower photic zone is high in nutrient concentration, low in light intensity and penetration, and relatively cool. The middle photic zone has intermediate values between those of the upper and lower photic zones.
Great Calcite Belt
The Great Calcite Belt of the Southern Ocean is a region of elevated summertime upper ocean calcite concentration derived from coccolithophores, despite the region being known for its diatom predominance. The overlap of two major phytoplankton groups, coccolithophores and diatoms, in the dynamic frontal systems characteristic of this region provides an ideal setting to study environmental influences on the distribution of different species within these taxonomic groups.
The Great Calcite Belt, defined as an elevated particulate inorganic carbon (PIC) feature occurring alongside seasonally elevated chlorophyll a in austral spring and summer in the Southern Ocean, plays an important role in climate fluctuations, accounting for over 60% of the Southern Ocean area (30–60° S). The region between 30° and 50° S has the highest uptake of anthropogenic carbon dioxide (CO2) alongside the North Atlantic and North Pacific oceans.
Effect of global climate change on distribution
Recent studies show that climate change has direct and indirect impacts on coccolithophore distribution and productivity. They will inevitably be affected by the increasing temperatures and thermal stratification of the top layer of the ocean, since these are prime controls on their ecology, although it is not clear whether global warming would result in a net increase or decrease of coccolithophores. As they are calcifying organisms, it has been suggested that ocean acidification due to increasing carbon dioxide could severely affect coccolithophores. Recent decades have nonetheless seen a sharp increase in coccolithophore populations.
Role in the food web
Coccolithophores are one of the more abundant primary producers in the ocean. As such, they are a large contributor to the primary productivity of the tropical and subtropical oceans; however, exactly how much they contribute has yet to be determined.
Dependence on nutrients
The ratio between the concentrations of nitrogen, phosphorus and silicate in particular areas of the ocean dictates competitive dominance within phytoplankton communities. Each ratio essentially tips the odds in favor of either diatoms or other groups of phytoplankton, such as coccolithophores. A low ratio of silicate to nitrogen and phosphorus allows coccolithophores to outcompete other phytoplankton species; however, when ratios of silicate to phosphorus to nitrogen are high, coccolithophores are outcompeted by diatoms. The intensification of agricultural processes leads to eutrophication of waters and thus to coccolithophore blooms in these high-nitrogen and -phosphorus, low-silicate environments.
Impact on water column productivity
The calcite in calcium carbonate allows coccoliths to scatter more light than they absorb. This has two important consequences: 1) Surface waters become brighter, meaning they have a higher albedo, and 2) there is induced photoinhibition, meaning photosynthetic production is diminished due to an excess of light. In case 1), a high concentration of coccoliths leads to a simultaneous increase in surface water temperature and decrease in the temperature of deeper waters. This results in more stratification in the water column and a decrease in the vertical mixing of nutrients. However, a 2012 study estimated that the overall effect of coccolithophores on the increase in radiative forcing of the ocean is less than that from anthropogenic factors. Therefore, the overall result of large blooms of coccolithophores is a decrease in water column productivity, rather than a contribution to global warming.
Predator-prey interactions
Their predators include the common predators of all phytoplankton, including small fish, zooplankton, and shellfish larvae. Viruses specific to some coccolithophore species have been isolated from several locations worldwide and appear to play a major role in spring bloom dynamics.
Toxicity
No environmental evidence of coccolithophore toxicity has been reported, but they belong to the class Prymnesiophyceae, which contains orders with toxic species. Toxic species have been found in the genera Prymnesium Massart and Chrysochromulina Lackey. Members of the genus Prymnesium have been found to produce haemolytic compounds, the agent responsible for toxicity. Some of these toxic species are responsible for large fish kills and can be accumulated in organisms such as shellfish, transferring the toxins through the food chain. In laboratory tests for toxicity, members of the oceanic coccolithophore genera Emiliania, Gephyrocapsa, Calcidiscus and Coccolithus were shown to be non-toxic, as were species of the coastal genus Hymenomonas; however, several species of Pleurochrysis and Jomonlithus, both coastal genera, were toxic to Artemia.
Community interactions
Coccolithophorids are predominantly found as single, free-floating haploid or diploid cells.
Competition
Most phytoplankton need sunlight and nutrients from the ocean to survive, so they thrive in areas with large inputs of nutrient rich water upwelling from the lower levels of the ocean. Most coccolithophores require sunlight only for energy production, and have a higher ratio of nitrate uptake over ammonium uptake (nitrogen is required for growth and can be used directly from nitrate but not ammonium). Because of this they thrive in still, nutrient-poor environments where other phytoplankton are starving. Trade-offs associated with these faster growth rates include a smaller cell radius and lower cell volume than other types of phytoplankton.
Viral infection and coevolution
Giant DNA-containing viruses are known to lytically infect coccolithophores, particularly E. huxleyi. These viruses, known as E. huxleyi viruses (EhVs), appear to infect the coccosphere-coated diploid phase of the life cycle almost exclusively. It has been proposed that as the haploid organism is not infected and therefore not affected by the virus, the co-evolutionary "arms race" between coccolithophores and these viruses does not follow the classic Red Queen evolutionary framework, but instead a "Cheshire Cat" ecological dynamic. More recent work has suggested that viral synthesis of sphingolipids and induction of programmed cell death provides a more direct link to study a Red Queen-like coevolutionary arms race, at least between the coccolithoviruses and the diploid organism.
Evolution and diversity
Coccolithophores are members of the clade Haptophyta, which is a sister clade to Centrohelida, which are both in Haptista. The oldest known coccolithophores are known from the Late Triassic, around the Norian-Rhaetian boundary. Diversity steadily increased over the course of the Mesozoic, reaching its apex during the Late Cretaceous. However, there was a sharp drop during the Cretaceous-Paleogene extinction event, when more than 90% of coccolithophore species became extinct. Coccoliths reached another, lower apex of diversity during the Paleocene-Eocene thermal maximum, but have subsequently declined since the Oligocene due to decreasing global temperatures, with species that produced large and heavily calcified coccoliths most heavily affected.
Coccolithophore shells
Exoskeleton: coccospheres and coccoliths
Each coccolithophore encloses itself in a protective shell of coccoliths, calcified scales which make up its exoskeleton or coccosphere. The coccoliths are created inside the coccolithophore cell and while some species maintain a single layer throughout life only producing new coccoliths as the cell grows, others continually produce and shed coccoliths.
Composition
The primary constituent of coccoliths is calcium carbonate, or chalk. Calcium carbonate is transparent, so the organisms' photosynthetic activity is not compromised by encapsulation in a coccosphere.
Formation
Coccoliths are produced by a biomineralization process known as coccolithogenesis. Generally, calcification of coccoliths occurs in the presence of light, and these scales are produced much more during the exponential phase of growth than the stationary phase. Although not yet entirely understood, the biomineralization process is tightly regulated by calcium signaling. Calcite formation begins in the Golgi complex, where protein templates nucleate the formation of CaCO3 crystals and complex acidic polysaccharides control the shape and growth of these crystals. As each scale is produced, it is exported in a Golgi-derived vesicle and added to the inner surface of the coccosphere. This means that the most recently produced coccoliths may lie beneath older coccoliths.
Depending upon the phytoplankton's stage in the life cycle, two different types of coccoliths may be formed. Holococcoliths are produced only in the haploid phase, lack radial symmetry, and are composed of anywhere from hundreds to thousands of similar minute (ca 0.1 μm) rhombic calcite crystals. These crystals are thought to form at least partially outside the cell. Heterococcoliths occur only in the diploid phase, have radial symmetry, and are composed of relatively few complex crystal units (fewer than 100). Although they are rare, combination coccospheres, which contain both holococcoliths and heterococcoliths, have been observed in the plankton recording coccolithophore life cycle transitions. Finally, the coccospheres of some species are highly modified with various appendages made of specialized coccoliths.
Function
While the exact function of the coccosphere is unclear, many potential functions have been proposed. Most obviously, coccoliths may protect the phytoplankton from predators. It also appears that the coccosphere helps them maintain a more stable pH. During photosynthesis carbon dioxide is removed from the water, making it more basic. Calcification also removes carbon dioxide, but the chemistry behind it leads to the opposite pH reaction: it makes the water more acidic. The combination of photosynthesis and calcification therefore evens out each other's effects on pH. In addition, these exoskeletons may confer an advantage in energy production, as coccolithogenesis seems highly coupled with photosynthesis. Organic precipitation of calcium carbonate from bicarbonate solution produces free carbon dioxide directly within the cellular body of the alga; this additional source of gas is then available to the coccolithophore for photosynthesis. It has been suggested that they may provide a cell-wall-like barrier to isolate intracellular chemistry from the marine environment. More specific, defensive properties of coccoliths may include protection from osmotic changes, chemical or mechanical shock, and short-wavelength light. It has also been proposed that the added weight of multiple layers of coccoliths allows the organism to sink to lower, more nutrient-rich layers of the water and, conversely, that coccoliths add buoyancy, stopping the cell from sinking to dangerous depths. Coccolith appendages have also been proposed to serve several functions, such as inhibiting grazing by zooplankton.
Uses
Coccoliths are the main component of the Chalk, a Late Cretaceous rock formation which outcrops widely in southern England and forms the White Cliffs of Dover, and of other similar rocks in many other parts of the world. At the present day, sedimented coccoliths are a major component of the calcareous oozes that cover up to 35% of the ocean floor and are kilometres thick in places. Because of their abundance and wide geographic ranges, the coccoliths which make up the layers of this ooze and the chalky sediment formed as it is compacted serve as valuable microfossils.
Calcification, the biological production of calcium carbonate (CaCO3), is a key process in the marine carbon cycle. Coccolithophores are the major planktonic group responsible for pelagic CaCO3 production. The diagram on the right shows the energetic costs of coccolithophore calcification:
(A) Transport processes include the transport into the cell from the surrounding seawater of primary calcification substrates Ca2+ and HCO3− (black arrows) and the removal of the end product H+ from the cell (gray arrow). The transport of Ca2+ through the cytoplasm to the CV is the dominant cost associated with calcification.
(B) Metabolic processes include the synthesis of CAPs (gray rectangles) by the Golgi complex (white rectangles) that regulate the nucleation and geometry of CaCO3 crystals. The completed coccolith (gray plate) is a complex structure of intricately arranged CAPs and CaCO3 crystals.
(C) Mechanical and structural processes account for the secretion of the completed coccoliths that are transported from their original position adjacent to the nucleus to the cell periphery, where they are transferred to the surface of the cell. The costs associated with these processes are likely to be comparable to organic-scale exocytosis in noncalcifying haptophyte algae.
The diagram on the left shows the benefits of coccolithophore calcification. (A) Accelerated photosynthesis includes CCM (1) and enhanced light uptake via scattering of scarce photons for deep-dwelling species (2). (B) Protection from photodamage includes sunshade protection from ultraviolet (UV) light and photosynthetic active radiation (PAR) (1) and energy dissipation under high-light conditions (2). (C) Armor protection includes protection against viral/bacterial infections (1) and grazing by selective (2) and nonselective (3) grazers.
The degree to which calcification can adapt to ocean acidification is presently unknown. Cell physiological examinations found the essential H+ efflux (stemming from the use of HCO3− for intra-cellular calcification) to become more costly with ongoing ocean acidification as the electrochemical H+ inside-out gradient is reduced and passive proton outflow impeded. Adapted cells would have to activate proton channels more frequently, adjust their membrane potential, and/or lower their internal pH. Reduced intra-cellular pH would severely affect the entire cellular machinery and require other processes (e.g. photosynthesis) to co-adapt in order to keep H+ efflux alive. The obligatory H+ efflux associated with calcification may therefore pose a fundamental constraint on adaptation, which may potentially explain why "calcification crises" were possible during long-lasting (thousands of years) CO2 perturbation events even though evolutionary adaptation to changing carbonate chemistry conditions is possible within one year. Unraveling these fundamental constraints and the limits of adaptation should be a focus in future coccolithophore studies because knowing them is the key information required to understand to what extent the calcification response to carbonate chemistry perturbations can be compensated by evolution.
Silicate- or cellulose-armored functional groups such as diatoms and dinoflagellates do not need to sustain the calcification-related H+ efflux. Thus, they probably do not need to adapt in order to keep costs for the production of structural elements low. On the contrary, dinoflagellates (except for calcifying species), with their generally inefficient CO2-fixing RuBisCO enzymes, may even profit from chemical changes, since photosynthetic carbon fixation as their source of structural elements in the form of cellulose should be facilitated by the ocean acidification-associated CO2 fertilization. Under the assumption that any form of shell/exoskeleton protects phytoplankton against predation, non-calcareous armors may be the preferable solution to realize protection in a future ocean.
The diagram on the right is a representation of how the comparative energetic effort for armor construction in diatoms, dinoflagellates and coccolithophores appear to operate. The frustule (diatom shell) seems to be the most inexpensive armor under all circumstances because diatoms typically outcompete all other groups when silicate is available. The coccosphere is relatively inexpensive under sufficient [CO2], high [HCO3−], and low [H+] because the substrate is saturating and protons are easily released into seawater. In contrast, the construction of thecal elements, which are organic (cellulose) plates that constitute the dinoflagellate shell, should rather be favored at high H+ concentrations because these usually coincide with high [CO2]. Under these conditions dinoflagellates could down-regulate the energy-consuming operation of carbon concentrating mechanisms to fuel the production of organic source material for their shell. Therefore, a shift in carbonate chemistry conditions toward high [CO2] may promote their competitiveness relative to coccolithophores. However, such a hypothetical gain in competitiveness due to altered carbonate chemistry conditions would not automatically lead to dinoflagellate dominance because a huge number of factors other than carbonate chemistry have an influence on species composition as well.
Defence against predation
Currently, the evidence supporting or refuting a protective function of the coccosphere against predation is limited. Some researchers found that overall microzooplankton predation rates were reduced during blooms of the coccolithophore Emiliania huxleyi, while others found high microzooplankton grazing rates on natural coccolithophore communities. In 2020, researchers found that in situ ingestion rates of microzooplankton on E. huxleyi did not differ significantly from those on similar sized non-calcifying phytoplankton. In laboratory experiments the heterotrophic dinoflagellate Oxyrrhis marina preferred calcified over non-calcified cells of E. huxleyi, which was hypothesised to be due to size selective feeding behaviour, since calcified cells are larger than non-calcified E. huxleyi. In 2015, Harvey et al. investigated predation by the dinoflagellate O. marina on different genotypes of non-calcifying E. huxleyi as well as calcified strains that differed in the degree of calcification. They found that the ingestion rate of O. marina was dependent on the genotype of E. huxleyi that was offered, rather than on their degree of calcification. In the same study, however, the authors found that predators which preyed on non-calcifying genotypes grew faster than those fed with calcified cells. In 2018, Strom et al. compared predation rates of the dinoflagellate Amphidinium longum on calcified relative to naked E. huxleyi prey and found no evidence that the coccosphere prevents ingestion by the grazer. Instead, ingestion rates were dependent on the offered genotype of E. huxleyi. Altogether, these two studies suggest that the genotype has a strong influence on ingestion by the microzooplankton species, but if and how calcification protects coccolithophores from microzooplankton predation could not be fully clarified.
Importance in global climate change
Impact on the carbon cycle
Coccolithophores have both long and short term effects on the carbon cycle. The production of coccoliths requires the uptake of dissolved inorganic carbon and calcium. Calcium carbonate and carbon dioxide are produced from calcium and bicarbonate by the following chemical reaction:
Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O
Because coccolithophores are photosynthetic organisms, they are able to use some of the CO2 released in the calcification reaction for photosynthesis.
However, the production of calcium carbonate drives surface alkalinity down, and in conditions of low alkalinity the CO2 is instead released back into the atmosphere.
As a result of this, researchers have postulated that large blooms of coccolithophores may contribute to global warming in the short term. A more widely accepted idea, however, is that over the long term coccolithophores contribute to an overall decrease in atmospheric CO2 concentrations. During calcification two carbon atoms are taken up and one of them becomes trapped as calcium carbonate. This calcium carbonate sinks to the bottom of the ocean in the form of coccoliths and becomes part of the sediment; thus, coccolithophores provide a sink for emitted carbon, mediating the effects of greenhouse gas emissions.
Evolutionary responses to ocean acidification
Research also suggests that ocean acidification due to increasing concentrations of CO2 in the atmosphere may affect the calcification machinery of coccolithophores. This may not only affect immediate events such as increases in population or coccolith production, but also may induce evolutionary adaptation of coccolithophore species over longer periods of time. For example, coccolithophores use H+ ion channels to constantly pump H+ ions out of the cell during coccolith production. This allows them to avoid acidosis, as coccolith production would otherwise produce a toxic excess of H+ ions. When the function of these ion channels is disrupted, the coccolithophores stop the calcification process to avoid acidosis, thus forming a feedback loop. Low ocean alkalinity impairs ion channel function and therefore places evolutionary selective pressure on coccolithophores and makes them (and other ocean calcifiers) vulnerable to ocean acidification. In 2008, field evidence indicating an increase in calcification of newly formed ocean sediments containing coccolithophores bolstered the first ever experimental data showing that an increase in ocean CO2 concentration results in an increase in calcification of these organisms.
Decreasing coccolith mass is related to both the increasing concentrations of CO2 and the decreasing concentrations of carbonate ions (CO32−) in the world's oceans. This lower calcification is assumed to put coccolithophores at an ecological disadvantage. Some species like Calcidiscus leptoporus, however, are not affected in this way, while the most abundant coccolithophore species, E. huxleyi, might be (study results are mixed). Also, highly calcified coccolithophorids have been found in conditions of low CaCO3 saturation, contrary to predictions. Understanding the effects of increasing ocean acidification on coccolithophore species is absolutely essential to predicting the future chemical composition of the ocean, particularly its carbonate chemistry. Viable conservation and management measures will come from future research in this area. Groups like the European-based CALMARO are monitoring the responses of coccolithophore populations to varying pH levels and working to determine environmentally sound measures of control.
Impact on microfossil record
Coccolith fossils are prominent and valuable calcareous microfossils. They are the largest global source of biogenic calcium carbonate, and significantly contribute to the global carbon cycle. They are the main constituent of chalk deposits such as the white cliffs of Dover.
Of particular interest are fossils dating back to the Palaeocene-Eocene Thermal Maximum 55 million years ago. This period is thought to correspond most directly to the current levels of CO2 in the ocean. Finally, field evidence of coccolithophore fossils in rock was used to show that the deep-sea fossil record bears a rock record bias similar to the one that is widely accepted to affect the land-based fossil record.
Impact on the oceans
The coccolithophorids help in regulating the temperature of the oceans. They thrive in warm seas and release dimethyl sulfide (DMS) into the air, whose oxidation products serve as nuclei that help to produce thicker clouds to block the sun. When the oceans cool, the number of coccolithophorids decreases and the amount of cloud also decreases. When there are fewer clouds blocking the sun, the temperature also rises. This, therefore, maintains the balance and equilibrium of nature.
See also
CLAW hypothesis
Dimethyl sulfide
Dimethylsulfoniopropionate
Emiliania huxleyi virus 86
Pleurochrysis carterae
References
External links
Sources of detailed information
Nannotax3 – illustrated guide to the taxonomy of coccolithophores and other nannofossils.
INA — International Nannoplankton Association
Emiliania huxleyi Home Page
Introductions to coccolithophores
University of California, Berkeley. Museum of Paleontology: "Introduction to the Prymnesiophyta".
The Paleontology Portal: Calcareous Nanoplankton
RadioLab – podcast on coccolithophores
Haptophytes
Microfossils
Extant Late Triassic first appearances
Planktology
Sedimentology
de:Haptophyta | Coccolithophore | [
"Chemistry"
] | 7,632 | [
"Microfossils",
"Microscopy"
] |
47,521 | https://en.wikipedia.org/wiki/Condensation | Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition.
Initiation
Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules.
Reversibility scenarios
A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation.
adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser".
Measurement
Psychrometry measures the rates of condensation from, and evaporation into, moist air at various atmospheric pressures and temperatures. Liquid water is the product of water vapor condensation; condensation is the process of that phase conversion.
Applications of condensation
Condensation is a crucial component of distillation, an important laboratory and industrial chemistry application.
Because condensation is a naturally occurring phenomenon, it can often be used to generate water in large quantities for human use. Many structures are made solely for the purpose of collecting water from condensation, such as air wells and fog fences. Such systems can often be used to retain soil moisture in areas where active desertification is occurring—so much so that some organizations educate people living in affected areas about water condensers to help them deal effectively with the situation.
It is also a crucial process in forming particle tracks in a cloud chamber. In this case, ions produced by an incident particle act as nucleation centers for the condensation of the vapor producing the visible "cloud" trails.
Commercial applications of condensation, by consumers as well as industry, include power generation, water desalination, thermal management, refrigeration, and air conditioning.
Biological adaptation
Numerous living beings use water made accessible by condensation. A few examples of these are the Australian thorny devil, the darkling beetles of the Namibian coast, and the coast redwoods of the West Coast of the United States.
Condensation in building construction
Condensation in building construction is an unwanted phenomenon as it may cause dampness, mold health issues, wood rot, corrosion, weakening of mortar and masonry walls, and energy penalties due to increased heat transfer. To alleviate these issues, the indoor air humidity needs to be lowered, or air ventilation in the building needs to be improved. This can be done in a number of ways, for example opening windows, turning on extractor fans, using dehumidifiers, drying clothes outside and covering pots and pans whilst cooking. Air conditioning or ventilation systems can be installed that help remove moisture from the air, and move air throughout a building. The amount of water vapor that can be stored in the air can be increased simply by increasing the temperature. However, this can be a double-edged sword, as most condensation in the home occurs when warm, moisture-heavy air comes into contact with a cool surface. As the air is cooled, it can no longer hold as much water vapor. This leads to condensation of liquid water on the cool surface. This is very apparent when central heating is used in combination with single-glazed windows in winter.
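The surface temperature below which this condensation begins is the dew point of the indoor air. As an illustrative sketch (not drawn from this article), the Magnus approximation estimates the dew point from air temperature and relative humidity; the coefficient set used here (b = 17.62, c = 243.12 °C) is one common published choice and is an assumption of this example.

```python
import math

def dew_point_celsius(temp_c: float, rel_humidity_pct: float) -> float:
    """Estimate the dew point (deg C) via the Magnus approximation.

    temp_c           -- air temperature in degrees Celsius
    rel_humidity_pct -- relative humidity in percent (0 < RH <= 100)
    Coefficients b and c are one commonly used parameter set (an
    assumption of this sketch; other published sets differ slightly).
    """
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# Indoor air at 22 degC and 60% relative humidity condenses on any
# surface colder than about 13.9 degC, such as a single-glazed
# window pane in winter.
print(round(dew_point_celsius(22.0, 60.0), 1))  # -> 13.9
```

Any interior surface colder than this value will collect condensate, which is why the remedies above aim either to lower the indoor humidity (lowering the dew point) or to keep surfaces warm.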
Interstructure condensation may be caused by thermal bridges, or by insufficient or absent insulation, damp proofing, or insulated glazing.
See also
Air well (condenser)
Bose–Einstein condensate
Cloud physics
Condenser (heat transfer)
DNA condensation
Dropwise condensation
Groasis Waterboxx
Kelvin equation
Liquefaction of gases
Phase diagram
Phase transition
Retrograde condensation
Surface condenser
References
Sources
Phase transitions | Condensation | [
"Physics",
"Chemistry"
] | 983 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Statistical mechanics",
"Matter"
] |
47,526 | https://en.wikipedia.org/wiki/Convection | Convection is single or multiphase fluid flow that occurs spontaneously through the combined effects of material property heterogeneity and body forces on a fluid, most commonly density and gravity (see buoyancy). When the cause of the convection is unspecified, convection due to the effects of thermal expansion and buoyancy can be assumed. Convection may also take place in soft solids or mixtures where particles can flow.
Convective flow may be transient (such as when a multiphase mixture of oil and water separates) or steady state (see convection cell). The convection may be due to gravitational, electromagnetic or fictitious body forces. Heat transfer by natural convection plays a role in the structure of Earth's atmosphere, its oceans, and its mantle. Discrete convective cells in the atmosphere can be identified by clouds, with stronger convection resulting in thunderstorms. Natural convection also plays a role in stellar physics. Convection is often categorised or described by the main effect causing the convective flow; for example, thermal convection.
Convection cannot take place in most solids because neither bulk current flows nor significant diffusion of matter can take place.
Granular convection is a similar phenomenon in granular material instead of fluids.
Advection is fluid motion created by velocity instead of thermal gradients.
Convective heat transfer is the intentional use of convection as a method for heat transfer. Convection is a process in which heat is carried from place to place by the bulk movement of a fluid (liquid or gas).
History
In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says:
[...] This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
Terminology
Today, the word convection has different but related usages in different scientific or engineering contexts or applications.
In fluid mechanics, convection has a broader sense: it refers to the motion of fluid driven by density (or other property) difference.
In thermodynamics, convection often refers to heat transfer by convection, where the prefixed variant natural convection is used to distinguish the fluid-mechanics concept of convection (covered in this article) from convective heat transfer.
Some phenomena which result in an effect superficially similar to that of a convective cell may also be (inaccurately) referred to as a form of convection; for example, thermo-capillary convection and granular convection.
Mechanisms
Convection may happen in fluids at all scales larger than a few atoms. There are a variety of circumstances in which the forces required for convection arise, leading to different types of convection, described below. In broad terms, convection arises because of body forces acting within the fluid, such as gravity.
Natural convection
Natural convection is a flow whose motion is caused by some parts of a fluid being heavier than other parts. In most cases this leads to natural circulation: the ability of a fluid in a system to circulate continuously under gravity, with transfer of heat energy.
The driving force for natural convection is gravity. In a column of fluid, pressure increases with depth from the weight of the overlying fluid. The pressure at the bottom of a submerged object then exceeds that at the top, resulting in a net upward buoyancy force equal to the weight of the displaced fluid. Objects of higher density than that of the displaced fluid then sink. For example, regions of warmer low-density air rise, while those of colder high-density air sink. This creates a circulating flow: convection.
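The force balance just described is Archimedes' principle; a minimal statement (standard physics, quoted for orientation rather than taken from this article) is

$$F_b = \rho_{\mathrm{fluid}}\, g\, V, \qquad F_{\mathrm{net}} = \left(\rho_{\mathrm{fluid}} - \rho_{\mathrm{parcel}}\right) g\, V,$$

where V is the displaced volume: a parcel less dense than its surroundings feels a net upward force and rises, while a denser parcel sinks.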
Gravity drives natural convection. Without gravity, convection does not occur, so there is no convection in free-fall (inertial) environments, such as that of the orbiting International Space Station. Natural convection can occur when there are hot and cold regions of either air or water, because both water and air become less dense as they are heated. But, for example, in the world's oceans it also occurs due to salt water being heavier than fresh water, so a layer of salt water on top of a layer of fresher water will also cause convection.
Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and engineering applications. In nature, convection cells formed from air rising above sunlight-warmed land or water are a major feature of all weather systems. Convection is also seen in the rising plume of hot air from fire, plate tectonics, oceanic currents (thermohaline circulation) and sea-wind formation (where upward convection is also modified by Coriolis forces). In engineering applications, convection is commonly visualized in the formation of microstructures during the cooling of molten metals, and fluid flows around shrouded heat-dissipation fins, and solar ponds. A very common industrial application of natural convection is free air cooling without the aid of fans: this can happen at scales ranging from small (computer chips) to large process equipment.
Natural convection will be more likely and more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection or a larger distance through the convecting medium. Natural convection will be less likely and less rapid with more rapid diffusion (thereby diffusing away the thermal gradient that is causing the convection) or a more viscous (sticky) fluid.
The onset of natural convection can be determined by the Rayleigh number (Ra).
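For a horizontal fluid layer of thickness L heated from below, the Rayleigh number is commonly written (a textbook definition, stated here for reference) as

$$\mathrm{Ra} = \frac{g\,\beta\,\Delta T\,L^{3}}{\nu\,\alpha},$$

where g is the gravitational acceleration, β the thermal expansion coefficient, ΔT the temperature difference across the layer, ν the kinematic viscosity, and α the thermal diffusivity. Convection sets in once Ra exceeds a critical value, of order 10^3 for common boundary conditions.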
Differences in buoyancy within a fluid can arise for reasons other than temperature variations, in which case the fluid motion is called gravitational convection (see below). However, all types of buoyant convection, including natural convection, do not occur in microgravity environments. All require the presence of an environment which experiences g-force (proper acceleration).
The difference of density in the fluid is the key driving mechanism. If the differences of density are caused by heat, this force is called "thermal head" or "thermal driving head." A fluid system designed for natural circulation will have a heat source and a heat sink. Each of these is in contact with some of the fluid in the system, but not all of it. The heat source is positioned lower than the heat sink.
Most fluids expand when heated, becoming less dense, and contract when cooled, becoming denser. At the heat source of a system of natural circulation, the heated fluid becomes lighter than the fluid surrounding it, and thus rises. At the heat sink, the nearby fluid becomes denser as it cools, and is drawn downward by gravity. Together, these effects create a flow of fluid from the heat source to the heat sink and back again.
Gravitational or buoyant convection
Gravitational convection is a type of natural convection induced by buoyancy variations resulting from material properties other than temperature. Typically this is caused by a variable composition of the fluid. If the varying property is a concentration gradient, it is known as solutal convection. For example, gravitational convection can be seen in the diffusion of a source of dry salt downward into wet soil due to the buoyancy of fresh water in saline.
Variable salinity in water and variable water content in air masses are frequent causes of convection in the oceans and atmosphere which do not involve heat, or else involve additional compositional density factors other than the density changes from thermal expansion (see thermohaline circulation). Similarly, variable composition within the Earth's interior which has not yet achieved maximal stability and minimal energy (in other words, with densest parts deepest) continues to cause a fraction of the convection of fluid rock and molten metal within the Earth's interior (see below).
Gravitational convection, like natural thermal convection, also requires a g-force environment in order to occur.
Solid-state convection in ice
Ice convection on Pluto is believed to occur in a soft mixture of nitrogen ice and carbon monoxide ice. It has also been proposed for Europa, and other bodies in the outer Solar System.
Thermomagnetic convection
Thermomagnetic convection can occur when an external magnetic field is imposed on a ferrofluid with varying magnetic susceptibility. In the presence of a temperature gradient this results in a nonuniform magnetic body force, which leads to fluid movement. A ferrofluid is a liquid which becomes strongly magnetized in the presence of a magnetic field.
Combustion
In a zero-gravity environment, there can be no buoyancy forces, and thus no convection is possible, so flames in many circumstances without gravity smother in their own waste gases. Thermal expansion and chemical reactions resulting in expansion and contraction of gases allow for ventilation of the flame, as waste gases are displaced by cool, fresh, oxygen-rich gas, which moves in to take up the low-pressure zones created when flame-exhaust water condenses.
Examples and applications
Systems of natural circulation include tornadoes and other weather systems, ocean currents, and household ventilation. Some solar water heaters use natural circulation. The Gulf Stream circulates as a result of the evaporation of water. In this process, the water increases in salinity and density. In the North Atlantic Ocean, the water becomes so dense that it begins to sink down.
Convection occurs on a large scale in atmospheres, oceans, planetary mantles, and it provides the mechanism of heat transfer for a large fraction of the outermost interiors of the Sun and all stars. Fluid movement during convection may be invisibly slow, or it may be obvious and rapid, as in a hurricane. On astronomical scales, convection of gas and dust is thought to occur in the accretion disks of black holes, at speeds which may closely approach that of light.
Demonstration experiments
Thermal convection in liquids can be demonstrated by placing a heat source (for example, a Bunsen burner) at the side of a container with a liquid. Adding a dye to the water (such as food colouring) will enable visualisation of the flow.
Another common experiment to demonstrate thermal convection in liquids involves submerging open containers of hot and cold liquid coloured with dye into a large container of the same liquid without dye at an intermediate temperature (for example, a jar of hot tap water coloured red, a jar of water chilled in a fridge coloured blue, lowered into a clear tank of water at room temperature).
A third approach is to use two identical jars, one filled with hot water dyed one colour, and cold water of another colour. One jar is then temporarily sealed (for example, with a piece of card), inverted and placed on top of the other. When the card is removed, if the jar containing the warmer liquid is placed on top no convection will occur. If the jar containing colder liquid is placed on top, a convection current will form spontaneously.
Convection in gases can be demonstrated using a candle in a sealed space with an inlet and exhaust port. The heat from the candle will cause a strong convection current which can be demonstrated with a flow indicator, such as smoke from another candle, being released near the inlet and exhaust areas respectively.
Double diffusive convection
Convection cells
A convection cell, also known as a Bénard cell, is a characteristic fluid flow pattern in many convection systems. A rising body of fluid typically loses heat because it encounters a colder surface. In liquid, this occurs because it exchanges heat with colder liquid through direct exchange. In the example of the Earth's atmosphere, this occurs because it radiates heat. Because of this heat loss the fluid becomes denser than the fluid underneath it, which is still rising. Since it cannot descend through the rising fluid, it moves to one side. At some distance, its downward force overcomes the rising force beneath it, and the fluid begins to descend. As it descends, it warms again and the cycle repeats itself. Additionally, convection cells can arise due to density variations resulting from differences in the composition of electrolytes.
Atmospheric convection
Atmospheric circulation
Atmospheric circulation is the large-scale movement of air, and is a means by which thermal energy is distributed on the surface of the Earth, together with the much slower (lagged) ocean circulation system. The large-scale structure of the atmospheric circulation varies from year to year, but the basic climatological structure remains fairly constant.
Latitudinal circulation occurs because incident solar radiation per unit area is highest at the heat equator, and decreases as the latitude increases, reaching minima at the poles. It consists of two primary convection cells, the Hadley cell and the polar vortex, with the Hadley cell experiencing stronger convection due to the release of latent heat energy by condensation of water vapor at higher altitudes during cloud formation.
Longitudinal circulation, on the other hand, comes about because the ocean has a higher specific heat capacity than land (and also higher thermal conductivity, allowing the heat to penetrate further beneath the surface) and thereby absorbs and releases more heat, while its temperature changes less than that of land. This brings the sea breeze, air cooled by the water, ashore in the day, and carries the land breeze, air cooled by contact with the ground, out to sea during the night. Longitudinal circulation consists of two cells, the Walker circulation and El Niño / Southern Oscillation.
Weather
Some more localized phenomena than global atmospheric movement are also due to convection, including wind and some of the hydrologic cycle. For example, a foehn wind is a down-slope wind which occurs on the downwind side of a mountain range. It results from the adiabatic warming of air which has dropped most of its moisture on windward slopes. Because of the different adiabatic lapse rates of moist and dry air, the air on the leeward slopes becomes warmer than at the same height on the windward slopes.
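A back-of-envelope illustration of this lapse-rate asymmetry (textbook values, assumed for this example rather than taken from the article): the dry adiabatic lapse rate is

$$\Gamma_d = \frac{g}{c_p} \approx 9.8\ \mathrm{K\,km^{-1}},$$

while the saturated (moist) rate is typically only about 4–7 K km⁻¹, because condensation releases latent heat. Assuming a moist rate of 6 K km⁻¹, air forced 2 km up a windward slope while saturated cools by about 12 K, but warms by about 19.6 K while descending dry on the lee side, arriving roughly 7–8 K warmer than it started at the same altitude.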
A thermal column (or thermal) is a vertical section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the uneven heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass, and creating a thermal low. The mass of lighter air rises, and as it does, it cools by expansion at lower air pressures. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward moving exterior is caused by colder air being displaced at the top of the thermal. Another convection-driven weather effect is the sea breeze.
Warm air has a lower density than cool air, so warm air rises within cooler air, similar to hot air balloons. Clouds form as relatively warmer air carrying moisture rises within cooler air. As the moist air rises, it cools, causing some of the water vapor in the rising packet of air to condense. When the moisture condenses, it releases energy known as latent heat of condensation which allows the rising packet of air to cool less than its surrounding air, continuing the cloud's ascension. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form, which support lightning and thunder. Generally, thunderstorms require three conditions to form: moisture, an unstable airmass, and a lifting force (heat).
All thunderstorms, regardless of type, go through three stages: the developing stage, the mature stage, and the dissipation stage. The average thunderstorm has a diameter of about 24 km (15 mi). Depending on the conditions present in the atmosphere, these three stages take an average of 30 minutes to go through.
Oceanic circulation
Solar radiation affects the oceans: warm water from the Equator tends to circulate toward the poles, while cold polar water heads towards the Equator. The surface currents are initially dictated by surface wind conditions. The trade winds blow westward in the tropics, and the westerlies blow eastward at mid-latitudes. This wind pattern applies a stress to the subtropical ocean surface with negative curl across the Northern Hemisphere, and the reverse across the Southern Hemisphere. The resulting Sverdrup transport is equatorward. Because of conservation of potential vorticity caused by the poleward-moving winds on the subtropical ridge's western periphery and the increased relative vorticity of poleward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the cold western boundary current which originates from high latitudes. The overall process, known as western intensification, causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary.
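The Sverdrup transport invoked above follows from a standard vorticity balance (quoted here for orientation; it is not derived in this article):

$$\beta\, M_y = \hat{k}\cdot\left(\nabla \times \vec{\tau}\,\right),$$

where M_y is the depth-integrated meridional mass transport, β the meridional gradient of the Coriolis parameter, and τ the wind stress. The negative wind-stress curl over the Northern Hemisphere subtropics therefore drives equatorward interior flow, which is returned poleward in the narrow western boundary current described above.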
As it travels poleward, warm water transported by strong warm water current undergoes evaporative cooling. The cooling is wind driven: wind moving over water cools the water and also causes evaporation, leaving a saltier brine. In this process, the water becomes saltier and denser and decreases in temperature. Once sea ice forms, salts are left out of the ice, a process known as brine exclusion. These two processes produce water that is denser and colder. The water across the northern Atlantic Ocean becomes so dense that it begins to sink down through less salty and less dense water. (This open ocean convection is not unlike that of a lava lamp.) This downdraft of heavy, cold and dense water becomes a part of the North Atlantic Deep Water, a south-going stream.
Mantle convection
Mantle convection is the slow creeping motion of Earth's rocky mantle caused by convection currents carrying heat from the interior of the Earth to the surface. It is one of three driving forces that cause tectonic plates to move around the Earth's surface.
The Earth's surface is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Creation (accretion) occurs as mantle is added to the growing edges of a plate. This hot added material cools down by conduction and convection of heat. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction at an ocean trench. This subducted material sinks to some depth in the Earth's interior where it is prohibited from sinking further. The subducted oceanic crust triggers volcanism.
Convection within Earth's mantle is the driving force for plate tectonics. Mantle convection is the result of a thermal gradient: the lower mantle is hotter than the upper mantle, and is therefore less dense. This sets up two primary types of instabilities. In the first type, plumes rise from the lower mantle, and corresponding unstable regions of lithosphere drip back into the mantle. In the second type, subducting oceanic plates (which largely constitute the upper thermal boundary layer of the mantle) plunge back into the mantle and move downwards towards the core-mantle boundary. Mantle convection occurs at rates of centimeters per year, and it takes on the order of hundreds of millions of years to complete a cycle of convection.
Neutrino flux measurements from the Earth's core (see KamLAND) show that the source of about two-thirds of the heat in the inner core is the radioactive decay of potassium-40, uranium and thorium. This has allowed plate tectonics on Earth to continue far longer than it would have if it were simply driven by heat left over from Earth's formation, or by heat produced from gravitational potential energy as a result of physical rearrangement of denser portions of the Earth's interior toward the center of the planet (that is, a type of prolonged falling and settling).
Stack effect
The Stack effect or chimney effect is the movement of air into and out of buildings, chimneys, flue gas stacks, or other containers due to buoyancy. Buoyancy occurs due to a difference in indoor-to-outdoor air density resulting from temperature and moisture differences. The greater the thermal difference and the height of the structure, the greater the buoyancy force, and thus the stack effect. The stack effect helps drive natural ventilation and infiltration. Some cooling towers operate on this principle; similarly the solar updraft tower is a proposed device to generate electricity based on the stack effect.
Stellar physics
The convection zone of a star is the range of radii in which energy is transported outward from the core region primarily by convection rather than radiation. This occurs at radii which are sufficiently opaque that convection is more efficient than radiation at transporting energy.
Granules on the photosphere of the Sun are the visible tops of convection cells in the photosphere, caused by convection of plasma in the photosphere. The rising part of the granules is located in the center where the plasma is hotter. The outer edge of the granules is darker due to the cooler descending plasma. A typical granule has a diameter on the order of 1,000 kilometers and each lasts 8 to 20 minutes before dissipating. Below the photosphere is a layer of much larger "supergranules" up to 30,000 kilometers in diameter, with lifespans of up to 24 hours.
Water convection at freezing temperatures
Water is a fluid that does not obey the Boussinesq approximation. This is because its density varies nonlinearly with temperature, which causes its thermal expansion coefficient to be inconsistent near freezing temperatures. The density of water reaches a maximum at 4 °C and decreases as the temperature deviates. This phenomenon is investigated by experiment and numerical methods. Water is initially stagnant at 10 °C within a square cavity. It is differentially heated between the two vertical walls, where the left and right walls are held at 10 °C and 0 °C, respectively. The density anomaly manifests in its flow pattern. As the water is cooled at the right wall, the density increases, which accelerates the flow downward. As the flow develops and the water cools further, the decrease in density causes a recirculation current at the bottom right corner of the cavity.
Another case of this phenomenon is the event of super-cooling, where the water is cooled to below freezing temperatures but does not immediately begin to freeze. Under the same conditions as before, the flow is developed. Afterward, the temperature of the right wall is decreased to −10 °C. This causes the water at that wall to become supercooled, create a counter-clockwise flow, and initially overpower the warm current. This plume is caused by a delay in the nucleation of the ice. Once ice begins to form, the flow returns to a similar pattern as before and the solidification propagates gradually until the flow is redeveloped.
Nuclear reactors
In a nuclear reactor, natural circulation can be a design criterion. It is achieved by reducing turbulence and friction in the fluid flow (that is, minimizing head loss), and by providing a way to remove any inoperative pumps from the fluid path. Also, the reactor (as the heat source) must be physically lower than the steam generators or turbines (the heat sink). In this way, natural circulation will ensure that the fluid will continue to flow as long as the reactor is hotter than the heat sink, even when power cannot be supplied to the pumps. Notable examples are the S5G and S8G United States Naval reactors, which were designed to operate at a significant fraction of full power under natural circulation, quieting those propulsion plants. The S6G reactor cannot operate at power under natural circulation, but can use it to maintain emergency cooling while shut down.
By the nature of natural circulation, fluids do not typically move very fast, but this is not necessarily bad, as high flow rates are not essential to safe and effective reactor operation. In modern design nuclear reactors, flow reversal is almost impossible. All nuclear reactors, even ones designed to primarily use natural circulation as the main method of fluid circulation, have pumps that can circulate the fluid in the case that natural circulation is not sufficient.
Mathematical models of convection
A number of dimensionless terms have been derived to describe and predict convection, including the Archimedes number, Grashof number, Richardson number, and the Rayleigh number.
In cases of mixed convection (natural and forced occurring together) one would often like to know how much of the convection is due to external constraints, such as the fluid velocity in the pump, and how much is due to natural convection occurring in the system.
The relative magnitudes of the Grashof number and the square of the Reynolds number determine which form of convection dominates. If Gr/Re^2 ≫ 1, forced convection may be neglected, whereas if Gr/Re^2 ≪ 1, natural convection may be neglected. If the ratio, known as the Richardson number, is approximately one, then both forced and natural convection need to be taken into account.
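As a rough numerical sketch of this criterion (the fluid properties, geometry, and regime thresholds below are illustrative assumptions, not values from the text):

```python
# Sketch: classify the dominant convection mode via the Richardson number
# Ri = Gr / Re^2. Property values below are illustrative assumptions (air
# near room temperature); the thresholds of 0.1 and 10 are a common rule
# of thumb, not a universal standard.

g = 9.81          # gravitational acceleration, m/s^2
beta = 3.4e-3     # thermal expansion coefficient of air, 1/K (approx. 1/T)
nu = 1.5e-5       # kinematic viscosity of air, m^2/s
L = 0.5           # characteristic length, m
dT = 20.0         # surface-to-fluid temperature difference, K
U = 0.3           # externally imposed flow velocity, m/s

Gr = g * beta * dT * L**3 / nu**2   # Grashof number
Re = U * L / nu                     # Reynolds number
Ri = Gr / Re**2                     # Richardson number

if Ri < 0.1:
    regime = "forced convection dominates (natural convection negligible)"
elif Ri > 10:
    regime = "natural convection dominates (forced convection negligible)"
else:
    regime = "mixed convection: both effects must be considered"

print(f"Gr = {Gr:.3g}, Re = {Re:.3g}, Ri = {Ri:.3g} -> {regime}")
```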
Onset
The onset of natural convection is determined by the Rayleigh number (Ra). This dimensionless number is given by

Ra = Δρ g L^3 / (D μ)

where
Δρ is the difference in density between the two parcels of material that are mixing,
g is the local gravitational acceleration,
L is the characteristic length-scale of convection: the depth of the boiling pot, for example,
D is the diffusivity of the characteristic that is causing the convection, and
μ is the dynamic viscosity.
Natural convection will be more likely and/or more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection, and/or a larger distance through the convecting medium. Convection will be less likely and/or less rapid with more rapid diffusion (thereby diffusing away the gradient that is causing the convection) and/or a more viscous (sticky) fluid.
For thermal convection due to heating from below, as described in the boiling pot above, the equation is modified for thermal expansion and thermal diffusivity. Density variations due to thermal expansion are given by:

Δρ = ρ0 β ΔT

where
ρ0 is the reference density, typically picked to be the average density of the medium,
β is the coefficient of thermal expansion, and
ΔT is the temperature difference across the medium.
The general diffusivity, D, is redefined as a thermal diffusivity, α.
Inserting these substitutions produces the thermal Rayleigh number, Ra = ρ0 g β ΔT L^3 / (μ α), which can be used to predict thermal convection.
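A minimal sketch of this substitution, assuming illustrative property values for water near room temperature and the classical critical value Ra ≈ 1708 for a fluid layer between rigid plates heated from below:

```python
# Sketch: thermal Rayleigh number Ra = rho0 * g * beta * dT * L^3 / (mu * alpha).
# Property values are illustrative assumptions for water near 20 C;
# Ra_crit ~ 1708 is the classical onset value for rigid-rigid boundaries.

rho0 = 998.0      # reference density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
beta = 2.1e-4     # thermal expansion coefficient, 1/K
mu = 1.0e-3       # dynamic viscosity, Pa.s
alpha = 1.4e-7    # thermal diffusivity, m^2/s
L = 0.01          # layer depth, m
dT = 1.0          # temperature difference across the layer, K

Ra = rho0 * g * beta * dT * L**3 / (mu * alpha)
Ra_crit = 1708

print(f"Ra = {Ra:.3g}")
print("convection expected" if Ra > Ra_crit else "heat transported by conduction only")
```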
Turbulence
The tendency of a particular naturally convective system towards turbulence relies on the Grashof number (Gr), Gr = g β ΔT L^3 / ν^2.
In very sticky, viscous fluids (large ν), fluid motion is restricted, and natural convection will be non-turbulent.
Following the treatment of the previous subsection, the typical fluid velocity is of the order of g Δρ L^2 / μ, up to a numerical factor depending on the geometry of the system. Therefore, the Grashof number can be thought of as a Reynolds number with the velocity of natural convection replacing the velocity in the Reynolds number's formula. However, in practice, when referring to the Reynolds number, it is understood that one is considering forced convection, and the velocity is taken as the velocity dictated by external constraints (see below).
Behavior
The Grashof number can be formulated for natural convection occurring due to a concentration gradient, sometimes termed thermo-solutal convection. In this case, a concentration of hot fluid diffuses into a cold fluid, in much the same way that ink poured into a container of water diffuses to dye the entire space. Then the solutal Grashof number takes the same form as the thermal one, with a concentration difference replacing the temperature difference:

Gr_c = g β_c ΔC L^3 / ν^2

where β_c is the solutal expansion coefficient and ΔC is the concentration difference across the medium.
Natural convection is highly dependent on the geometry of the hot surface; various correlations exist in order to determine the heat transfer coefficient.
A general correlation that applies to a variety of geometries expresses the Nusselt number Nu in terms of a geometry-dependent constant Nu0, the Rayleigh number Ra, and a function f4(Pr) of the Prandtl number; the values of Nu0 and the characteristic length used to calculate Ra depend on the geometry.
Natural convection from a vertical plate
One example of natural convection is heat transfer from an isothermal vertical plate immersed in a fluid, causing the fluid to move parallel to the plate. This will occur in any system wherein the density of the moving fluid varies with position. These phenomena will only be of significance when the moving fluid is minimally affected by forced convection.
When considering that the flow of fluid is a result of heating, the following correlations can be used, assuming the fluid is an ideal diatomic gas, is adjacent to a vertical plate at constant temperature, and that the flow of the fluid is completely laminar.
Nu_m = 0.478 Gr^0.25

Mean Nusselt number: Nu_m = h_m L / k

where
h_m = mean coefficient applicable between the lower edge of the plate and any point within a distance L, W/(m²·K)
L = height of the vertical surface (m)
k = thermal conductivity, W/(m·K)
Grashof number: Gr = g (t_s − t_∞) L^3 / (T ν^2)

where
g = gravitational acceleration (m/s²)
L = distance above the lower edge (m)
t_s = temperature of the wall (K)
t_∞ = fluid temperature outside the thermal boundary layer (K)
ν = kinematic viscosity of the fluid (m²/s)
T = absolute temperature (K)
When the flow is turbulent, different correlations involving the Rayleigh number (a function of both the Grashof number and the Prandtl number) must be used.
Note that the above equation differs from the usual expression for the Grashof number because the value β has been replaced by its approximation 1/T, which applies for ideal gases only (a reasonable approximation for air at ambient pressure).
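The sketch below applies the laminar correlation quoted above to an example plate; the plate dimensions, the air properties, and the use of the film temperature for T are illustrative assumptions.

```python
# Sketch: mean heat-transfer coefficient for laminar natural convection
# from an isothermal vertical plate, using the correlation quoted above:
# Nu_m = 0.478 * Gr^0.25, with Gr = g * (ts - tinf) * L^3 / (T * v^2)
# (beta approximated by 1/T, valid for ideal gases such as air).
# Property values are illustrative assumptions for air at ambient pressure.

g = 9.81        # gravitational acceleration, m/s^2
L = 0.3         # plate height, m
ts = 330.0      # wall temperature, K
tinf = 300.0    # ambient fluid temperature, K
T = 0.5 * (ts + tinf)   # film temperature used in beta ~ 1/T (assumption), K
v = 1.6e-5      # kinematic viscosity of air, m^2/s
k = 0.026       # thermal conductivity of air, W/(m.K)

Gr = g * (ts - tinf) * L**3 / (T * v**2)
Nu_m = 0.478 * Gr**0.25          # mean Nusselt number (laminar flow only)
h_m = Nu_m * k / L               # mean heat-transfer coefficient, W/(m^2.K)

print(f"Gr = {Gr:.3g}, Nu_m = {Nu_m:.1f}, h_m = {h_m:.2f} W/(m^2.K)")
```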
Pattern formation
Convection, especially Rayleigh–Bénard convection, where the convecting fluid is contained by two rigid horizontal plates, is a convenient example of a pattern-forming system.
When heat is fed into the system from one direction (usually below), at small values of the heat flux it merely diffuses (conducts) from below upward, without causing fluid flow. As the heat flow is increased, above a critical value of the Rayleigh number, the system undergoes a bifurcation from the stable conducting state to the convecting state, where bulk motion of the fluid due to heat begins. If fluid parameters other than density do not depend significantly on temperature, the flow profile is symmetric, with the same volume of fluid rising as falling. This is known as Boussinesq convection.
As the temperature difference between the top and bottom of the fluid becomes higher, significant differences in fluid parameters other than density may develop in the fluid due to temperature. An example of such a parameter is viscosity, which may begin to significantly vary horizontally across layers of fluid. This breaks the symmetry of the system, and generally changes the pattern of up- and down-moving fluid from stripes to hexagons. Such hexagons are one example of a convection cell.
As the Rayleigh number is increased even further above the value where convection cells first appear, the system may undergo other bifurcations, and other more complex patterns, such as spirals, may begin to appear.
See also
References
External links
Fluid mechanics
Physical phenomena | Convection | [
"Physics",
"Chemistry",
"Engineering"
] | 6,605 | [
"Transport phenomena",
"Physical phenomena",
"Convection",
"Civil engineering",
"Thermodynamics",
"Fluid mechanics"
] |
47,527 | https://en.wikipedia.org/wiki/Cryosphere | The cryosphere is an umbrella term for those portions of Earth's surface where water is in solid form. This includes sea ice, ice on lakes or rivers, snow, glaciers, ice caps, ice sheets, and frozen ground (which includes permafrost). Thus, there is a overlap with the hydrosphere. The cryosphere is an integral part of the global climate system. It also has important feedbacks on the climate system. These feedbacks come from the cryosphere's influence on surface energy and moisture fluxes, clouds, the water cycle, atmospheric and oceanic circulation.
Through these feedback processes, the cryosphere plays a significant role in the global climate and in climate model response to global changes. Approximately 10% of the Earth's surface is covered by ice, but this is rapidly decreasing. Current reductions in the cryosphere (caused by climate change) are measurable in ice sheet melt, glacier decline, sea ice decline, permafrost thaw and snow cover decrease.
Definition and terminology
The cryosphere describes those portions of Earth's surface where water is in solid form. Frozen water is found on the Earth's surface primarily as snow cover, freshwater ice in lakes and rivers, sea ice, glaciers, ice sheets, and frozen ground and permafrost (permanently frozen ground).
The cryosphere is one of five components of the climate system. The others are the atmosphere, the hydrosphere, the lithosphere and the biosphere.
The term cryosphere comes from the Greek word kryos, meaning cold, frost or ice and the Greek word sphaira, meaning globe or ball.
Cryospheric sciences is an umbrella term for the study of the cryosphere. As an interdisciplinary Earth science, many disciplines contribute to it, most notably geology, hydrology, and meteorology and climatology; in this sense, it is comparable to glaciology.
The term deglaciation describes the retreat of cryospheric features.
Properties and interactions
There are several fundamental physical properties of snow and ice that modulate energy exchanges between the surface and the atmosphere. The most important properties are the surface reflectance (albedo), the ability to transfer heat (thermal diffusivity), and the ability to change state (latent heat). These physical properties, together with surface roughness, emissivity, and dielectric characteristics, have important implications for observing snow and ice from space. For example, surface roughness is often the dominant factor determining the strength of radar backscatter. Physical properties such as crystal structure, density, length, and liquid water content are important factors affecting the transfers of heat and water and the scattering of microwave energy.
Residence time and extent
The residence time of water in each of the cryospheric sub-systems varies widely. Snow cover and freshwater ice are essentially seasonal, and most sea ice, except for ice in the central Arctic, lasts only a few years if it is not seasonal. A given water particle in glaciers, ice sheets, or ground ice, however, may remain frozen for 10–100,000 years or longer, and deep ice in parts of East Antarctica may have an age approaching 1 million years.
Most of the world's ice volume is in Antarctica, principally in the East Antarctic Ice Sheet. In terms of areal extent, however, Northern Hemisphere winter snow and ice extent comprise the largest area, amounting to an average 23% of hemispheric surface area in January. The large areal extent and the important climatic roles of snow and ice is related to their unique physical properties. This also indicates that the ability to observe and model snow and ice-cover extent, thickness, and physical properties (radiative and thermal properties) is of particular significance for climate research.
Surface reflectance
The surface reflectance of incoming solar radiation is important for the surface energy balance (SEB). It is the ratio of reflected to incident solar radiation, commonly referred to as albedo. Climatologists are primarily interested in albedo integrated over the shortwave portion of the electromagnetic spectrum (~300 to 3500 nm), which coincides with the main solar energy input. Typically, albedo values for non-melting snow-covered surfaces are high (~80–90%) except in the case of forests.
The higher albedos for snow and ice cause rapid shifts in surface reflectivity in autumn and spring in high latitudes, but the overall climatic significance of this increase is spatially and temporally modulated by cloud cover. (Planetary albedo is determined principally by cloud cover, and by the small amount of total solar radiation received in high latitudes during winter months.) Summer and autumn are times of high-average cloudiness over the Arctic Ocean so the albedo feedback associated with the large seasonal changes in sea-ice extent is greatly reduced. It was found that snow cover exhibited the greatest influence on Earth's radiative balance in the spring (April to May) period when incoming solar radiation was greatest over snow-covered areas.
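As a small numerical illustration of the albedo definition above (the incident flux and the surface values are rough typical figures, chosen to be consistent with the 80–90% quoted for non-melting snow, not measurements from the text):

```python
# Sketch: how surface albedo partitions incoming shortwave radiation.
# Albedo and flux values are rough illustrative figures (assumptions).

incoming = 340.0   # incident shortwave flux, W/m^2 (illustrative)

surfaces = {"fresh snow": 0.85, "sea ice": 0.6, "bare ground": 0.2, "open ocean": 0.06}

for name, albedo in surfaces.items():
    reflected = albedo * incoming      # albedo = reflected / incident
    absorbed = incoming - reflected
    print(f"{name:11s}: reflects {reflected:5.1f} W/m^2, absorbs {absorbed:5.1f} W/m^2")
```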
Thermal properties of cryospheric elements
The thermal properties of cryospheric elements also have important climatic consequences. Snow and ice have much lower thermal diffusivities than air. Thermal diffusivity is a measure of the speed at which temperature waves can penetrate a substance. Snow and ice are many orders of magnitude less efficient at diffusing heat than air. Snow cover insulates the ground surface, and sea ice insulates the underlying ocean, decoupling the surface-atmosphere interface with respect to both heat and moisture fluxes. The flux of moisture from a water surface is eliminated by even a thin skin of ice, whereas the flux of heat through thin ice continues to be substantial until it attains a thickness in excess of 30 to 40 cm. However, even a small amount of snow on top of the ice will dramatically reduce the heat flux and slow down the rate of ice growth. The insulating effect of snow also has major implications for the hydrological cycle. In non-permafrost regions, the insulating effect of snow is such that only near-surface ground freezes and deep-water drainage is uninterrupted.
While snow and ice act to insulate the surface from large energy losses in winter, they also act to retard warming in the spring and summer because of the large amount of energy required to melt ice (the latent heat of fusion, 3.34 × 10⁵ J/kg at 0 °C). However, the strong static stability of the atmosphere over areas of extensive snow or ice tends to confine the immediate cooling effect to a relatively shallow layer, so that associated atmospheric anomalies are usually short-lived and local to regional in scale. In some areas of the world such as Eurasia, however, the cooling associated with a heavy snowpack and moist spring soils is known to play a role in modulating the summer monsoon circulation.
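To give a sense of scale for this latent-heat effect, the sketch below estimates the energy needed to melt a seasonal snowpack and how long a typical solar input would take to supply it; the snow depth, density, and solar flux are illustrative assumptions.

```python
# Sketch: energy required to melt a snowpack, Q = m * Lf, using the latent
# heat of fusion quoted above (3.34e5 J/kg at 0 C). Snow depth, density,
# and the solar flux are illustrative assumptions, not values from the text.

Lf = 3.34e5        # latent heat of fusion of ice, J/kg
depth = 0.5        # snow depth, m (assumed)
rho_snow = 300.0   # snow density, kg/m^3 (assumed; fresh snow is lighter)
area = 1.0         # ground area considered, m^2

m = rho_snow * depth * area        # snow mass, kg
Q = m * Lf                         # melt energy, J

# Compare with an assumed clear-sky solar input of ~200 W/m^2:
days = Q / (200.0 * area) / 86400.0
print(f"m = {m:.0f} kg, Q = {Q:.3g} J -> ~{days:.1f} days at 200 W/m^2")
```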
Climate change feedback mechanisms
There are numerous cryosphere-climate feedbacks in the global climate system. These operate over a wide range of spatial and temporal scales from local seasonal cooling of air temperatures to hemispheric-scale variations in ice sheets over time scales of thousands of years. The feedback mechanisms involved are often complex and incompletely understood. For example, Curry et al. (1995) showed that the so-called "simple" sea ice-albedo feedback involved complex interactions with lead fraction, melt ponds, ice thickness, snow cover, and sea-ice extent.
The role of snow cover in modulating the monsoon is just one example of a short-term cryosphere-climate feedback involving the land surface and the atmosphere.
Components
Glaciers and ice sheets
Ice sheets and glaciers are flowing ice masses that rest on solid land. They are controlled by snow accumulation, surface and basal melt, calving into surrounding oceans or lakes, and internal dynamics. The latter results from gravity-driven creep flow ("glacial flow") within the ice body and sliding on the underlying land, which leads to thinning and horizontal spreading. Any imbalance of this dynamic equilibrium between mass gain, loss and transport due to flow results in either growing or shrinking ice bodies. Relationships between global climate and changes in ice extent are complex. The mass balance of land-based glaciers and ice sheets is determined by the accumulation of snow, mostly in winter, and warm-season ablation due primarily to net radiation and turbulent heat fluxes to melting ice and snow from warm-air advection. Where ice masses terminate in the ocean, iceberg calving is the major contributor to mass loss. In this situation, the ice margin may extend out into deep water as a floating ice shelf, such as that in the Ross Sea.
Sea ice
Sea ice covers much of the polar oceans and forms by freezing of sea water. Satellite data since the early 1970s reveal considerable seasonal, regional, and interannual variability in the sea ice covers of both hemispheres. Seasonally, sea-ice extent in the Southern Hemisphere varies by a factor of 5, from a minimum of 3–4 million km² in February to a maximum of 17–20 million km² in September. The seasonal variation is much less in the Northern Hemisphere where the confined nature and high latitudes of the Arctic Ocean result in a much larger perennial ice cover, and the surrounding land limits the equatorward extent of wintertime ice. Thus, the seasonal variability in Northern Hemisphere ice extent varies by only a factor of 2, from a minimum of 7–9 million km² in September to a maximum of 14–16 million km² in March.
The ice cover exhibits much greater regional-scale interannual variability than it does hemispherical. For instance, in the region of the Sea of Okhotsk and Japan, maximum ice extent decreased from 1.3 million km² in 1983 to 0.85 million km² in 1984, a decrease of 35%, before rebounding the following year to 1.2 million km². The regional fluctuations in both hemispheres are such that for any several-year period of the satellite record some regions exhibit decreasing ice coverage while others exhibit increasing ice cover.
Frozen ground and permafrost
Snow cover
Most of the Earth's snow-covered area is located in the Northern Hemisphere, and varies seasonally from 46.5 million km² in January to 3.8 million km² in August.
Snow cover is an extremely important storage component in the water balance, especially seasonal snowpacks in mountainous areas of the world. Though limited in extent, seasonal snowpacks in the Earth's mountain ranges account for the major source of the runoff for stream flow and groundwater recharge over wide areas of the midlatitudes. For example, over 85% of the annual runoff from the Colorado River basin originates as snowmelt. Snowmelt runoff from the Earth's mountains fills the rivers and recharges the aquifers that over a billion people depend on for their water resources.
Furthermore, over 40% of the world's protected areas are in mountains, attesting to their value both as unique ecosystems needing protection and as recreation areas for humans.
Ice on lakes and rivers
Ice forms on rivers and lakes in response to seasonal cooling. The sizes of the ice bodies involved are too small to exert anything other than localized climatic effects. However, the freeze-up/break-up processes respond to large-scale and local weather factors, such that considerable interannual variability exists in the dates of appearance and disappearance of the ice. Long series of lake-ice observations can serve as a proxy climate record, and the monitoring of freeze-up and break-up trends may provide a convenient integrated and seasonally-specific index of climatic perturbations. Information on river-ice conditions is less useful as a climatic proxy because ice formation is strongly dependent on river-flow regime, which is affected by precipitation, snow melt, and watershed runoff as well as being subject to human interference that directly modifies channel flow, or that indirectly affects the runoff via land-use practices.
Lake freeze-up depends on the heat storage in the lake and therefore on its depth, the rate and temperature of any inflow, and water-air energy fluxes. Information on lake depth is often unavailable, although some indication of the depth of shallow lakes in the Arctic can be obtained from airborne radar imagery during late winter (Sellman et al. 1975) and spaceborne optical imagery during summer (Duguay and Lafleur 1997). The timing of breakup is modified by snow depth on the ice as well as by ice thickness and freshwater inflow.
Changes caused by climate change
Ice sheet melt
Decline of glaciers
Sea ice decline
Permafrost thaw
Snow cover decrease
Studies in 2021 found that Northern Hemisphere snow cover has been decreasing since 1978, along with snow depth. Paleoclimate observations show that such changes are unprecedented over the last millennia in Western North America.
North American winter snow cover increased during the 20th century, largely in response to an increase in precipitation.
Because of its close relationship with hemispheric air temperature, snow cover is an important indicator of climate change.
Global warming is expected to result in major changes to the partitioning of snow and rainfall, and to the timing of snowmelt, which will have important implications for water use and management. These changes also involve potentially important decadal and longer time-scale feedbacks to the climate system through temporal and spatial changes in soil moisture and runoff to the oceans (Walsh 1995). Freshwater fluxes from the snow cover into the marine environment may be important, as the total flux is probably of the same magnitude as desalinated ridging and rubble areas of sea ice. In addition, there is an associated pulse of precipitated pollutants which accumulate over the Arctic winter in snowfall and are released into the ocean upon ablation of the sea ice.
See also
Cryobiology
International Association of Cryospheric Sciences (IACS)
Polar regions of Earth
Special Report on the Ocean and Cryosphere in a Changing Climate
Water cycle
References
External links
Canadian Cryospheric Information Network
Near-real-time overview of global ice concentration and snow extent
National Snow and Ice Data Center
Water ice | Cryosphere | [
"Environmental_science"
] | 2,898 | [
"Cryosphere",
"Hydrology"
] |
47,535 | https://en.wikipedia.org/wiki/Haptophyte | The haptophytes, classified either as the Haptophyta, Haptophytina or Prymnesiophyta (named for Prymnesium), are a clade of algae.
The names Haptophyceae or Prymnesiophyceae are sometimes used instead. This ending implies classification at the class rank rather than as a division. Although the phylogenetics of this group has become much better understood in recent years, there remains some dispute over which rank is most appropriate.
Characteristics
The chloroplasts are pigmented similarly to those of the heterokonts, but the structure of the rest of the cell is different, so it may be that they are a separate line whose chloroplasts are derived from similar red algal endosymbionts. Haptophyte chloroplasts contain chlorophylls a, c1, and c2 but lack chlorophyll b. For carotenoids, they have beta-, alpha-, and gamma-carotenes. Like diatoms and brown algae, they also have fucoxanthin, an oxidized isoprenoid derivative that is likely the most important driver of their brownish-yellow color.
The cells typically have two slightly unequal flagella, both of which are smooth, and a unique organelle called a haptonema, which is superficially similar to a flagellum but differs in the arrangement of microtubules and in its use. The name comes from the Greek hapsis, touch, and nema, round thread. The mitochondria have tubular cristae.
Most haptophytes reportedly produce chrysolaminarin rather than starch as their major storage polysaccharide, but some Pavlovaceae produce paramylon. The chain length of the chrysolaminarin is reportedly short (polymers of 20–50 glycosides, unlike the 300+ of comparable amylose), and it is located in cytoplasmic membrane-bound vacuoles.
Significance
The best-known haptophytes are coccolithophores, which make up 673 of the 762 described haptophyte species, and have an exoskeleton of calcareous plates called coccoliths. Coccolithophores are some of the most abundant marine phytoplankton, especially in the open ocean, and are extremely abundant as microfossils, forming chalk deposits. Other planktonic haptophytes of note include Chrysochromulina and Prymnesium, which periodically form toxic marine algal blooms, and Phaeocystis, blooms of which can produce unpleasant foam which often accumulates on beaches.
Haptophytes are economically important, as species such as Pavlova lutheri and Isochrysis sp. are widely used in the aquaculture industry to feed oyster and shrimp larvae. They contain a large amount of polyunsaturated fatty acids such as docosahexaenoic acid (DHA), stearidonic acid and alpha-linolenic acid. Tisochrysis lutea contains betaine lipids and phospholipids.
Classification
The haptophytes were first placed in the class Chrysophyceae (golden algae), but ultrastructural data have provided evidence to classify them separately. Both molecular and morphological evidence supports their division into five orders; coccolithophores make up the Isochrysidales and Coccolithales. Very small (2–3 μm) uncultured pico-prymnesiophytes are ecologically important.
Haptophytes were once discussed as being closely related to the cryptomonads.
Haptophytes are closely related to the SAR clade.
Subphylum Haptophytina Cavalier-Smith 2015 [Haptophyta Hibberd 1976 sensu Ruggerio et al. 2015]
Clade Rappemonada Kim et al. 2011
Class Rappephyceae Cavalier-Smith 2015
Order Rappemonadales
Family Rappemonadaceae
Clade Haptomonada (Margulis & Schwartz 1998) [Haptophyta Hibberd 1976 emend. Edvardsen & Eikrem 2000; Prymnesiophyta Green & Jordan, 1994; Prymnesiomonada; Prymnesiida Hibberd 1976; Haptophyceae Christensen 1962 ex Silva 1980; Haptomonadida; Patelliferea Cavalier-Smith 1993]
Class Pavlovophyceae Cavalier-Smith 1986 [Pavlovophycidae Cavalier-Smith 1986]
Order Pavlovales Green 1976
Family Pavlovaceae Green 1976
Class Prymnesiophyceae Christensen 1962 emend. Cavalier-Smith 1996 [Haptophyceae s.s.; Prymnesiophycidae Cavalier-Smith 1986; Coccolithophyceae Casper 1972 ex Rothmaler 1951]
Family †Eoconusphaeraceae Kristan-Tollmann 1988 [Conusphaeraceae]
Family †Goniolithaceae Deflandre 1957
Family †Lapideacassaceae Black, 1971
Family †Microrhabdulaceae Deflandre 1963
Family †Nannoconaceae Deflandre 1959
Family †Polycyclolithaceae Forchheimer 1972 emend Varol, 1992
Family †Lithostromationaceae Deflandre 1959
Family †Rhomboasteraceae Bown, 2005
Family Braarudosphaeraceae Deflandre 1947
Family Ceratolithaceae Norris 1965 emend Young & Bown 2014 [Triquetrorhabdulaceae Lipps 1969 - cf Young & Bown 2014]
Family Alisphaeraceae Young et al., 2003
Family Papposphaeraceae Jordan & Young 1990 emend Andruleit & Young 2010
Family Umbellosphaeraceae Young et al., 2003 [Umbellosphaeroideae]
Order †Discoasterales Hay 1977
Family †Discoasteraceae Tan 1927
Family †Heliolithaceae Hay & Mohler 1967
Family †Sphenolithaceae Deflandre 1952
Family †Fasciculithaceae Hay & Mohler 1967
Order Phaeocystales Medlin 2000
Family Phaeocystaceae Lagerheim 1896
Order Prymnesiales Papenfuss 1955 emend. Edvardsen and Eikrem 2000
Family Chrysochromulinaceae Edvardsen, Eikrem & Medlin 2011
Family Prymnesiaceae Conrad 1926 ex Schmidt 1931
Subclass Calcihaptophycidae
Order Isochrysidales Pascher 1910 [Prinsiales Young & Bown 1997]
Family †Prinsiaceae Hay & Mohler 1967 emend. Young & Bown, 1997
Family Isochrysidaceae Parke 1949 [Chrysotilaceae; Marthasteraceae Hay 1977]
Family Noëlaerhabdaceae Jerkovic 1970 emend. Young & Bown, 1997 [Gephyrocapsaceae Black 1971]
Order †Eiffellithales Rood, Hay & Barnard 1971 (loxolith; imbricating murolith)
Family †Chiastozygaceae Rood, Hay & Barnard 1973 [Ahmuellerellaceae Reinhardt, 1965]
Family †Eiffellithaceae Reinhardt 1965
Family †Rhagodiscaceae Hay 1977
Order Stephanolithiales Bown & Young 1997 (protolith; non-imbrication murolith)
Family Parhabdolithaceae Bown 1987
Family †Stephanolithiaceae Black 1968 emend. Black 1973
Order Zygodiscales Young & Bown 1997 [Crepidolithales]
Family Helicosphaeraceae Black 1971
Family Pontosphaeraceae Lemmermann 1908
Family †Zygodiscaceae Hay & Mohler 1967
Order Syracosphaerales Ostenfeld 1899 emend. Young et al., 2003 [Rhabdosphaerales Ostenfeld 1899]
Family Calciosoleniaceae Kamptner 1927
Family Syracosphaeraceae Lohmann, 1902 [Halopappiaceae Kamptner 1928] (caneolith & cyrtolith; murolith)
Family Rhabdosphaeraceae Haeckel, 1894 (planolith)
Order †Watznaueriales Bown 1987 (imbricating placolith)
Family †Watznaueriaceae Rood, Hay & Barnard 1971
Order †Arkhangelskiales Bown & Hampton 1997
Family †Arkhangelskiellaceae Bukry 1969
Family †Kamptneriaceae Bown & Hampton 1997
Order †Podorhabdales Rood 1971 [Biscutales Aubry 2009; Prediscosphaerales Aubry 2009] (non-imbricating or radial placolith)
Family †Axopodorhabdaceae Wind & Wise 1977 [Podorhabdaceae Noel 1965]
Family †Biscutaceae Black, 1971
Family †Calyculaceae Noel 1973
Family †Cretarhabdaceae Thierstein 1973
Family †Mazaganellaceae Bown 1987
Family †Prediscosphaeraceae Rood et al., 1971 [Deflandriaceae Black 1968]
Family †Tubodiscaceae Bown & Rutledge 1997
Order Coccolithales Schwartz 1932 [Coccolithophorales]
Family Reticulosphaeraceae Cavalier-Smith 1996 [Reticulosphaeridae]
Family Calcidiscaceae Young & Bown 1997
Family Coccolithaceae Poche 1913 emend. Young & Bown, 1997 [Coccolithophoraceae]
Family Pleurochrysidaceae Fresnel & Billard 1991
Family Hymenomonadaceae Senn 1900 [Ochrosphaeraceae Schussnig 1930]
References
External links
Algal taxonomy
Bikont phyla | Haptophyte | [
"Biology"
] | 2,061 | [
"Algae",
"Algal taxonomy"
] |
47,544 | https://en.wikipedia.org/wiki/Carrying%20capacity | The carrying capacity of an environment is the maximum population size of a biological species that can be sustained by that specific environment, given the food, habitat, water, and other resources available. The carrying capacity is defined as the environment's maximal load, which in population ecology corresponds to the population equilibrium, when the number of deaths in a population equals the number of births (as well as immigration and emigration). Carrying capacity of the environment implies that the resources extraction is not above the rate of regeneration of the resources and the wastes generated are within the assimilating capacity of the environment. The effect of carrying capacity on population dynamics is modelled with a logistic function. Carrying capacity is applied to the maximum population an environment can support in ecology, agriculture and fisheries. The term carrying capacity has been applied to a few different processes in the past before finally being applied to population limits in the 1950s. The notion of carrying capacity for humans is covered by the notion of sustainable population.
An early detailed examination of global limits was published in the 1972 book Limits to Growth, which has prompted follow-up commentary and analysis, including much criticism. A 2012 review in Nature by 22 international researchers expressed concerns that the Earth may be "approaching a state shift" in which the biosphere may become less hospitable to human life and in which human carrying capacity may diminish. This concern that humanity may be passing beyond "tipping points" for safe use of the biosphere has increased in subsequent years. Recent estimates of Earth's carrying capacity run between two billion and four billion people, depending on how optimistic researchers are about international cooperation to solve collective action problems.
Origins
In terms of population dynamics, the term 'carrying capacity' was not explicitly used in 1838 by the Belgian mathematician Pierre François Verhulst when he first published his equations based on research on modelling population growth.
The origins of the term "carrying capacity" are uncertain, with sources variously stating that it was originally used "in the context of international shipping" in the 1840s, or that it was first used during 19th-century laboratory experiments with micro-organisms. A 2008 review finds the first use of the term in English was an 1845 report by the US Secretary of State to the US Senate. It then became a term used generally in biology in the 1870s, being most developed in wildlife and livestock management in the early 1900s. It had become a staple term in ecology used to define the biological limits of a natural system related to population size in the 1950s.
Neo-Malthusians and eugenicists popularised the use of the words to describe the number of people the Earth can support in the 1950s, although American biostatisticians Raymond Pearl and Lowell Reed had already applied it in these terms to human populations in the 1920s.
Hadwen and Palmer (1923) defined carrying capacity as the density of stock that could be grazed for a definite period without damage to the range.
It was first used in the context of wildlife management by the American Aldo Leopold in 1933, and a year later by the American Paul Lester Errington, a wetlands specialist. They used the term in different ways, Leopold largely in the sense of grazing animals (differentiating between a 'saturation level', an intrinsic level of density a species would live in, and carrying capacity, the most animals which could be in the field) and Errington defining 'carrying capacity' as the number of animals above which predation would become 'heavy' (this definition has largely been rejected, including by Errington himself). The important and popular 1953 textbook on ecology by Eugene Odum, Fundamentals of Ecology, popularised the term in its modern meaning as the equilibrium value of the logistic model of population growth.
Mathematics
The specific reason why a population stops growing is known as a limiting or regulating factor.
The difference between the birth rate and the death rate is the natural increase. If the population of a given organism is below the carrying capacity of a given environment, this environment could support a positive natural increase; should it find itself above that threshold the population typically decreases. Thus, the carrying capacity is the maximum number of individuals of a species that an environment can support in the long run.
Population size decreases above carrying capacity due to a range of factors depending on the species concerned, but can include insufficient space, food supply, or sunlight. The carrying capacity of an environment varies for different species.
In the standard ecological algebra as illustrated in the simplified Verhulst model of population dynamics, carrying capacity is represented by the constant K:

dN/dt = rN(1 − N/K)

where
N is the population size,
r is the intrinsic rate of natural increase,
K is the carrying capacity of the local environment, and
dN/dt, the derivative of N with respect to time t, is the rate of change in population with time.

Thus, the equation relates the growth rate of the population to the current population size, incorporating the effect of the two constant parameters r and K. (Note that decrease is negative growth.) The choice of the letter K came from the German Kapazitätsgrenze (capacity limit).
This equation is a modification of the original Verhulst model:

dN/dt = rN − αN²

In this equation, the carrying capacity K is r/α.
When the Verhulst model is plotted into a graph, the population change over time takes the form of a sigmoid curve, reaching its highest level at K. This is the logistic growth curve and it is calculated with:

f(x) = L / (1 + e^(−k(x − x0)))

where
e is the natural logarithm base (also known as Euler's number),
x0 is the value of the sigmoid's midpoint,
L is the curve's maximum value, and
k is the logistic growth rate or steepness of the curve.
The logistic growth curve depicts how population growth rate and carrying capacity are inter-connected. As illustrated in the logistic growth curve model, when the population size is small, the population increases exponentially. However, as population size nears the carrying capacity, the growth decreases and reaches zero at N = K.
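A minimal numerical sketch of this behaviour, integrating the Verhulst equation with simple Euler steps (the parameter values are arbitrary illustrations):

```python
# Sketch: Euler integration of the Verhulst (logistic) model
# dN/dt = r * N * (1 - N / K). Parameter values are arbitrary illustrations.

r = 0.5      # intrinsic rate of natural increase, 1/time
K = 1000.0   # carrying capacity
N = 10.0     # initial population size
dt = 0.1     # time step

for step in range(301):
    if step % 50 == 0:
        print(f"t = {step * dt:5.1f}  N = {N:7.1f}")
    N += r * N * (1.0 - N / K) * dt   # logistic growth increment
# N rises steeply while small, then levels off at the carrying capacity K.
```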
What determines a specific system's carrying capacity involves a limiting factor; this may be available supplies of food or water, nesting areas, space, or the amount of waste that can be absorbed without degrading the environment and decreasing carrying capacity.
Population ecology
Carrying capacity is a commonly used concept for biologists when trying to better understand biological populations and the factors which affect them. When addressing biological populations, carrying capacity can be seen as a stable dynamic equilibrium, taking into account extinction and colonization rates. In population biology, logistic growth assumes that population size fluctuates above and below an equilibrium value.
Numerous authors have questioned the usefulness of the term when applied to actual wild populations. Although useful in theory and in laboratory experiments, carrying capacity as a method of measuring population limits in the environment is less useful as it sometimes oversimplifies the interactions between species.
Agriculture
It is important for farmers to calculate the carrying capacity of their land so they can establish a sustainable stocking rate. For example, calculating the carrying capacity of a paddock in Australia is done in Dry Sheep Equivalents (DSEs). A single DSE is a 50 kg Merino wether, dry ewe or non-pregnant ewe, which is maintained in a stable condition. Not only sheep are calculated in DSEs; the carrying capacity for other livestock is also calculated using this measure. A 200 kg weaned calf of a British-style breed gaining 0.25 kg/day is 5.5 DSE, but if the same weight of the same type of calf were gaining 0.75 kg/day, it would be measured at 8 DSE. Cattle are not all the same: their DSEs can vary depending on breed, growth rate, weight, whether the animal is a cow ('dam'), steer or ox ('bullock' in Australia), and whether it is weaning, pregnant or 'wet' (i.e. lactating).
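As a back-of-the-envelope illustration of how such ratings are used (the paddock size and the assessed DSE/ha rating below are hypothetical; the per-head figures echo the examples above):

```python
# Sketch: converting a paddock's carrying capacity, expressed in DSE/ha,
# into a sustainable head count for a given class of stock. The paddock
# rating and size are hypothetical illustrations, not official values.

paddock_area_ha = 120.0        # paddock size, ha (assumed)
capacity_dse_per_ha = 8.0      # assessed carrying capacity, DSE/ha (assumed)

total_dse = paddock_area_ha * capacity_dse_per_ha

# Per-head DSE ratings (the wether follows the 1-DSE definition above;
# the calf figures echo the examples in the text):
stock_dse = {
    "50 kg Merino wether": 1.0,
    "200 kg calf gaining 0.25 kg/day": 5.5,
    "200 kg calf gaining 0.75 kg/day": 8.0,
}

for animal, dse in stock_dse.items():
    head = total_dse / dse
    print(f"{animal}: about {head:.0f} head on {paddock_area_ha:.0f} ha")
```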
In other parts of the world different units are used for calculating carrying capacities. In the United Kingdom the paddock is measured in LU, livestock units, although different schemes exist for this. New Zealand uses either LU, EE (ewe equivalents) or SU (stock units). In the US and Canada the traditional system uses animal units (AU). A French/Swiss unit is Unité de Gros Bétail (UGB).
In some European countries such as Switzerland the pasture (alm or alp) is traditionally measured in Stoß, with one Stoß equaling four Füße (feet). A more modern European system is Großvieheinheit (GV or GVE), corresponding to 500 kg in liveweight of cattle. In extensive agriculture 2 GV/ha is a common stocking rate, in intensive agriculture, when grazing is supplemented with extra fodder, rates can be 5 to 10 GV/ha. In Europe average stocking rates vary depending on the country, in 2000 the Netherlands and Belgium had a very high rate of 3.82 GV/ha and 3.19 GV/ha respectively, surrounding countries have rates of around 1 to 1.5 GV/ha, and more southern European countries have lower rates, with Spain having the lowest rate of 0.44 GV/ha.
This system can also be applied to natural areas. Grazing megaherbivores at roughly 1 GV/ha is considered sustainable in central European grasslands, although this varies widely depending on many factors. In ecology it is theoretically (i.e. cyclic succession, patch dynamics, Megaherbivorenhypothese) taken that a grazing pressure of 0.3 GV/ha by wildlife is enough to hinder afforestation in a natural area. Because different species have different ecological niches, with horses for example grazing short grass, cattle longer grass, and goats or deer preferring to browse shrubs, niche differentiation allows a terrain to have slightly higher carrying capacity for a mixed group of species, than it would if there were only one species involved.
Some niche market schemes mandate lower stocking rates than can maximally be grazed on a pasture. In order to market one's meat products as 'biodynamic', a lower Großvieheinheit of 1 to 1.5 (2.0) GV/ha is mandated, with some farms having an operating structure using only 0.5 to 0.8 GV/ha.
The Food and Agriculture Organization has introduced three international units to measure carrying capacity: FAO Livestock Units for North America, FAO Livestock Units for sub-Saharan Africa, and Tropical Livestock Units.
Another rougher and less precise method of determining the carrying capacity of a paddock is simply by looking objectively at the condition of the herd. In Australia, the national standardized system for rating livestock conditions is done by body condition scoring (BCS). An animal in a very poor condition is scored with a BCS of 0, and an animal which is extremely healthy is scored at 5; animals may be scored between these two numbers in increments of 0.25. At least 25 animals of the same type must be scored to provide a statistically representative number, and scoring must take place monthly; if the average falls, this may be due to a stocking rate above the paddock's carrying capacity or too little fodder. This method is less direct for determining stocking rates than looking at the pasture itself, because the changes in the condition of the stock may lag behind changes in the condition of the pasture.
Fisheries
In fisheries, carrying capacity is used in the formulae to calculate sustainable yields for fisheries management. The maximum sustainable yield (MSY) is defined as "the highest average catch that can be continuously taken from an exploited population (=stock) under average environmental conditions". MSY was originally calculated as half of the carrying capacity, but has been refined over the years, now being seen as roughly 30% of the population, depending on the species or population. Because the population of a species which is brought below its carrying capacity due to fishing will find itself in the exponential phase of growth, as seen in the Verhulst model, the harvesting of an amount of fish at or below MSY is a surplus yield which can be sustainably harvested without reducing population size at equilibrium, keeping the population at its maximum recruitment. However, annual fishing can be seen as a modification of r in the equation; i.e., the environment has been modified, which means that the population size at equilibrium with annual fishing is slightly below what K would be without it.
Note that mathematically and in practical terms, MSY is problematic. If mistakes are made and even a tiny amount of fish are harvested each year above the MSY, population dynamics imply that the total population will eventually decrease to zero. The actual carrying capacity of the environment may fluctuate in the real world, which means that practically, MSY may actually vary from year to year (annual sustainable yields and maximum average yield attempt to take this into account). Other similar concepts are optimum sustainable yield and maximum economic yield; these are both harvest rates below MSY.
These calculations are used to determine fishing quotas.
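A small sketch of the surplus-yield logic described above, built on the Verhulst model from the Mathematics section (the stock parameters and the over-harvest margin are arbitrary illustrations):

```python
# Sketch: surplus production under the Verhulst model. The sustainable
# yield at stock size N is Y(N) = r * N * (1 - N / K); it is maximised
# at N = K / 2, where Y = r * K / 4 (the classical "half of carrying
# capacity" MSY estimate). Parameter values are arbitrary illustrations.

r = 0.4        # intrinsic rate of natural increase, 1/year
K = 100_000.0  # carrying capacity, tonnes of stock

def surplus_yield(N: float) -> float:
    """Annual surplus production at stock size N (tonnes/year)."""
    return r * N * (1.0 - N / K)

msy_stock = K / 2.0
msy = surplus_yield(msy_stock)
print(f"MSY ~ {msy:.0f} t/yr at a stock of {msy_stock:.0f} t")

# Harvesting even slightly above MSY drives the stock down over time:
N = msy_stock
for year in range(1, 51):
    N += surplus_yield(N) - (msy + 500.0)   # harvest 500 t/yr above MSY
    if N <= 0:
        print(f"stock collapsed in year {year}")
        break
else:
    print(f"stock after 50 years: {N:.0f} t")
```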
Humans
Human carrying capacity is a function of how people live and the technology at their disposal. The two great economic revolutions that marked human history up to 1900—the agricultural and industrial revolutions—greatly increased the Earth's human carrying capacity, allowing human population to grow from 5–10 million people in 10,000 BCE to 1.5 billion in 1900. The immense technological improvements of the past 100 years—in applied chemistry, physics, computing, genetic engineering, and more—have further increased Earth's human carrying capacity, at least in the short term. Without the Haber-Bosch process for fixing nitrogen, modern agriculture could not support 8 billion people. Without the Green Revolution of the 1950s and 60s, famine might have culled large numbers of people in poorer countries during the last three decades of the twentieth century.
Recent technological successes, however, have come at grave environmental costs. Climate change, ocean acidification, and the huge dead zones at the mouths of many of world's great rivers, are a function of the scale of contemporary agriculture and the many other demands 8 billion people make on the planet. Scientists now speak of humanity exceeding or threatening to exceed 9 planetary boundaries for safe use of the biosphere. Humanity's unprecedented ecological impacts threaten to degrade the ecosystem services that people and the rest of life depend on—potentially decreasing Earth's human carrying capacity. The signs that we have crossed this threshold are increasing.
The fact that degrading Earth's essential services is obviously possible, and happening in some cases, suggests that 8 billion people may be above Earth's human carrying capacity. But human carrying capacity is always a function of a certain number of people living a certain way. This was encapsulated by Paul Ehrlich and James Holdren's (1972) IPAT equation: environmental impact (I) = population (P) x affluence (A) x the technologies used to accommodate human demands (T). IPAT has found spectacular confirmation in recent decades within climate science, where the Kaya identity for explaining changes in emissions is essentially IPAT with two technology factors broken out for ease of use.
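A toy numerical sketch of the IPAT identity; every figure below is hypothetical, chosen only to show how the three factors multiply:

```python
# Sketch: the IPAT identity, I = P * A * T. All numbers are hypothetical
# illustrations, not measurements of any real economy.

def impact(P: float, A: float, T: float) -> float:
    """Environmental impact: population x affluence x technology factor."""
    return P * A * T

# Hypothetical baseline: 8 billion people, $12,000 GDP per capita,
# 0.25 kg CO2-equivalent emitted per dollar of GDP.
baseline = impact(P=8e9, A=12_000, T=0.25)

# Same population and affluence, but the technology factor halved:
cleaner_tech = impact(P=8e9, A=12_000, T=0.125)

print(f"baseline impact: {baseline:.3g} kg CO2e/yr")
print(f"with halved T:   {cleaner_tech:.3g} kg CO2e/yr")
```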
This suggests to technological optimists that new technological discoveries (or the deployment of existing ones) could continue to increase Earth's human carrying capacity, as it has in the past. Yet technology has unexpected side effects, as we have seen with stratospheric ozone depletion, excessive nitrogen deposition in the world's rivers and bays, and global climate change. This suggests that 8 billion people may be sustainable for a few generations, but not over the long term, and the term ‘carrying capacity’ implies a population that is sustainable indefinitely. It is possible, too, that efforts to anticipate and manage the impacts of powerful new technologies, or to divide up the efforts needed to keep global ecological impacts within sustainable bounds among more than 200 nations all pursuing their own self-interest, may prove too complicated to achieve over the long haul.
One issue with applying carrying capacity to any species is that ecosystems are not constant and change over time, therefore changing the resources available. Research has shown that sometimes the presence of human populations can increase local biodiversity, demonstrating that human habitation does not always lead to deforestation and decreased biodiversity. Another issue to consider when applying carrying capacity, especially to humans, is that measuring food resources is arbitrary. This is due to choosing what to consider (e.g., whether or not to include plants that are not available every year), how to classify what is considered (e.g., classifying edible plants that are not usually eaten as food resources or not), and determining if caloric values or nutritional values are privileged. Additional layers to this for humans are their cultural differences in taste (e.g., some consume flying termites) and individual choices on what to invest their labor into (e.g., fishing vs. farming), both of which vary over time. This leads to the need to determine whether or not to include all food resources or only those the population considered will consume. Carrying capacity measurements over large areas also assumes homogeneity in the resources available but this does not account for how resources and access to them can greatly vary within regions and populations. They also assume that the populations in the region only rely on that region’s resources even though humans exchange resources with others from other regions and there are few, if any, isolated populations. Variations in standards of living which directly impact resource consumption are also not taken into account. These issues show that while there are limits to resources, a more complex model of how humans interact with their ecosystem needs to be used to understand them.
Recent warnings that humanity may have exceeded Earth's carrying capacity
Between 1900 and 2020, Earth's human population increased from 1.6 billion to 7.8 billion (a 390% increase). These successes greatly increased human resource demands, generating significant environmental degradation.
Millennium ecosystem assessment
The Millennium Ecosystem Assessment (MEA) of 2005 was a massive, collaborative effort to assess the state of Earth's ecosystems, involving more than 1,300 experts worldwide. Their first two of four main findings were the following. The first finding is:

"Over the past 50 years, humans have changed ecosystems more rapidly and extensively than in any comparable period of time in human history, largely to meet rapidly growing demands for food, fresh water, timber, fiber, and fuel. This has resulted in a substantial and largely irreversible loss in the diversity of life on Earth."

The second of the four main findings is:

"The changes that have been made to ecosystems have contributed to substantial net gains in human well-being and economic development, but these gains have been achieved at growing costs in the form of the degradation of many ecosystem services, increased risks of nonlinear changes, and the exacerbation of poverty for some groups of people. These problems, unless addressed, will substantially diminish the benefits that future generations obtain from ecosystems."

According to the MEA, these unprecedented environmental changes threaten to reduce the Earth's long-term human carrying capacity. "The degradation of ecosystem services could grow significantly worse during the first half of this [21st] century," they write, serving as a barrier to improving the lives of poor people around the world.
Critiques of Carrying Capacity with Relation to Humans
Humans and human culture are highly adaptable and have overcome problems that once seemed insurmountable. This is not to say that carrying capacity should not be considered, but that it should be taken with some skepticism when presented as concretely evidenced proof. Many biologists, ecologists, and social scientists have disposed of the term altogether because its generalizations gloss over the complexity of interactions that take place at the micro and macro level. Carrying capacity in a human environment is subject to change at any time due to the highly adaptable nature of human society and culture: if resources, time, and energy are put into an issue, a solution may well emerge, and technological, social, and institutional adaptations could be accelerated, especially in a time of need, to solve problems or, in this case, increase carrying capacity. This should not, however, be used as an excuse to overexploit or take advantage of the land or resources that are available. Some resources on this Earth are limited and most certainly will run out if overused or used without proper oversight; if things are left unchecked, overconsumption and exploitation of land and resources are likely to occur.
Ecological footprint accounting
Ecological Footprint accounting measures the demands people make on nature and compares them to available supplies, for both individual countries and the world as a whole. Developed originally by Mathis Wackernagel and William Rees, it has been refined and applied in a variety of contexts over the years by Global Footprint Network (GFN). On the demand side, the Ecological Footprint measures how fast a population uses resources and generates wastes, with a focus on five main areas: carbon emissions (or carbon footprint), land devoted to direct settlement, timber and paper use, food and fiber use, and seafood consumption. It converts these into per capita or total hectares used. On the supply side, national or global biocapacity represents the productivity of ecological assets in a particular nation or the world as a whole; this includes “cropland, grazing land, forest land, fishing grounds, and built-up land.” Again, the various metrics that capture biocapacity are translated into the single measure of hectares of available land. As Global Footprint Network (GFN) states:

“Each city, state or nation’s Ecological Footprint can be compared to its biocapacity, or that of the world. If a population’s Ecological Footprint exceeds the region’s biocapacity, that region runs a biocapacity deficit. Its demand for the goods and services that its land and seas can provide—fruits and vegetables, meat, fish, wood, cotton for clothing, and carbon dioxide absorption—exceeds what the region’s ecosystems can regenerate. In more popular communications, this is called ‘an ecological deficit.’ A region in ecological deficit meets demand by importing, liquidating its own ecological assets (such as overfishing), and/or emitting carbon dioxide into the atmosphere. If a region’s biocapacity exceeds its Ecological Footprint, it has a biocapacity reserve.”

According to the GFN's calculations, humanity has been using resources and generating wastes in excess of sustainability since approximately 1970; currently humanity uses Earth's resources at approximately 170% of capacity. This implies that humanity is well over Earth's human carrying capacity for our current levels of affluence and technology use. According to Global Footprint Network:

“In 2024, [Earth Overshoot Day] fell on August 1. Earth Overshoot Day marks the date when humanity has exhausted nature’s budget for the year. For the rest of the year, we are maintaining our ecological deficit by drawing down local resource stocks and accumulating carbon dioxide in the atmosphere. We are operating in overshoot.”

The concept of ‘ecological overshoot’ can be seen as equivalent to exceeding human carrying capacity. According to the most recent calculations from Global Footprint Network, most of the world's residents live in countries in ecological overshoot (see the map on the right).
This includes countries with dense populations (such as China, India, and the Philippines), countries with high per capita consumption and resource use (France, Germany, and Saudi Arabia), and countries with both high per capita consumption and large numbers of people (Japan, the United Kingdom, and the United States).
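To make the accounting concrete, the following is a minimal Python sketch of the deficit/reserve comparison and of GFN's published Earth Overshoot Day formula (day of year ≈ 365 × biocapacity / footprint). The function names are hypothetical and the totals are illustrative placeholders, not GFN data.

```python
from datetime import date, timedelta

def biocapacity_balance(footprint_gha, biocapacity_gha):
    """Compare demand to supply, both in global hectares (gha):
    a deficit if demand exceeds supply, otherwise a reserve."""
    balance = biocapacity_gha - footprint_gha
    status = "biocapacity reserve" if balance >= 0 else "biocapacity deficit"
    return status, abs(balance)

def overshoot_day(footprint_gha, biocapacity_gha, year=2024):
    """Earth Overshoot Day: day-of-year = 365 * biocapacity / footprint.
    Returns None if demand does not exceed the annual budget."""
    if footprint_gha <= biocapacity_gha:
        return None
    day = int(365 * biocapacity_gha / footprint_gha)
    return date(year, 1, 1) + timedelta(days=day - 1)

# Placeholder totals with demand at roughly 170% of supply, as cited above.
demand_gha, supply_gha = 20.6e9, 12.1e9
print(biocapacity_balance(demand_gha, supply_gha))  # ('biocapacity deficit', ~8.5e9)
print(overshoot_day(demand_gha, supply_gha))        # 2024-08-01
```

With these placeholder totals the budget runs out on day 214 of the year, i.e. August 1, matching the 2024 date quoted above.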
Planetary boundaries framework
According to its developers, the planetary boundaries framework defines “a safe operating space for humanity based on the intrinsic biophysical processes that regulate the stability of the Earth system.” Human civilization has evolved in the relative stability of the Holocene epoch; crossing planetary boundaries for safe levels of atmospheric carbon, ocean acidity, or one of the other stated boundaries could send the global ecosystem spiraling into novel conditions that are less hospitable to life—possibly reducing global human carrying capacity. This framework, developed in an article published in 2009 in Nature and then updated in two articles published in 2015 in Science and in 2018 in PNAS, identifies nine stressors of planetary support systems that need to stay within critical limits to preserve stable and safe biospheric conditions (see figure below). Climate change and biodiversity loss are seen as especially crucial, since on their own, they could push the Earth system out of the Holocene state: “transitions between time periods in Earth history have often been delineated by substantial shifts in climate, the biosphere, or both.”
The scientific consensus is that humanity has exceeded three to five of the nine planetary boundaries for safe use of the biosphere and is pressing hard on several more. By itself, crossing one of the planetary boundaries does not prove humanity has exceeded Earth's human carrying capacity; perhaps technological improvements or clever management might reduce this stressor and bring us back within the biosphere's safe operating space. But when several boundaries are crossed, it becomes harder to argue that carrying capacity has not been breached. Because a smaller population helps reduce all nine planetary stressors, the more boundaries are crossed, the clearer it appears that reducing human numbers is part of what is needed to get back within a safe operating space. Population growth regularly tops the list of causes of humanity's increasing impact on the natural environment in the Earth system science literature. Recently, planetary boundaries developer Will Steffen and co-authors ranked global population change as the leading indicator of the influence of socio-economic trends on the functioning of the Earth system in the modern era, post-1750.
See also
Further reading
Kin, Cheng Sok, et al. "Predicting Earth's Carrying Capacity of Human Population as the Predator and the Natural Resources as the Prey in the Modified Lotka-Volterra Equations with Time-dependent Parameters." arXiv preprint arXiv:1904.05002 (2019).
References
Control of demographics
Demographics indicators
Ecological metrics
Population ecology
Economic geography
Ecological economics
Environmental terminology | Carrying capacity | [
"Mathematics"
] | 5,407 | [
"Ecological metrics",
"Quantity",
"Metrics"
] |
47,592 | https://en.wikipedia.org/wiki/Waveform | In electronics, acoustics, and related fields, the waveform of a signal is the shape of its graph as a function of time, independent of its time and magnitude scales and of any displacement in time. Periodic waveforms repeat regularly at a constant period. The term can also be used for non-periodic or aperiodic signals, like chirps and pulses.
In electronics, the term is usually applied to time-varying voltages, currents, or electromagnetic fields. In acoustics, it is usually applied to steady periodic sounds — variations of pressure in air or other media. In these cases, the waveform is an attribute that is independent of the frequency, amplitude, or phase shift of the signal.
The waveform of an electrical signal can be visualized in an oscilloscope or any other device that can capture and plot its value at various times, with suitable scales in the time and value axes. The electrocardiograph is a medical device that records the waveform of the electric signals associated with the beating of the heart; that waveform has important diagnostic value. Waveform generators, which can output a periodic voltage or current with one of several waveforms, are a common tool in electronics laboratories and workshops.
The waveform of a steady periodic sound affects its timbre. Synthesizers and modern keyboards can generate sounds with many complicated waveforms.
Common periodic waveforms
Simple examples of periodic waveforms include the following, where $t$ is time, $\lambda$ is wavelength (the period of the waveform), $a$ is amplitude, and $\phi$ is phase; writing $u = 2\pi t/\lambda - \phi$ for the instantaneous phase, a short code sketch generating all four follows the list:
Sine wave: $y(t) = a \sin u$. The amplitude of the waveform follows a trigonometric sine function with respect to time.
Square wave: $y(t) = a \,\operatorname{sgn}(\sin u)$. This waveform is commonly used to represent digital information. A square wave of constant period contains odd harmonics that decrease at −6 dB/octave.
Triangle wave: $y(t) = \frac{2a}{\pi} \arcsin(\sin u)$. It contains odd harmonics that decrease at −12 dB/octave.
Sawtooth wave: $y(t) = \frac{2a}{\pi} \arctan(\tan \frac{u}{2})$. This looks like the teeth of a saw. Found often in time bases for display scanning. It is used as the starting point for subtractive synthesis, as a sawtooth wave of constant period contains odd and even harmonics that decrease at −6 dB/octave.
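As referenced above, the following is a minimal, self-contained NumPy sketch evaluating the four ideal waveforms from the closed forms given in the list; the function name, parameter names, and sampling choices are illustrative, not from any particular library.

```python
import numpy as np

def waveform(kind, t, wavelength=1.0, amplitude=1.0, phase=0.0):
    """Evaluate an ideal periodic waveform at times t (a NumPy array).

    kind is one of 'sine', 'square', 'triangle', 'sawtooth'.
    u = 2*pi*t/wavelength - phase is the instantaneous phase.
    """
    u = 2 * np.pi * t / wavelength - phase
    if kind == "sine":
        return amplitude * np.sin(u)
    if kind == "square":    # sign of the sine: alternates between +a and -a
        return amplitude * np.sign(np.sin(u))
    if kind == "triangle":  # arcsin(sin u) folds the phase into a triangle
        return (2 * amplitude / np.pi) * np.arcsin(np.sin(u))
    if kind == "sawtooth":  # arctan(tan(u/2)) ramps linearly, then wraps
        return (2 * amplitude / np.pi) * np.arctan(np.tan(u / 2))
    raise ValueError(f"unknown waveform kind: {kind}")

t = np.linspace(0.0, 2.0, 2000)  # two periods at the default wavelength
for kind in ("sine", "square", "triangle", "sawtooth"):
    y = waveform(kind, t)
    print(f"{kind:9s} min={y.min():+.2f} max={y.max():+.2f}")
```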
The Fourier series describes the decomposition of periodic waveforms, such that any periodic waveform can be formed by the sum of a (possibly infinite) set of fundamental and harmonic components. Finite-energy non-periodic waveforms can be analyzed into sinusoids by the Fourier transform.
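For example, the square wave's Fourier series keeps only the odd harmonics, $y(t) = \frac{4a}{\pi} \sum_{k=1,3,5,\dots} \frac{\sin(k u)}{k}$, whose $1/k$ falloff is the −6 dB/octave decrease noted above. A brief sketch of the partial sums (parameter names illustrative):

```python
import numpy as np

def square_partial_sum(t, n_terms=9, wavelength=1.0, amplitude=1.0):
    """Sum the first n_terms odd harmonics of a square wave's Fourier
    series: (4a/pi) * sum over odd k of sin(k*u)/k."""
    u = 2 * np.pi * t / wavelength
    y = np.zeros_like(t)
    for i in range(n_terms):
        k = 2 * i + 1          # odd harmonics 1, 3, 5, ...
        y += np.sin(k * u) / k
    return (4 * amplitude / np.pi) * y

t = np.linspace(0.0, 1.0, 1001)
target = np.sign(np.sin(2 * np.pi * t))
for n in (1, 3, 10, 100):
    rms = np.sqrt(np.mean((square_partial_sum(t, n) - target) ** 2))
    # RMS error shrinks as harmonics are added; Gibbs ringing at the
    # jumps prevents uniform (pointwise) convergence there.
    print(f"{n:3d} odd harmonics -> RMS error {rms:.3f}")
```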
Other periodic waveforms are often called composite waveforms and can often be described as a combination of a number of sinusoidal waves or other basis functions added together.
See also
Arbitrary waveform generator
Carrier wave
Crest factor
Continuous waveform
Envelope (music)
Frequency domain
Phase offset modulation
Spectrum analyzer
Waveform monitor
Waveform viewer
Wave packet
References
Further reading
Yuchuan Wei, Qishan Zhang. Common Waveform Analysis: A New And Practical Generalization of Fourier Analysis. Springer US, Aug 31, 2000
Hao He, Jian Li, and Petre Stoica. Waveform design for active sensing systems: a computational approach. Cambridge University Press, 2012.
Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005.
Jayant, Nuggehally S and Noll, Peter. Digital coding of waveforms: principles and applications to speech and video. Englewood Cliffs, NJ, 1984.
M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014.
Nadav Levanon, and Eli Mozeson. Radar signals. Wiley, 2004.
Jian Li, and Petre Stoica, eds. Robust adaptive beamforming. New Jersey: John Wiley, 2006.
Fulvio Gini, Antonio De Maio, and Lee Patton, eds. Waveform design and diversity for advanced radar systems. Institution of engineering and technology, 2012.
External links
Collection of single cycle waveforms sampled from various sources | Waveform | [
"Physics"
] | 827 | [
"Waves",
"Physical phenomena",
"Waveforms"
] |
47,596 | https://en.wikipedia.org/wiki/Korean%20Peninsula%20Energy%20Development%20Organization | The Korean Peninsula Energy Development Organization (KEDO) was an organization founded on March 15, 1995, by the United States, South Korea, and Japan to implement the 1994 U.S.–North Korea Agreed Framework, which froze North Korea's indigenous nuclear power plant development, centered at the Yongbyon Nuclear Scientific Research Center and suspected of being a step in a nuclear weapons program.
KEDO's principal activity was to construct two light water reactor nuclear power plants in North Korea to replace North Korea's Magnox type reactors. The original target year for completion was 2003.
Since then, other members joined:
1995: Australia, Canada, New Zealand
1996: Argentina, Chile, Indonesia
1997: European Union, Poland
1999: Czech Republic
2000: Uzbekistan
KEDO discussions took place at the level of a U.S. Assistant Secretary of State, South Korea's deputy foreign minister, and the head of the Asian bureau of Japan's Foreign Ministry.
The KEDO Secretariat was located in New York. KEDO was shut down in 2006.
History
Formal ground breaking on the site for two light water reactors (LWR) was on August 19, 1997, at Kumho, 30 km north of Sinpo. The Kumho site had been previously selected for two similar sized reactors that had been promised in the 1980s by the Soviet Union, before its collapse.
Soon after the Agreed Framework was signed, control of the U.S. Congress changed to the Republican Party, which did not support the agreement. Some Republican senators were strongly against the agreement, regarding it as appeasement. KEDO's first director, Stephen Bosworth, later commented: "The Agreed Framework was a political orphan within two weeks after its signature".
Arranging project financing was not easy, and formal invitations to bid were not issued until 1998, by which time the delays were infuriating North Korea. Significant spending on the LWR project did not commence until 2000, with "First Concrete" pouring at the construction site on August 7, 2002. Construction of both reactors was well behind the original schedule.
In the wake of the breakdown of the Agreed Framework in 2003, KEDO largely lost its function. KEDO ensured that the nuclear power plant project assets at the construction site at Kumho in North Korea and at manufacturers' facilities around the world ($1.5 billion invested to date) were preserved and maintained. The project was reported to be about 30% complete. One reactor containment building was about 50% complete and another about 15% finished. No key equipment for the reactors had been moved to the site.
In 2005, there were reports indicating that KEDO had agreed in principle to terminate the light-water reactor project. On January 9, 2006, it was announced that the project was over and the workers would be returning to their home countries. North Korea demanded compensation and has refused to return the approximately $45 million worth of equipment left behind.
Executive Directors
Stephen W. Bosworth, 1995–1997
L. Desaix Anderson, 1997–2001
Charles Kartman, 2001–2005
See also
Division of Korea
Six-party talks
References
External links
Agreement on Supply of a Light-Water Reactor Project to the Democratic People's Republic of Korea - KEDO, 1995
Ten Years of KEDO: What Have We Learned?, U.S. Institute of Peace, March 10, 2005
Half-forgotten project is a key in next round of 6-party talks - JoongAng Daily, September 12, 2005
Kumho: North Korea's nuclear ghost town - Asia Times, September 24, 2005
KEDO Puts Final Nail in N.Korea Reactor Project, The Chosun Ilbo, November 23, 2005
KEDO told to leave North Korea , JoongAng Daily, December 13, 2005
N.Korea says to build light-water nuclear reactors, Reuters, December 20, 2005
An unfair burden, JoongAng Daily, December 23, 2005
What Did We Learn From KEDO?, The Stanley Foundation, November 2006
KEDO Demands $1.9 Bil. Compensation From NK, The Korea Times, January 16, 2007
A History of KEDO 1994-2006, Robert Carlin, Joel Wit, Charles Kartman, Center for International Security and Cooperation, July 18, 2012
Reflections on KEDO: Ambassador Stephen Bosworth, Joel Wit and Robert Carlin (video interview of first KEDO Director), 38 North, July 19, 2012
Organizations established in 1995
International nuclear energy organizations
International organizations based in the United States
Foreign relations of North Korea
Foreign relations of South Korea
Nuclear program of North Korea
Nuclear power in North Korea
Intergovernmental organizations established by treaty | Korean Peninsula Energy Development Organization | [
"Engineering"
] | 941 | [
"International nuclear energy organizations",
"Nuclear organizations"
] |
47,600 | https://en.wikipedia.org/wiki/Simple%20group | In mathematics, a simple group is a nontrivial group whose only normal subgroups are the trivial group and the group itself. A group that is not simple can be broken into two smaller groups, namely a nontrivial normal subgroup and the corresponding quotient group. This process can be repeated, and for finite groups one eventually arrives at uniquely determined simple groups, by the Jordan–Hölder theorem.
The classification of finite simple groups, completed in 2004, is a major milestone in the history of mathematics.
Examples
Finite simple groups
The cyclic group $G = \mathbb{Z}/3\mathbb{Z}$ of congruence classes modulo 3 (see modular arithmetic) is simple. If $H$ is a subgroup of this group, its order (the number of elements) must be a divisor of the order of $G$, which is 3. Since 3 is prime, its only divisors are 1 and 3, so either $H$ is $G$, or $H$ is the trivial group. On the other hand, the group $\mathbb{Z}/12\mathbb{Z}$ is not simple. The set of congruence classes of 0, 4, and 8 modulo 12 is a subgroup of order 3, and it is a normal subgroup since any subgroup of an abelian group is normal. Similarly, the additive group of the integers $(\mathbb{Z}, +)$ is not simple; the set of even integers $2\mathbb{Z}$ is a non-trivial proper normal subgroup.
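A brute-force check of this example, enumerating every subset of $\mathbb{Z}/n\mathbb{Z}$ that contains 0 and is closed under addition and negation (a minimal sketch; the helper name is illustrative):

```python
from itertools import combinations

def subgroups(n):
    """All subgroups of Z/nZ by brute force: subsets containing 0 that
    are closed under addition and negation modulo n."""
    found = []
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            s = set(subset)
            closed = (0 in s
                      and all((a + b) % n in s for a in s for b in s)
                      and all((-a) % n in s for a in s))
            if closed:
                found.append(sorted(s))
    return found

print(subgroups(3))   # [[0], [0, 1, 2]]: only trivial subgroups, so simple
print(subgroups(12))  # includes [0, 4, 8]; normal because Z/12Z is abelian
```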
One may use the same kind of reasoning for any abelian group, to deduce that the only simple abelian groups are the cyclic groups of prime order. The classification of nonabelian simple groups is far less trivial. The smallest nonabelian simple group is the alternating group $A_5$ of order 60, and every simple group of order 60 is isomorphic to $A_5$. The second smallest nonabelian simple group is the projective special linear group PSL(2,7) of order 168, and every simple group of order 168 is isomorphic to PSL(2,7).
Infinite simple groups
The infinite alternating group $A_\infty$, i.e. the group of even finitely supported permutations of the integers, is simple. This group can be written as the increasing union of the finite simple groups $A_n$ with respect to the standard embeddings $A_n \hookrightarrow A_{n+1}$. Another family of examples of infinite simple groups is given by $\mathrm{PSL}_n(k)$, where $k$ is an infinite field and $n \geq 2$.
It is much more difficult to construct finitely generated infinite simple groups. The first existence result is non-explicit; it is due to Graham Higman and consists of simple quotients of the Higman group. Explicit examples, which turn out to be finitely presented, include the infinite Thompson groups $T$ and $V$. Finitely presented torsion-free infinite simple groups were constructed by Burger and Mozes.
Classification
There is as yet no known classification for general (infinite) simple groups, and no such classification is expected. One reason for this is the existence of continuum-many Tarski monster groups for each sufficiently large prime $p$, each of which is simple and whose proper nontrivial subgroups are all cyclic of order $p$.
Finite simple groups
The finite simple groups are important because in a certain sense they are the "basic building blocks" of all finite groups, somewhat similar to the way prime numbers are the basic building blocks of the integers. This is expressed by the Jordan–Hölder theorem, which states that any two composition series of a given group have the same length and the same factors, up to permutation and isomorphism. In a huge collaborative effort, the classification of finite simple groups was declared accomplished in 1983 by Daniel Gorenstein, though some problems surfaced, specifically in the classification of quasithin groups, where the gaps were eventually plugged in 2004.
Briefly, finite simple groups are classified as lying in one of 18 families, or being one of 26 exceptions:
$\mathbb{Z}_p$ – cyclic group of prime order $p$
$A_n$ – alternating group for $n \geq 5$
The alternating groups may be considered as groups of Lie type over the field with one element, which unites this family with the next, and thus all families of non-abelian finite simple groups may be considered to be of Lie type.
One of 16 families of groups of Lie type or their derivatives
The Tits group is generally considered of this form, though strictly speaking it is not of Lie type, but rather index 2 in a group of Lie type.
One of 26 exceptions, the sporadic groups, of which 20 are subgroups or subquotients of the monster group and are referred to as the "Happy Family", while the remaining 6 are referred to as pariahs.
Structure of finite simple groups
The famous theorem of Feit and Thompson states that every group of odd order is solvable. Therefore, every finite simple group has even order unless it is cyclic of prime order.
The Schreier conjecture asserts that the group of outer automorphisms of every finite simple group is solvable. This can be proved using the classification theorem.
History for finite simple groups
There are two threads in the history of finite simple groups – the discovery and construction of specific simple groups and families, which took place from the work of Galois in the 1820s to the construction of the Monster in 1981; and proof that this list was complete, which began in the 19th century and most significantly took place from 1955 through 1983 (when victory was initially declared), but was only generally agreed to be finished in 2004. By 2018, its publication was envisioned as a series of 12 monographs, the tenth of which was published in 2023.
Construction
Simple groups have been studied at least since early Galois theory, where Évariste Galois realized that the simplicity (and hence non-solvability) of the alternating groups on five or more points, which he proved in 1831, was the reason one could not solve the quintic in radicals. Galois also constructed the projective special linear group of a plane over a prime finite field, $\mathrm{PSL}_2(p)$, and remarked that these groups were simple for $p$ not 2 or 3. This is contained in his last letter to Chevalier, and these are the next examples of finite simple groups.
The next discoveries were by Camille Jordan in 1870. Jordan had found 4 families of simple matrix groups over finite fields of prime order, which are now known as the classical groups.
At about the same time, it was shown that a family of five groups, called the Mathieu groups and first described by Émile Léonard Mathieu in 1861 and 1873, were also simple. Since these five groups were constructed by methods which did not yield infinitely many possibilities, they were called "sporadic" by William Burnside in his 1897 textbook.
Later Jordan's results on classical groups were generalized to arbitrary finite fields by Leonard Dickson, following the classification of complex simple Lie algebras by Wilhelm Killing. Dickson also constructed exceptional groups of type G2 and E6, but not of types F4, E7, or E8. In the 1950s the work on groups of Lie type was continued, with Claude Chevalley giving a uniform construction of the classical groups and the groups of exceptional type in a 1955 paper. This omitted certain known groups (the projective unitary groups), which were obtained by "twisting" the Chevalley construction. The remaining groups of Lie type were produced by Steinberg, Tits, and Herzig (who produced 3D4(q) and 2E6(q)) and by Suzuki and Ree (the Suzuki–Ree groups).
These groups (the groups of Lie type, together with the cyclic groups, alternating groups, and the five exceptional Mathieu groups) were believed to be a complete list, but after a lull of almost a century since the work of Mathieu, in 1964 the first Janko group was discovered, and the remaining 20 sporadic groups were discovered or conjectured in 1965–1975, culminating in 1981, when Robert Griess announced that he had constructed Bernd Fischer's "Monster group". The Monster is the largest sporadic simple group, with order 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000. The Monster has a faithful 196,883-dimensional representation in the 196,884-dimensional Griess algebra, meaning that each element of the Monster can be expressed as a 196,883 by 196,883 matrix.
Classification
The full classification is generally accepted as beginning with the Feit–Thompson theorem of 1962–1963 and being completed in 2004.
Soon after the construction of the Monster in 1981, a proof totaling more than 10,000 pages was supplied, and in 1983 Daniel Gorenstein announced that the finite simple groups had all been classified. This was premature, as gaps were later discovered in the classification of quasithin groups. The gaps were filled in 2004 by a 1,300-page classification of quasithin groups, and the proof is now generally accepted as complete.
Tests for nonsimplicity
Sylow's test: Let n be a positive integer that is not prime, and let p be a prime divisor of n. If 1 is the only divisor of n that is congruent to 1 modulo p, then there does not exist a simple group of order n.
Proof: If n is a prime power, then a group of order n has a nontrivial center and, therefore, is not simple. If n is not a prime power, then every Sylow subgroup is proper, and, by Sylow's Third Theorem, we know that the number of Sylow p-subgroups of a group of order n is congruent to 1 modulo p and divides n. Since 1 is the only such number, the Sylow p-subgroup is unique, and therefore it is normal. Since it is a proper, non-identity subgroup, the group is not simple.
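A minimal Python sketch of this test (helper names illustrative). Orders that survive the test, such as 12, 24, 30, and 56, require further arguments to rule out; 60 and 168, which also survive, really do admit simple groups, as noted above.

```python
def prime_factors(n):
    """Set of prime divisors of n, by trial division."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def sylow_rules_out_simple(n):
    """True if Sylow's test (plus the prime-power center argument)
    proves that no simple group of order n exists."""
    ps = prime_factors(n)
    if n in ps:                  # n prime: Z/nZ is simple
        return False
    if len(ps) == 1:             # proper prime power: nontrivial center
        return True
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    for p in ps:
        # If 1 is the only divisor of n congruent to 1 (mod p), the
        # Sylow p-subgroup is unique, hence normal, so n is not simple.
        if all(d == 1 for d in divisors if d % p == 1):
            return True
    return False

# Composite orders below 60 that the test alone cannot rule out:
print([n for n in range(2, 60)
       if not sylow_rules_out_simple(n) and n not in prime_factors(n)])
```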
Burnside: A non-Abelian finite simple group has order divisible by at least three distinct primes. This follows from Burnside's theorem.
See also
Almost simple group
Characteristically simple group
Quasisimple group
Semisimple group
List of finite simple groups
References
Notes
Textbooks
; 2007 preprint.
Papers
Properties of groups | Simple group | [
"Mathematics"
] | 2,055 | [
"Mathematical structures",
"Algebraic structures",
"Properties of groups"
] |
47,607 | https://en.wikipedia.org/wiki/Suspension%20bridge | A suspension bridge is a type of bridge in which the deck is hung below suspension cables on vertical suspenders. The first modern examples of this type of bridge were built in the early 1800s. Simple suspension bridges, which lack vertical suspenders, have a long history in many mountainous parts of the world.
Besides the bridge type most commonly called suspension bridges, covered in this article, there are other types of suspension bridges. The type covered here has cables suspended between towers, with vertical suspender cables that transfer the live and dead loads of the deck below, upon which traffic crosses. This arrangement allows the deck to be level or to arc upward for additional clearance. Like other suspension bridge types, this type often is constructed without the use of falsework.
The suspension cables must be anchored at each end of the bridge, since any load applied to the bridge is transformed into tension in these main cables. The main cables continue beyond the pillars to deck-level supports, and further continue to connections with anchors in the ground. The roadway is supported by vertical suspender cables or rods, called hangers. In some circumstances, the towers may sit on a bluff or canyon edge where the road may proceed directly to the main span. Otherwise, the bridge will typically have two smaller spans, running between either pair of pillars and the highway, which may be supported by suspender cables or their own trusswork. In cases where trusswork supports the spans, there will be very little arc in the outboard main cables.
History
The earliest suspension bridges were ropes slung across a chasm, with a deck possibly at the same level or hung below the ropes such that the rope had a catenary shape.
Precursors
The Tibetan siddha and bridge-builder Thangtong Gyalpo originated the use of iron chains in his version of simple suspension bridges. In 1433, Gyalpo built eight bridges in eastern Bhutan. The last surviving chain-linked bridge of Gyalpo's was the Thangtong Gyalpo Bridge in Duksum en route to Trashi Yangtse, which was finally washed away in 2004. Gyalpo's iron chain bridges did not include a suspended-deck bridge, which is the standard on all modern suspension bridges today. Instead, both the railing and the walking layer of Gyalpo's bridges used wires. The stress points that carried the screed were reinforced by the iron chains. Before the use of iron chains it is thought that Gyalpo used ropes from twisted willows or yak skins. He may have also used tightly bound cloth.
The Inca used rope bridges, documented as early as 1615. It is not known when they were first made. Queshuachaca is considered the last remaining Inca rope bridge and is rebuilt annually.
Chain bridges
The first iron chain suspension bridge in the Western world was the Jacob's Creek Bridge (1801) in Westmoreland County, Pennsylvania, designed by inventor James Finley. Finley's bridge was the first to incorporate all of the necessary components of a modern suspension bridge, including a suspended deck which hung by trusses. Finley patented his design in 1808, and published it in the Philadelphia journal, The Port Folio, in 1810.
Early British chain bridges included the Dryburgh Abbey Bridge (1817) and the 137 m Union Bridge (1820), with spans rapidly increasing to 176 m with the Menai Bridge (1826), "the first important modern suspension bridge". The first chain bridge in German-speaking territories was the Chain Bridge in Nuremberg. The Sagar Iron Suspension Bridge with a 200-foot span (also termed Beose Bridge) was constructed near Sagar, India, during 1828–1830 by Duncan Presgrave, Mint and Assay Master. The Clifton Suspension Bridge (designed in 1831, completed in 1864 with a 214 m central span) is similar to the Sagar bridge. It is one of the longest of the parabolic arc chain type. The current Marlow suspension bridge was designed by William Tierney Clark and was built between 1829 and 1832, replacing a wooden bridge further downstream which collapsed in 1828. It is the only suspension bridge across the non-tidal Thames. The Széchenyi Chain Bridge (designed in 1840, opened in 1849), spanning the River Danube in Budapest, was also designed by William Clark and is a larger-scale version of Marlow Bridge.
An interesting variation is Thornewill and Warham's Ferry Bridge in Burton-on-Trent, Staffordshire (1889), where the chains are not attached to abutments as is usual, but instead are attached to the main girders, which are thus in compression. Here, the chains are made from flat wrought iron plates, eight inches (203 mm) wide by an inch and a half (38 mm) thick, riveted together.
Wire-cable
The first wire-cable suspension bridge was the Spider Bridge at Falls of Schuylkill (1816), a modest and temporary footbridge built following the collapse of James Finley's nearby Chain Bridge at Falls of Schuylkill (1808). The footbridge's span was 124 m, although its deck was only 0.45 m wide.
Development of wire-cable suspension bridges dates to the temporary simple suspension bridge at Annonay built by Marc Seguin and his brothers in 1822. It spanned only 18 m. The first permanent wire cable suspension bridge was Guillaume Henri Dufour's Saint Antoine Bridge in Geneva of 1823, with two 40 m spans. The first with cables assembled in mid-air in the modern method was Joseph Chaley's Grand Pont Suspendu in Fribourg, in 1834.
In the United States, the first major wire-cable suspension bridge was the Wire Bridge at Fairmount in Philadelphia, Pennsylvania. Designed by Charles Ellet Jr. and completed in 1842, it had a span of 109 m. Ellet's Niagara Falls suspension bridge (1847–48) was abandoned before completion. It was used as scaffolding for John A. Roebling's double decker railroad and carriage bridge (1855).
The Otto Beit Bridge (1938–1939) was the first modern suspension bridge outside the United States built with parallel wire cables.
Structure
Bridge main components
A suspension bridge's main components are two towers or pillars, two main suspension cables, four suspension-cable anchorages, multiple suspender cables, and the bridge deck.
Structural analysis
The main cables of a suspension bridge will form a catenary when hanging under their own weight only. When supporting the deck, the cables will instead form a parabola, assuming the weight of the cables is small compared to the weight of the deck. One can see the shape from the constant increase of the gradient of the cable with linear (deck) distance; this increase in gradient at each connection with the deck provides a net upward support force. Combined with the relatively simple constraints placed upon the actual deck, this makes the suspension bridge much simpler to design and analyze than a cable-stayed bridge, in which the deck is in compression.
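The parabola-versus-catenary distinction follows from a one-line force balance; the following is a standard derivation sketch, with the symbols $w$ (deck load per unit horizontal distance) and $H$ (horizontal component of cable tension) chosen here for illustration.

```latex
% Measuring x horizontally from the cable's lowest point, the vertical
% tension component at x must carry the deck weight w x, while the
% horizontal component H is constant along the cable:
\[
  H \frac{dy}{dx} = w x
  \quad\Longrightarrow\quad
  y(x) = \frac{w x^{2}}{2H} ,
\]
% which is a parabola. If the load is instead the cable's own weight per
% unit arc length (deck absent), the balance becomes
% H y'' = w \sqrt{1 + (y')^{2}}, whose solution is the catenary
% y(x) = (H/w)(\cosh(w x / H) - 1).
```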
Comparison with cable-stayed bridge
Cable-stayed bridges and suspension bridges may appear to be similar, but are quite different in principle and in their construction.
In suspension bridges, large main cables (normally two) hang between the towers and are anchored at each end to the ground. The main cables, which are free to move on bearings in the towers, bear the load of the bridge deck. Before the deck is installed, the cables are under tension from their own weight. Along the main cables smaller cables or rods connect to the bridge deck, which is lifted in sections. As this is done, the tension in the cables increases, as it does with the live load of traffic crossing the bridge. The tension on the main cables is transferred to the ground at the anchorages and by downwards compression on the towers.
In cable-stayed bridges, the towers are the primary load-bearing structures that transmit the bridge loads to the ground. A cantilever approach is often used to support the bridge deck near the towers, but lengths further from them are supported by cables running directly to the towers. By design, all static horizontal forces of the cable-stayed bridge are balanced so that the supporting towers do not tend to tilt or slide and so must only resist horizontal forces from the live loads.
Advantages
Longer main spans are achievable than with any other type of bridge.
Less material may be required than other bridge types, even at spans they can achieve, leading to a reduced construction cost.
Except for installation of the initial temporary cables, little or no access from below is required during construction and so a waterway can remain open while the bridge is built above.
They may be better able to withstand earthquake movements than heavier and more rigid bridges.
Bridge decks can have deck sections replaced in order to widen traffic lanes for larger vehicles or add additional width for separated cycling/pedestrian paths.
Disadvantages
Considerable stiffness or aerodynamic profiling may be required to prevent the bridge deck from vibrating under high winds.
The relatively low deck stiffness compared to other (non-suspension) types of bridges makes it more difficult to carry heavy rail traffic in which high concentrated live loads occur.
Some access below may be required during construction to lift the initial cables or to lift deck units. That access can often be avoided in cable-stayed bridge construction.
Variations
Underspanned
In an underspanned suspension bridge, also called under-deck cable-stayed bridge, the main cables hang entirely below the bridge deck, but are still anchored into the ground in a similar way to the conventional type. Very few bridges of this nature have been built, as the deck is inherently less stable than when suspended below the cables. Examples include the Pont des Bergues of 1834 designed by Guillaume Henri Dufour; James Smith's Micklewood Bridge; and a proposal by Robert Stevenson for a bridge over the River Almond near Edinburgh.
Roebling's Delaware Aqueduct (begun 1847) consists of three sections supported by cables. The timber structure essentially hides the cables; and from a quick view, it is not immediately apparent that it is even a suspension bridge.
Suspension cable types
The main suspension cables in older bridges were often made from a chain or linked bars, but modern bridge cables are made from multiple strands of wire. This not only adds strength but improves reliability (often called redundancy in engineering terms) because the failure of a few flawed strands among the hundreds used poses very little threat of failure, whereas a single bad link or eyebar can cause failure of an entire bridge. (The failure of a single eyebar was found to be the cause of the collapse of the Silver Bridge over the Ohio River.) Another reason is that as spans increased, engineers were unable to lift larger chains into position, whereas wire strand cables can be spun one by one in mid-air from a temporary walkway.
Suspender-cable terminations
Poured sockets are used to make a high strength, permanent cable termination. They are created by inserting the suspender wire rope (at the bridge deck supports) into the narrow end of a conical cavity which is oriented in-line with the intended direction of strain. The individual wires are splayed out inside the cone or 'capel', and the cone is then filled with molten lead-antimony-tin (Pb80Sb15Sn5) solder.
Deck structure types
Most suspension bridges have open truss structures to support the roadbed, particularly owing to the unfavorable effects of using plate girders, discovered in the 1940 Tacoma Narrows Bridge collapse. In the 1960s, developments in bridge aerodynamics allowed the re-introduction of plate structures as shallow box girders, first seen on the Severn Bridge, built 1961–1966. In the picture of the Yichang Bridge, note the very sharp entry edge and sloping undergirders in the suspension bridge shown. This enables this type of construction to be used without the danger of vortex shedding and consequent aeroelastic effects, such as those that destroyed the original Tacoma Narrows bridge.
Forces
Three kinds of forces operate on any bridge: the dead load, the live load, and the dynamic load. Dead load refers to the weight of the bridge itself. Like any other structure, a bridge has a tendency to collapse simply because of the gravitational forces acting on the materials of which the bridge is made. Live load refers to traffic that moves across the bridge as well as normal environmental factors such as changes in temperature, precipitation, and winds. Dynamic load refers to environmental factors that go beyond normal weather conditions, factors such as sudden gusts of wind and earthquakes. All three factors must be taken into consideration when building a bridge.
Use other than road and rail
The principles of suspension used on a large scale also appear in contexts less dramatic than road or rail bridges. Light cable suspension may prove less expensive and seem more elegant for a cycle or footbridge than strong girder supports. Examples include the Nescio Bridge in the Netherlands and the Roebling-designed 1904 Riegelsville suspension pedestrian bridge across the Delaware River in Pennsylvania. The longest pedestrian suspension bridge, which spans the River Paiva in the Arouca Geopark, Portugal, opened in April 2021. The 516 m bridge hangs 175 m above the river.
Where such a bridge spans a gap between two buildings, there is no need to construct towers, as the buildings can anchor the cables. Cable suspension may also be augmented by the inherent stiffness of a structure that has much in common with a tubular bridge.
Construction sequence (wire strand cable type)
Typical suspension bridges are constructed using a sequence generally described as follows. Depending on length and size, construction may take anywhere from a year and a half (construction on the original Tacoma Narrows Bridge took only 19 months) up to as long as a decade (the Akashi-Kaikyō Bridge's construction began in May 1986, and the bridge opened in May 1998 – a total of twelve years).
Where the towers are founded on underwater piers, caissons are sunk and any soft bottom is excavated for a foundation. If the bedrock is too deep to be exposed by excavation or the sinking of a caisson, pilings are driven to the bedrock or into overlying hard soil, or a large concrete pad to distribute the weight over less resistant soil may be constructed, first preparing the surface with a bed of compacted gravel. (Such a pad footing can also accommodate the movements of an active fault, and this has been implemented on the foundations of the cable-stayed Rio-Antirio bridge.) The piers are then extended above water level, where they are capped with pedestal bases for the towers.
Where the towers are founded on dry land, deep foundation excavation or pilings are used.
From the tower foundation, towers of single or multiple columns are erected using high-strength reinforced concrete, stonework, or steel. Concrete is used most frequently in modern suspension bridge construction due to the high cost of steel.
Large devices called saddles, which will carry the main suspension cables, are positioned atop the towers. Typically of cast steel, they can also be manufactured using riveted forms, and are equipped with rollers to allow the main cables to shift under construction and normal loads.
Anchorages are constructed, usually in tandem with the towers, to resist the tension of the cables and form as the main anchor system for the entire structure. These are usually anchored in good quality rock but may consist of massive reinforced concrete deadweights within an excavation. The anchorage structure will have multiple protruding open eyebolts enclosed within a secure space.
Temporary suspended walkways, called catwalks, are then erected using a set of guide wires hoisted into place via winches positioned atop the towers. These catwalks follow the curve set by bridge designers for the main cables, in a path mathematically described as a catenary arc. Typical catwalks are usually between eight and ten feet wide and are constructed using wire grate and wood slats.
Gantries are placed upon the catwalks, which will support the main cable spinning reels. Then, cables attached to winches are installed, and in turn, the main cable spinning devices are installed.
High-strength wire (typically 4 or 6 gauge galvanized steel wire) is pulled in a loop by pulleys on the traveler, with one end affixed at an anchorage. When the traveler reaches the opposite anchorage the loop is placed over an open anchor eyebar. Along the catwalk, workers also pull the cable wires to their desired tension. This continues until a bundle, called a "cable strand", is completed and temporarily bundled using stainless steel wire. This process is repeated until the final cable strand is completed. Workers then remove the individual wraps on the cable strands (during the spinning process, the shape of the main cable closely resembles a hexagon), and the entire cable is then compressed by a traveling hydraulic press into a closely packed cylinder and tightly wrapped with additional wire to form the final circular cross-section. The wire used in suspension bridge construction is a galvanized steel wire that has been coated with corrosion inhibitors.
At specific points along the main cable (each being the exact distance horizontally in relation to the next) devices called "cable bands" are installed to carry steel wire ropes called Suspender cables. Each suspender cable is engineered and cut to precise lengths, and are looped over the cable bands. In some bridges, where the towers are close to or on the shore, the suspender cables may be applied only to the central span. Early suspender cables were fitted with zinc jewels and a set of steel washers, which formed the support for the deck. Modern suspender cables carry a shackle-type fitting.
Special lifting hoists attached to the suspenders or from the main cables are used to lift prefabricated sections of the bridge deck to the proper level, provided that the local conditions allow the sections to be carried below the bridge by barge or other means. Otherwise, a traveling cantilever derrick may be used to extend the deck one section at a time starting from the towers and working outward. If the addition of the deck structure extends from the towers, the finished portions of the deck will pitch upward rather sharply, as there is no downward force in the center of the span. Upon completion of the deck, the added load will pull the main cables into an arc mathematically described as a parabola, while the arc of the deck will be as the designer intended – usually a gentle upward arc for added clearance if over a shipping channel, or flat in other cases such as a span over a canyon. Arched suspension spans also give the structure more rigidity and strength.
With the completion of the primary structure various details such as lighting, handrails, finish painting and paving is installed or completed.
Longest spans
Suspension bridges are typically ranked by the length of their main span. These are the ten bridges with the longest spans, followed by the length of the span and the year the bridge opened for traffic:
Other examples
(Chronological)
Union Bridge (England/Scotland, 1820), the longest span (137 m) from 1820 to 1826. The oldest suspension bridge in the world still carrying road traffic.
Roebling's Delaware Aqueduct (USA, 1847), the oldest wire suspension bridge still in service in the United States.
John A. Roebling Suspension Bridge (USA, 1866), then the longest wire suspension bridge in the world at 1,057 feet (322 m) main span.
Brooklyn Bridge (USA, 1883), the first steel-wire suspension bridge.
Bear Mountain Bridge (USA, 1924), the longest suspension span (497 m) from 1924 to 1926. The first suspension bridge to have a concrete deck. The construction methods pioneered in building it would make possible several much larger projects to follow.
Benjamin Franklin Bridge (USA, 1926), replaced Bear Mountain Bridge as the longest span at 1,750 feet between the towers. Includes an active subway line and never-used trolley stations on the span.
San Francisco–Oakland Bay Bridge eastern span (USA, 2013). The eastern portion is a self-anchored suspension bridge, the longest of its type in the world. It replaced a cantilever bridge.
Golden Gate Bridge (USA, 1937), the longest suspension bridge from 1937 to 1964. It was also the world's tallest bridge from 1937 to 1993, and remains the tallest bridge in the United States.
Mackinac Bridge (USA, 1957), the longest suspension bridge between anchorages in the Western hemisphere.
Si Du River Bridge (China, 2009), the highest bridge in the world, with its deck around 500 meters above the surface of the river.
Rod El Farag Axis Bridge (Egypt, 2019), a modern Egyptian steel wire-cable suspension bridge crossing the river Nile, completed in 2019; it holds the Guinness World Record for the widest suspension bridge in the world, with a width of 67.3 meters and a span of 540 meters.
Notable collapses
Broughton Suspension Bridge (England) was an iron chain bridge built in 1826. One of Europe's first suspension bridges, it collapsed in 1831 due to mechanical resonance induced by troops marching in step. As a result of the incident, the British Army issued an order that troops should "break step" when crossing a bridge.
Silver Bridge (USA) was an eyebar chain highway bridge, built in 1928, that collapsed in late 1967, killing forty-six people. The bridge had a low-redundancy design that was difficult to inspect. The collapse inspired legislation to ensure that older bridges were regularly inspected and maintained. Following the collapse a bridge of similar design was immediately closed and eventually demolished. A second similarly-designed bridge had been built with a higher margin of safety and remained in service until 1991.
The Tacoma Narrows Bridge, (USA), 1940, was vulnerable to structural vibration in sustained and moderately strong winds due to its plate-girder deck structure. Wind caused a phenomenon called aeroelastic fluttering that led to its collapse only months after completion. The collapse was captured on film. There were no human deaths in the collapse; several drivers escaped their cars on foot and reached the anchorages before the span dropped.
Yarmouth suspension bridge (England) was built in 1829 and collapsed in 1845, killing 79 people.
Peace River Suspension Bridge (Canada), completed in 1943, collapsed in October 1957 when the soil supporting the north anchor failed, bringing down the entire bridge.
Kutai Kartanegara Bridge (Indonesia) over the Mahakam River, located in Kutai Kartanegara Regency, East Kalimantan, on the Indonesian island of Borneo, was begun in 1995, completed in 2001, and collapsed in 2011. Dozens of vehicles on the bridge fell into the Mahakam River; 24 people died, 31 were seriously injured, 8 suffered minor injuries, and 12 were reported missing, with the injured treated at the Aji Muhammad Parikesit Regional Hospital. Research findings indicate that the collapse was largely caused by the construction failure of a vertical hanger clamp, and that poor maintenance, fatigue in the cable-hanger materials, material quality, and bridge loads exceeding vehicle capacity may also have contributed. In 2013 reconstruction began at the same location, and the new bridge was completed in 2015 with a through arch design.
On 30 October 2022, Jhulto Pul, a pedestrian suspension bridge over the Machchhu River in the city of Morbi, Gujarat, India collapsed, leading to the deaths of at least 141 people.
See also
:Category: Suspension bridges—For articles about specific suspension bridges.
Cable-stayed bridge—Superficially similar to a suspension bridge, but cables from the towers directly support the roadway, rather than the road being suspended indirectly by additional cables from the main cables connecting two towers.
Cable-stayed suspension bridge
Floating cable-stayed bridge
Floating suspension bridge
Inca rope bridge—Has features in common with a suspension bridge and predates them by at least three hundred years. However, in a rope bridge the deck itself is suspended from the anchored piers and the guardrails are non-structural.
List of longest suspension bridge spans
Self-anchored suspension bridge—Combining elements of a suspension bridge and a cable-stayed bridge.
Simple suspension bridge—A modern implementation of the rope bridge using steel cables, although either the upper guardrail or lower footboard cables may be the main structural cables.
Timeline of three longest spans—Whether bridge, aerial tramway, powerline, ceiling or dome etc.
References
External links
New Brunswick Canada suspension footbridges;
Structurae: suspension bridges
American Society of Civil Engineers; History and heritage of civil engineering – bridges
Bridgemeister: Mostly suspension bridges
Bridges by structural type
Structural engineering | Suspension bridge | [
"Engineering"
] | 5,041 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
47,611 | https://en.wikipedia.org/wiki/Doomsday%20Clock | The Doomsday Clock is a symbol that represents the likelihood of a human-made global catastrophe, in the opinion of the members of the Bulletin of the Atomic Scientists. Maintained since 1947 by the Bulletin, which was co-founded by Eugene Rabinowitch and whose early sponsors included Albert Einstein and J. Robert Oppenheimer, the Clock is a metaphor, not a prediction, for threats to humanity from unchecked scientific and technological advances. That is, the time on the Clock is not to be interpreted as actual time. A hypothetical global catastrophe is represented by midnight on the Clock, with the Bulletin's opinion on how close the world is to one represented by a certain number of minutes or seconds to midnight, which is then assessed in January of each year. The main factors influencing the Clock are nuclear warfare, climate change, and artificial intelligence. The Bulletin's Science and Security Board monitors new developments in the life sciences and technology that could inflict irrevocable harm to humanity.
The Clock's original setting in 1947 was 7 minutes to midnight. It has since been set backward 8 times and forward 17 times. The farthest time from midnight was 17 minutes in 1991, and the nearest is 90 seconds, set in January 2023.
The Clock was moved to 150 seconds (2 minutes, 30 seconds) in 2017, then forward to 2 minutes to midnight in January 2018, and left unchanged in 2019. In January 2020, it was moved forward to 100 seconds (1 minute, 40 seconds) before midnight. In January 2023, the Clock was moved forward to 90 seconds (1 minute, 30 seconds) before midnight, a setting announced in a live stream. In an article authored by members of the Bulletin of the Atomic Scientists' Science and Security Board, including public health experts Suzet McKinney and Asha M. George, the move was explained as reflecting, in large part, the global effects of the Russian invasion of Ukraine, along with related biosecurity concerns. The board announced that the clock remained unchanged in January 2024.
History
The Doomsday Clock's origin can be traced to the international group of researchers called the Chicago Atomic Scientists, who had participated in the Manhattan Project. After the atomic bombings of Hiroshima and Nagasaki, they began publishing a mimeographed newsletter and then the magazine, Bulletin of the Atomic Scientists, which, since its inception, has depicted the Clock on every cover. The Clock was first represented in 1947, when the Bulletin co-founder Hyman Goldsmith asked artist Martyl Langsdorf (wife of Manhattan Project research associate and Szilárd petition signatory Alexander Langsdorf, Jr.) to design a cover for the magazine's June 1947 issue; Eugene Rabinowitch, another co-founder of the Bulletin, later explained the thinking behind the design.
Langsdorf chose a clock to reflect the urgency of the problem: like a countdown, the Clock suggests that destruction will naturally occur unless someone takes action to stop it.
In January 2007, designer Michael Bierut, who was on the Bulletin's Governing Board, redesigned the Doomsday Clock to give it a more modern feel. In 2009, the Bulletin ceased its print edition and became one of the first print publications in the U.S. to become entirely digital; the Clock is now found as part of the logo on the Bulletin's website. Information about the Doomsday Clock Symposium, a timeline of the Clock's settings, and multimedia shows about the Clock's history and culture can also be found on the Bulletin's website.
The 5th Doomsday Clock Symposium was held on November 14, 2013, in Washington, D.C.; it was a day-long event that was open to the public and featured panelists discussing various issues on the topic "Communicating Catastrophe". There was also an evening event at the Hirshhorn Museum and Sculpture Garden in conjunction with the Hirshhorn's exhibit "Damage Control: Art and Destruction Since 1950". The panel discussions, held at the American Association for the Advancement of Science, were streamed live from the Bulletin's website and can still be viewed there. Reflecting international events dangerous to humankind, the Clock has been adjusted 25 times since its inception in 1947, when it was set to "seven minutes to midnight".
The Doomsday Clock has become a universally recognized metaphor according to The Two-Way, an NPR blog. According to the Bulletin, the Clock attracts more daily visitors to the Bulletin's site than any other feature.
Basis for settings
"Midnight" has a deeper meaning besides the constant threat of war. There are various elements taken into consideration when the scientists from the Bulletin decide what Midnight and "global catastrophe" really mean in a particular year. They might include "politics, energy, weapons, diplomacy, and climate science"; potential sources of threat include nuclear threats, climate change, bioterrorism, and artificial intelligence. Members of the board judge Midnight by discussing how close they think humanity is to the end of civilization. In 1947, at the beginning of the Cold War, the Clock was started at seven minutes to midnight.
Fluctuations and threats
Before January 2020, the two tied-for-lowest points for the Doomsday Clock were in 1953 (when the Clock was set to two minutes until midnight, after the U.S. and the Soviet Union began testing hydrogen bombs) and in 2018, following the failure of world leaders to address tensions relating to nuclear weapons and climate change issues. In other years, the Clock's time has fluctuated from 17 minutes in 1991 to 2 minutes 30 seconds in 2017. Discussing the change to 2½ minutes in 2017, the first use of a fraction in the Clock's history, Lawrence Krauss, one of the scientists from the Bulletin, warned that political leaders must make decisions based on facts, and those facts "must be taken into account if the future of humanity is to be preserved". In an announcement from the Bulletin about the status of the Clock, they went as far as to call for action from "wise" public officials and "wise" citizens to make an attempt to steer human life away from catastrophe while humans still can.
On January 24, 2018, scientists moved the clock to two minutes to midnight, based on threats greatest in the nuclear realm. The scientists said, of recent moves by North Korea under Kim Jong-un and the administration of Donald Trump in the U.S.: "Hyperbolic rhetoric and provocative actions by both sides have increased the possibility of nuclear war by accident or miscalculation".
The clock was left unchanged in 2019 due to the twin threats of nuclear weapons and climate change, and the problem of those threats being "exacerbated this past year by the increased use of information warfare to undermine democracy around the world, amplifying risk from these and other threats and putting the future of civilization in extraordinary danger".
On January 23, 2020, the Clock was moved to 100 seconds (1 minute, 40 seconds) before midnight. The Bulletin's executive chairman, Jerry Brown, said "the dangerous rivalry and hostility among the superpowers increases the likelihood of nuclear blunder... Climate change just compounds the crisis". The "100 seconds to midnight" setting remained unchanged in 2021 and 2022.
On January 24, 2023, the Clock was moved to 90 seconds (1 minute, 30 seconds) before midnight, the closest it has ever been set to midnight since its inception in 1947. This adjustment was largely attributed to the risk of nuclear escalation that arose from the Russian invasion of Ukraine. Other reasons cited included climate change, biological threats such as COVID-19, and risks associated with disinformation and disruptive technologies.
Criticism
In 2016, Anders Sandberg of the Future of Humanity Institute stated that the "grab bag of threats" currently mixed together by the Clock can induce paralysis. People may be more likely to succeed at smaller, incremental challenges; for example, taking steps to prevent the accidental detonation of nuclear weapons was a small but significant step towards avoiding nuclear war. Alex Barasch in Slate argued that "putting humanity on a permanent, blanket high-alert isn't helpful when it comes to policy or science" and criticized the Bulletin for neither explaining nor attempting to quantify their methodology.
Cognitive psychologist Steven Pinker harshly criticized the Doomsday Clock as a political stunt, pointing to the words of its founder that its purpose was "to preserve civilization by scaring men into rationality". He stated that it is inconsistent and not based on any objective indicators of security, using as an example its being farther from midnight in 1962 during the Cuban Missile Crisis than in the "far calmer 2007". He argued it was another example of humanity's tendency toward historical pessimism, and compared it to other predictions of self-destruction that went unfulfilled.
Conservative media outlets have often criticized the Bulletin and the Doomsday Clock. Keith Payne wrote in 2010 in the National Review that the Clock overestimated the effects of "developments in the areas of nuclear testing and formal arms control". In 2018, Tristin Hopper in the National Post acknowledged that "there are plenty of things to worry about regarding climate change", but stated that climate change is not in the same league as total nuclear destruction. In addition, some critics accuse the Bulletin of pushing a political agenda.
Timeline
In popular culture
"Seven Minutes to Midnight", a 1980 single by Wah! Heat, refers to that year's change of the Doomsday Clock from nine to seven minutes to midnight.
Australian rock band Midnight Oil's 1984 LP Red Sails in the Sunset features a song called "Minutes to Midnight", and the album's cover shows an aerial-view rendering of Sydney after a nuclear strike.
The title of Iron Maiden's 1984 song "2 Minutes to Midnight" is a reference to the Doomsday Clock.
The Doomsday Clock appears in the beginning of the 1985 music video for "Russians" by Sting.
The 1986 short story "The End of the Whole Mess" by Stephen King refers to the Doomsday Clock being set at fifteen seconds before midnight due to elevated geopolitical tension.
The Doomsday Clock was a recurring visual theme in Alan Moore and Dave Gibbons's seminal Watchmen graphic novel series (1986–87), its 2009 film adaptation, and its 2019 television miniseries sequel. Additionally, the comic's sequel series, which takes place in the main DC Universe, takes its title from the Doomsday Clock.
The title of Linkin Park's 2007 album Minutes to Midnight is a reference to the Doomsday Clock. Their music video for "Shadow of the Day", from Minutes to Midnight, represents the Doomsday Clock as an actual clock that reaches midnight at the end of the video.
In the Flobots' song "The Circle in the Square", the lyrics say "the clock is now 11:55 on the big hand", which was the Doomsday Clock's setting in 2012 when the song was released.
The title of the 1982 Doctor Who episode "Four to Doomsday" references the Doomsday Clock. In the 2017 episode "The Pyramid at the End of the World", the Monks changed every clock in the world to three minutes to midnight as a warning about what will happen if humanity does not accept their help. Representatives of the three most powerful armies on Earth agreed not to fight each other, believing a potential war to be the catastrophe. However, the clocks continued to display two minutes to midnight. After the Doctor averted the true catastrophe, an accidental bacteriological disaster, the clocks began moving backwards.
The Doomsday Clock is featured in Yael Bartana's What if Women Ruled the World, which premiered on July 5, 2017 at the Manchester International Festival.
One minute to midnight on the Doomsday Clock is heavily referenced in the grime/punk crossover song "Effed" by Nottingham rapper Snowy and Jason Williamson of Sleaford Mods. Because of the track's political content, there was an initial reluctance from mainstream radio stations to play the track before the 2019 United Kingdom general election. However, the track was later championed by a number of BBC Radio DJs, including punk innovator Iggy Pop.
In the Criminal Minds season 13 episode "The Bunker", the unsubs abduct women using the Doomsday Clock.
The Madam Secretary season 2 episode "On the Clock" features the Doomsday Clock, as the characters try to keep it from moving forward.
The character Bezel in Chikn Nuggit is a personification of the Doomsday Clock.
See also
References
External links
Bulletin of the Atomic Scientists
Timeline of the Doomsday Clock
Alert measurement systems
Clocks
Fear
Nuclear warfare
Political symbols
Symbols introduced in 1947 | Doomsday Clock | ["Physics", "Chemistry", "Technology", "Engineering"] | 2,586 | ["Machines", "Clocks", "Measuring instruments", "Alert measurement systems", "Physical systems", "Nuclear warfare", "Warning systems", "Radioactivity"] |
47,625 | https://en.wikipedia.org/wiki/Yarkovsky%20effect | The Yarkovsky effect is a force acting on a rotating body in space caused by the anisotropic emission of thermal photons, which carry momentum. It is usually considered in relation to meteoroids or small asteroids (about 10 cm to 10 km in diameter), as its influence is most significant for these bodies.
History of discovery
The effect was discovered by the Polish-Russian civil engineer Ivan Osipovich Yarkovsky (1844–1902), who worked in Russia on scientific problems in his spare time. Writing in a pamphlet around the year 1900, Yarkovsky noted that the daily heating of a rotating object in space would cause it to experience a force that, while tiny, could lead to large long-term effects in the orbits of small bodies, especially meteoroids and small asteroids. Yarkovsky's insight would have been forgotten had it not been for the Estonian astronomer Ernst J. Öpik (1893–1985), who read Yarkovsky's pamphlet sometime around 1909. Decades later, Öpik, recalling the pamphlet from memory, discussed the possible importance of the Yarkovsky effect on movement of meteoroids about the Solar System.
Mechanism
The Yarkovsky effect is a consequence of the fact that change in the temperature of an object warmed by radiation (and therefore the intensity of thermal radiation from the object) lags behind changes in the incoming radiation. That is, the surface of the object takes time to become warm when first illuminated, and takes time to cool down when illumination stops. In general there are two components to the effect:
Diurnal effect: On a rotating body illuminated by the Sun (e.g. an asteroid or the Earth), the surface is warmed by solar radiation during the day, and cools at night. The thermal properties of the surface cause a lag between the absorption of radiation from the Sun and the emission of radiation as heat, so the surface is warmest not when the Sun is at its peak but slightly later. This results in a difference between the directions of absorption and re-emission of radiation, which yields a net force along the direction of motion of the orbit. If the object is a prograde rotator, the force is in the direction of motion of the orbit, and causes the semi-major axis of the orbit to increase steadily; the object spirals away from the Sun. A retrograde rotator spirals inward. The diurnal effect is the dominant component for bodies with diameter greater than about 100 m.
Seasonal effect: This is easiest to understand for the idealised case of a non-rotating body orbiting the Sun, for which each "year" consists of exactly one "day". As it travels around its orbit, the "dusk" hemisphere which has been heated over a long preceding time period is invariably in the direction of orbital motion. The excess of thermal radiation in this direction causes a braking force that always causes spiraling inward toward the Sun. In practice, for rotating bodies, this seasonal effect increases along with the axial tilt. It dominates only if the diurnal effect is small enough. This may occur because of very rapid rotation (no time to cool off on the night side, hence an almost uniform longitudinal temperature distribution), small size (the whole body is heated throughout) or an axial tilt close to 90°. The seasonal effect is more important for smaller asteroid fragments (from a few metres up to about 100 m), provided their surfaces are not covered by an insulating regolith layer and they do not have exceedingly slow rotations. Additionally, on very long timescales over which the spin axis of the body may be repeatedly changed by collisions (and hence also the direction of the diurnal effect changes), the seasonal effect will also tend to dominate.
In general, the effect is size-dependent, and will affect the semi-major axis of smaller asteroids, while leaving large asteroids practically unaffected. For kilometre-sized asteroids, the Yarkovsky effect is minuscule over short periods: the force on asteroid 6489 Golevka has been estimated at 0.25 newtons, for a net acceleration of 10⁻¹² m/s². But it is steady; over millions of years an asteroid's orbit can be perturbed enough to transport it from the asteroid belt to the inner Solar System.
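As an order-of-magnitude illustration of those figures (an addition to this copy, not part of the original article), the following minimal Python sketch applies Newton's second law; the mass assumed for Golevka (~2.1 × 10¹¹ kg) is an illustrative value, since the text above quotes only the force:

```python
# Order-of-magnitude check of the Yarkovsky figures quoted above.
# The 0.25 N force is from the text; Golevka's mass is an assumed
# illustrative value (~2.1e11 kg) -- published estimates vary.

FORCE_N = 0.25    # estimated Yarkovsky force on 6489 Golevka, newtons
MASS_KG = 2.1e11  # assumed mass of 6489 Golevka, kilograms

acceleration = FORCE_N / MASS_KG  # Newton's second law, a = F / m
print(f"net acceleration ~ {acceleration:.1e} m/s^2")  # ~1e-12 m/s^2

# Even this tiny acceleration accumulates over geological timescales.
SECONDS_PER_MYR = 3.156e13  # seconds in one million years
delta_v = acceleration * SECONDS_PER_MYR  # naive velocity change per Myr
print(f"naive delta-v per million years ~ {delta_v:.0f} m/s")
```

The printed acceleration reproduces the ~10⁻¹² m/s² figure above; the delta-v line is only a naive accumulation, since the actual drift in semi-major axis depends on the orbit and spin geometry.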
The mechanism is more complicated for bodies in strongly eccentric orbits.
Measurement
The effect was first measured in 1991–2003 on the asteroid 6489 Golevka. The asteroid drifted 15 km from its predicted position over twelve years (the orbit was established with great precision by a series of radar observations in 1991, 1995 and 1999 from the Arecibo radio telescope).
Without direct measurement, it is very hard to predict the exact result of the Yarkovsky effect on a given asteroid's orbit. This is because the magnitude of the effect depends on many variables that are hard to determine from the limited observational information that is available. These include the exact shape of the asteroid, its orientation, and its albedo. Calculations are further complicated by the effects of shadowing and thermal "reillumination", whether caused by local craters or a possible overall concave shape. The Yarkovsky effect also competes with radiation pressure, whose net effect may cause similar small long-term forces for bodies with albedo variations or non-spherical shapes.
As an example, even for the simple case of the pure seasonal Yarkovsky effect on a spherical body in a circular orbit with 90° obliquity, semi-major axis changes could differ by as much as a factor of two between the case of a uniform albedo and the case of a strong north–south albedo asymmetry. Depending on the object's orbit and spin axis, the Yarkovsky change of the semi-major axis may be reversed simply by changing from a spherical to a non-spherical shape.
Despite these difficulties, utilizing the Yarkovsky effect is one scenario under investigation to alter the course of potentially Earth-impacting near-Earth asteroids. Possible asteroid deflection strategies include "painting" the surface of the asteroid or focusing solar radiation onto the asteroid to alter the intensity of the Yarkovsky effect and so alter the orbit of the asteroid away from a collision with Earth. The OSIRIS-REx mission, launched in September 2016, studied the Yarkovsky effect on asteroid Bennu.
In 2020, astronomers confirmed Yarkovsky acceleration of the asteroid 99942 Apophis. The findings are relevant to asteroid impact avoidance as 99942 Apophis was thought to have a very small chance of Earth impact in 2068, and the Yarkovsky effect was a significant source of prediction uncertainty.
In 2021, a multidisciplinary professional-amateur collaboration combined Gaia satellite and ground-based radar measurements with amateur stellar occultation observations to further refine 99942 Apophis's orbit and measure the Yarkovsky acceleration with high precision, to within 0.5%. With these, astronomers were able to eliminate the possibility of a collision with the Earth for at least the next 100 years.
See also
Asteroid
Poynting–Robertson effect
Radiation pressure
YORP effect
References
External links
Asteroid Nudged by Sunlight: Most Precise Measurement of Yarkovsky Effect – (ScienceDaily 2012-05-24)
Asteroids
Concepts in astrophysics
Orbital perturbations
Radiation effects
Rotation | Yarkovsky effect | ["Physics", "Materials_science", "Engineering"] | 1,497 | ["Physical phenomena", "Concepts in astrophysics", "Classical mechanics", "Rotation", "Astrophysics", "Materials science", "Motion (physics)", "Radiation", "Condensed matter physics", "Radiation effects"] |
47,641 | https://en.wikipedia.org/wiki/Standard%20Model | The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
Historical background
In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter.
In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction.
In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form.
In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973, David Gross and Frank Wilczek, and independently David Politzer, discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark.
The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons.
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983; and the ratio of their masses was found to be as the Standard Model predicted.
The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks.
The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg, has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France.
Particle content
The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge.
All particles can be summarized as follows:
Fermions
The Standard Model includes 12 elementary particles of spin ½, known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle, a particle with the same properties except for charges of opposite sign. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations. Each member of a generation has a greater mass than the corresponding particle of generations prior. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter.
There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction.

The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe.
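The generation structure just described can be summarized compactly; the following sketch (an added illustration using standard particle names and charges, not drawn from the article text) encodes the three generations as a small data structure:

```python
# The three generations of fermions described above, as a simple mapping.
# Electric charges are in units of the elementary charge e.
FERMION_GENERATIONS = {
    1: {"quarks": [("up", +2 / 3), ("down", -1 / 3)],
        "leptons": [("electron", -1), ("electron neutrino", 0)]},
    2: {"quarks": [("charm", +2 / 3), ("strange", -1 / 3)],
        "leptons": [("muon", -1), ("muon neutrino", 0)]},
    3: {"quarks": [("top", +2 / 3), ("bottom", -1 / 3)],
        "leptons": [("tau", -1), ("tau neutrino", 0)]},
}

# 3 generations x (2 quarks + 2 leptons) = the 12 fermions of the model.
total = sum(len(g["quarks"]) + len(g["leptons"])
            for g in FERMION_GENERATIONS.values())
assert total == 12
```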
Gauge bosons
The Standard Model includes 4 kinds of gauge bosons of spin 1, bosons being quantum particles with integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains the fundamental forces as arising from fermions exchanging virtual force-carrier particles; at a macroscopic scale, this exchange manifests as a force. Because they have integer spin, gauge bosons do not follow the Pauli exclusion principle that constrains fermions, and so have no theoretical limit on their spatial density. The types of gauge bosons are described below.
Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED).
Strong Interactions: Gluons mediate the strong interactions, which binds quarks to each other by influencing the color charge, with the interactions being described in the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves.
Weak Interactions: The W+, W−, and Z0 gauge bosons mediate the weak interactions between all fermions, being responsible for radioactivity. They have mass, with the Z0 having more mass than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles. The W± carries an electric charge of +1 or −1 and couples to the electromagnetic interaction. The electrically neutral Z0 boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons along with the photons are grouped together, as collectively mediating the electroweak interaction.
Gravity: It is currently unexplained in the Standard Model; the hypothetical mediating particle, the graviton, has been proposed but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as being the best explanation for gravity. In general relativity, gravity is explained as being the geometric curving of spacetime.
The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons.
Higgs boson
The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0.
The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above 1.4 TeV; therefore, the LHC (designed to collide two proton beams) was built to answer the question of whether the Higgs boson actually exists.
On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about 125 GeV/c² (about 133 proton masses, on the order of 10⁻²⁵ kg), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson.
Theoretical aspects
Construction of the Standard Model Lagrangian
Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time.
The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries.
The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model. Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment.
Quantum chromodynamics sector
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, which is a Yang–Mills gauge theory with SU(3) symmetry, generated by $T^a = \lambda^a/2$. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by

$$\mathcal{L}_{\text{QCD}} = \overline{\psi}\, i\gamma^{\mu} D_{\mu} \psi - \frac{1}{4} G^{a}_{\mu\nu} G^{a\,\mu\nu},$$

where $\psi$ is a three-component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green) and summation over flavor (i.e. up, down, strange, etc.) is implied.

The gauge covariant derivative of QCD is defined by $D_{\mu} \equiv \partial_{\mu} - i g_s \tfrac{1}{2} \lambda^{a} G^{a}_{\mu}$, where
$\gamma^{\mu}$ are the Dirac matrices,
$G^{a}_{\mu}$ is the 8-component ($a = 1, 2, \dots, 8$) SU(3) gauge field,
$\lambda^{a}$ are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group,
$G^{a}_{\mu\nu}$ represents the gluon field strength tensor, and
$g_s$ is the strong coupling constant.

The QCD Lagrangian is invariant under local SU(3) gauge transformations; i.e., transformations of the form $\psi \rightarrow \psi' = U\psi$, where $U = e^{-i g_s \lambda^{a} \phi^{a}(x)}$ is a 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and $\phi^{a}(x)$ is an arbitrary function of spacetime.
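To make the group-theoretic conventions above concrete, here is a short numerical sketch (an illustration added to this copy, not drawn from the article) that constructs the eight Gell-Mann matrices and verifies the standard normalization $\operatorname{tr}(\lambda^a \lambda^b) = 2\delta^{ab}$, equivalent to $\operatorname{tr}(T^a T^b) = \tfrac{1}{2}\delta^{ab}$ for $T^a = \lambda^a/2$:

```python
import numpy as np

# The eight 3x3 Gell-Mann matrices, the conventional basis for the
# generators of SU(3) mentioned above (T^a = lambda^a / 2).
s3 = 1 / np.sqrt(3)
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
], dtype=complex)

# Check the normalization tr(lambda^a lambda^b) = 2 * delta_ab.
for a in range(8):
    for b in range(8):
        tr = np.trace(lam[a] @ lam[b])
        expected = 2.0 if a == b else 0.0
        assert abs(tr - expected) < 1e-12

# Each generator is traceless and Hermitian, as required for SU(3).
assert all(abs(np.trace(m)) < 1e-12 for m in lam)
assert all(np.allclose(m, m.conj().T) for m in lam)
print("Gell-Mann matrices satisfy the SU(3) generator conventions.")
```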
Electroweak sector
The electroweak sector is a Yang–Mills gauge theory with the symmetry group U(1) × SU(2)$_L$,

$$\mathcal{L}_{\text{EW}} = \overline{Q}_{Lj}\, i\gamma^{\mu} D_{\mu} Q_{Lj} + \overline{u}_{Rj}\, i\gamma^{\mu} D_{\mu} u_{Rj} + \overline{d}_{Rj}\, i\gamma^{\mu} D_{\mu} d_{Rj} + \overline{\ell}_{Lj}\, i\gamma^{\mu} D_{\mu} \ell_{Lj} + \overline{e}_{Rj}\, i\gamma^{\mu} D_{\mu} e_{Rj} - \tfrac{1}{4} W^{a}_{\mu\nu} W^{a\,\mu\nu} - \tfrac{1}{4} B_{\mu\nu} B^{\mu\nu},$$

where the subscript $j$ sums over the three generations of fermions; $Q_{Lj}$, $u_{Rj}$, and $d_{Rj}$ are the left-handed doublet, right-handed singlet up type, and right-handed singlet down type quark fields; and $\ell_{Lj}$ and $e_{Rj}$ are the left-handed doublet and right-handed singlet lepton fields.

The electroweak gauge covariant derivative is defined as $D_{\mu} \equiv \partial_{\mu} - i g' \tfrac{1}{2} Y_{\text{W}} B_{\mu} - i g \tfrac{1}{2} \vec{\tau}_{L} \cdot \vec{W}_{\mu}$, where
$B_{\mu}$ is the U(1) gauge field,
$Y_{\text{W}}$ is the weak hypercharge – the generator of the U(1) group,
$\vec{W}_{\mu}$ is the 3-component SU(2) gauge field,
$\vec{\tau}_{L}$ are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions,
$g'$ and $g$ are the U(1) and SU(2) coupling constants respectively,
$W^{a}_{\mu\nu}$ ($a = 1, 2, 3$) and $B_{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge fields.

Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form $m\overline{\psi}\psi$ do not respect gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field.
Higgs sector
In the Standard Model, the Higgs field is an SU(2)$_L$ doublet of complex scalar fields with four degrees of freedom:

$$\varphi = \begin{pmatrix} \varphi^{+} \\ \varphi^{0} \end{pmatrix},$$

where the superscripts + and 0 indicate the electric charge of the components. The weak hypercharge $Y_{\text{W}}$ of both components is 1. Before symmetry breaking, the Higgs Lagrangian is

$$\mathcal{L}_{\text{H}} = \left(D_{\mu}\varphi\right)^{\dagger}\left(D^{\mu}\varphi\right) - V(\varphi),$$

where $D_{\mu}$ is the electroweak gauge covariant derivative defined above and $V(\varphi)$ is the potential of the Higgs field. The square of the covariant derivative leads to three and four point interactions between the electroweak gauge fields $W^{a}_{\mu}$ and $B_{\mu}$ and the scalar field $\varphi$. The scalar potential is given by

$$V(\varphi) = -\mu^{2}\varphi^{\dagger}\varphi + \lambda\left(\varphi^{\dagger}\varphi\right)^{2},$$

where $\mu^{2} > 0$, so that $\varphi$ acquires a non-zero vacuum expectation value, which generates masses for the electroweak gauge fields (the Higgs mechanism), and $\lambda > 0$, so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field $\varphi$.

The minimum of the potential is degenerate with an infinite number of equivalent ground state solutions, which occurs when $\varphi^{\dagger}\varphi = \tfrac{\mu^{2}}{2\lambda}$. It is possible to perform a gauge transformation on $\varphi$ such that the ground state is transformed to a basis where $\varphi^{+} = 0$ and $\varphi^{0} = \tfrac{\mu}{\sqrt{2\lambda}}$. This breaks the symmetry of the ground state. The expectation value of $\varphi$ now becomes

$$\langle\varphi\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix},$$

where $v \equiv \tfrac{\mu}{\sqrt{\lambda}}$ has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV/c².

After symmetry breaking, the masses of the W and Z are given by $m_{\text{W}} = \tfrac{1}{2} g v$ and $m_{\text{Z}} = \tfrac{1}{2}\sqrt{g^{2} + g'^{2}}\, v$, which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is $m_{\text{H}} = \sqrt{2\mu^{2}} = \sqrt{2\lambda}\, v$. Since $\mu$ and $\lambda$ are free parameters, the Higgs's mass could not be predicted beforehand and had to be determined experimentally.
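As a numerical illustration of these tree-level mass relations (an addition to this copy, not from the article), the following sketch inverts them; $v = 246$ GeV is the measured scale quoted above, while the W and Z masses used are standard rounded values assumed here for illustration:

```python
import math

# Tree-level electroweak mass relations from the text:
#   m_W = g * v / 2    and    m_Z = sqrt(g**2 + g'**2) * v / 2.
# v = 246 GeV is the measured electroweak scale quoted above; the W and Z
# masses below are standard rounded values, assumed here for illustration.
v = 246.0     # GeV
m_W = 80.4    # GeV (assumed rounded value)
m_Z = 91.2    # GeV (assumed rounded value)

g = 2.0 * m_W / v                                    # SU(2) coupling
g_prime = math.sqrt((2.0 * m_Z / v) ** 2 - g ** 2)   # U(1) coupling

print(f"g ~ {g:.2f}, g' ~ {g_prime:.2f}")            # ~0.65 and ~0.35

# The same relations give the weak mixing angle, cos(theta_W) = m_W / m_Z:
theta_W = math.acos(m_W / m_Z)
print(f"sin^2(theta_W) ~ {math.sin(theta_W) ** 2:.3f}")  # ~0.22
```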
Yukawa sector
The Yukawa interaction terms are

$$\mathcal{L}_{\text{Yukawa}} = (Y_{\text{u}})_{mn} (\overline{Q}_{L})_{m} \tilde{\varphi}\, (u_{R})_{n} + (Y_{\text{d}})_{mn} (\overline{Q}_{L})_{m} \varphi\, (d_{R})_{n} + (Y_{\text{e}})_{mn} (\overline{\ell}_{L})_{m} \varphi\, (e_{R})_{n} + \text{h.c.},$$

where $Y_{\text{u}}$, $Y_{\text{d}}$, and $Y_{\text{e}}$ are 3 × 3 matrices of Yukawa couplings, with the $mn$ term giving the coupling of the generations $m$ and $n$, and h.c. means Hermitian conjugate of preceding terms. The fields $Q_{L}$ and $\ell_{L}$ are left-handed quark and lepton doublets. Likewise, $u_{R}$, $d_{R}$, and $e_{R}$ are right-handed up-type quark, down-type quark, and lepton singlets. Finally $\varphi$ is the Higgs doublet and $\tilde{\varphi} = i\tau_{2}\varphi^{*}$ is its charge conjugate state.
The Yukawa terms are invariant under the SU(2) × U(1) gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking.
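To illustrate how the Yukawa couplings set the fermion masses after symmetry breaking (at tree level, $m_f = y_f v/\sqrt{2}$), here is a brief sketch (an added illustration; the rounded fermion masses are standard assumed values, not taken from the article):

```python
import math

V = 246.0  # GeV, the electroweak scale from the Higgs sector above

# Rounded fermion masses in GeV (standard values, assumed here for
# illustration); tree-level relation: m_f = y_f * v / sqrt(2).
masses_gev = {"electron": 0.000511, "muon": 0.1057,
              "bottom": 4.18, "top": 172.7}

for name, m in masses_gev.items():
    y = math.sqrt(2) * m / V   # inverting m_f = y_f * v / sqrt(2)
    print(f"{name:8s}: Yukawa coupling y ~ {y:.2e}")

# The top quark's coupling comes out close to 1, while the electron's
# is ~3e-6, showing how the Yukawa matrices encode the very different
# fermion masses generated after spontaneous symmetry breaking.
```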
Fundamental interactions
The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles.
Gravity
Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales that it is essentially unmeasurable. The graviton is postulated to be the mediating particle, but has not yet been proved to exist.
Electromagnetism
Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics.
Weak nuclear force
The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range because its mediating particles, the W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge. Interactions mediated by W bosons are charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, aside from being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions.
In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies.
Strong nuclear force
The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves have color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation, therefore quarks can only exist in hadrons and never in isolation, at low energies. Asymptotic freedom means that the strong force becomes weaker as the energy scale increases. At their respective scales, the strong force overpowers the electrostatic repulsion of protons within nuclei and of quarks within hadrons.
While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue is "leaked", which appears as the exchange of virtual mesons, that causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model.
Tests and predictions
The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision.
The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed.
Challenges
Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. A key question related to the consistency is the Yang–Mills existence and mass gap problem.
Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done.
If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory.
This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 10¹⁴ GeV, the neutrino masses can be of the right order of magnitude.
Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include:
The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe.
Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters.
The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles.
The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.
Currently, no proposed theory of everything has been widely accepted or verified.
See also
Yang–Mills theory
Fundamental interaction:
Quantum electrodynamics
Strong interaction: Color charge, Quantum chromodynamics, Quark model
Weak interaction: Electroweak interaction, Fermi's interaction, Weak hypercharge, Weak isospin
Gauge theory: Introduction to gauge theory
Generation
Higgs mechanism: Higgs boson, Alternatives to the Standard Higgs Model
Lagrangian
Open questions: CP violation, Neutrino masses, QCD matter, Quantum triviality
Quantum field theory
Standard Model: Mathematical formulation of, Physics beyond the Standard Model
Electron electric dipole moment
Notes
References
Further reading
External links
"The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast.
The Standard Model on the CERN website explains how the basic building blocks of matter interact, governed by four fundamental forces.
Particle Physics: Standard Model, Leonard Susskind lectures (2010).
Concepts in physics
Particle physics | Standard Model | ["Physics"] | 5,764 | ["Standard Model", "Particle physics", "nan"] |
47,643 | https://en.wikipedia.org/wiki/Michel%20Foucault | Paul-Michel Foucault ( , ; ; 15 October 192625 June 1984) was a French historian of ideas and philosopher who was also an author, literary critic, political activist, and teacher. Foucault's theories primarily addressed the relationships between power versus knowledge and liberty, and he analyzed how they are used as a form of social control through multiple institutions. Though often cited as a structuralist and postmodernist, Foucault rejected these labels and sought to critique authority without limits on himself. His thought has influenced academics within a large number of contrasting areas of study, with this especially including those working in anthropology, communication studies, criminology, cultural studies, feminism, literary theory, psychology, and sociology. His efforts against homophobia and racial prejudice as well as against other ideological doctrines have also shaped research into critical theory and Marxism–Leninism alongside other topics.
Born in Poitiers, France, into an upper-middle-class family, Foucault was educated at the Lycée Henri-IV, at the École Normale Supérieure, where he developed an interest in philosophy and came under the influence of his tutors Jean Hyppolite and Louis Althusser, and at the University of Paris (Sorbonne), where he earned degrees in philosophy and psychology. After several years as a cultural diplomat abroad, he returned to France and published his first major book, The History of Madness (1961). After obtaining work between 1960 and 1966 at the University of Clermont-Ferrand, he produced The Birth of the Clinic (1963) and The Order of Things (1966), publications that displayed his increasing involvement with structuralism, from which he later distanced himself. These first three histories exemplified a historiographical technique Foucault was developing, which he called "archaeology".
From 1966 to 1968, Foucault lectured at the University of Tunis before returning to France, where he became head of the philosophy department at the new experimental university of Paris VIII. Foucault subsequently published The Archaeology of Knowledge (1969). In 1970, Foucault was admitted to the Collège de France, a membership he retained until his death. He also became active in several left-wing groups involved in campaigns against racism and other violations of human rights, focusing on struggles such as penal reform. Foucault later published Discipline and Punish (1975) and The History of Sexuality (1976), in which he developed archaeological and genealogical methods that emphasized the role that power plays in society.
Foucault died in Paris from complications of HIV/AIDS; he became the first public figure in France to die from complications of the disease, and his charisma and career influence changed mass awareness of the pandemic. His death influenced HIV/AIDS activism: his partner, Daniel Defert, founded the AIDES charity in his memory, and it continues to campaign as of 2024, following Defert's own death in 2023.
Early life
Early years: 1926–1938
Paul-Michel Foucault was born on 15 October 1926 in the city of Poitiers, west-central France, as the second of three children in a prosperous, socially conservative, upper-middle-class family. Family tradition prescribed naming him after his father, Paul Foucault (1893–1959), but his mother insisted on the addition of Michel; referred to as Paul at school, he expressed a preference for "Michel" throughout his life.
His father, a successful local surgeon born in Fontainebleau, moved to Poitiers, where he set up his own practice. He married Anne Malapert, the daughter of prosperous surgeon Dr. Prosper Malapert, who owned a private practice and taught anatomy at the University of Poitiers' School of Medicine. Paul Foucault eventually took over his father-in-law's medical practice, while Anne took charge of their large mid-19th-century house, Le Piroir, in the village of Vendeuvre-du-Poitou. Together the couple had three children—a girl named Francine and two boys, Paul-Michel and Denys—who all shared the same fair hair and bright blue eyes. The children were raised to be nominal Catholics, attending mass at the Church of Saint-Porchair, and while Michel briefly became an altar boy, none of the family was devout. Michel is not related to the physicist Léon Foucault.
In later life, Foucault revealed very little about his childhood. Describing himself as a "juvenile delinquent", he said his father was a "bully" who sternly punished him. In 1930, two years early, Foucault began his schooling at the local Lycée Henry-IV. There he undertook two years of elementary education before entering the main lycée, where he stayed until 1936. Afterwards, he took his first four years of secondary education at the same establishment, excelling in French, Greek, Latin, and history, though doing poorly at mathematics, including arithmetic.
Teens to young adulthood: 1939–1945
In 1939, the Second World War began, followed by Nazi Germany's occupation of France in 1940. Foucault's parents opposed the occupation and the Vichy regime, but did not join the Resistance. That year, Foucault's mother enrolled him in the Collège Saint-Stanislas, a strict Catholic institution run by the Jesuits. Although he later described his years there as an "ordeal", Foucault excelled academically, particularly in philosophy, history, and literature. In 1942 he entered his final year, the terminale, where he focused on the study of philosophy, earning his baccalauréat in 1943.
Returning to the local Lycée Henry-IV, he studied history and philosophy for a year, aided by a personal tutor, the philosopher Louis Girard. Rejecting his father's wishes that he become a surgeon, in 1945 Foucault went to Paris, where he enrolled in one of the country's most prestigious secondary schools, which was also known as the Lycée Henri-IV. Here he studied under the philosopher Jean Hyppolite, an existentialist and expert on the work of 19th-century German philosopher Georg Wilhelm Friedrich Hegel. Hyppolite had devoted himself to uniting existentialist theories with the dialectical theories of Hegel and Karl Marx. These ideas influenced Foucault, who adopted Hyppolite's conviction that philosophy must develop through a study of history.
University studies: 1946–1951
In autumn 1946, attaining excellent results, Foucault was admitted to the élite École Normale Supérieure (ENS), for which he undertook exams and an oral interrogation by Georges Canguilhem and Pierre-Maxime Schuhl to gain entry. Of the hundred students entering the ENS, Foucault ranked fourth based on his entry results, and encountered the highly competitive nature of the institution. Like most of his classmates, he lived in the school's communal dormitories on the Parisian Rue d'Ulm.
He remained largely unpopular, spending much time alone, reading voraciously. His fellow students noted his love of violence and the macabre; he decorated his bedroom with images of torture and war drawn during the Napoleonic Wars by Spanish artist Francisco Goya, and on one occasion chased a classmate with a dagger. Prone to self-harm, in 1948 Foucault allegedly attempted suicide; his father sent him to see the psychiatrist Jean Delay at the Sainte-Anne Hospital Center. Obsessed with the idea of self-mutilation and suicide, Foucault attempted the latter several times in ensuing years, praising suicide in later writings. The ENS's doctor examined Foucault's state of mind, suggesting that his suicidal tendencies emerged from the distress surrounding his homosexuality, because same-sex sexual activity was socially taboo in France. At the time, Foucault engaged in homosexual activity with men whom he encountered in the underground Parisian gay scene, also indulging in drug use; according to biographer James Miller, he enjoyed the thrill and sense of danger that these activities offered him.
Although studying various subjects, Foucault soon gravitated towards philosophy, reading not only Hegel and Marx but also Immanuel Kant, Edmund Husserl and, most significantly, Martin Heidegger. He began reading the publications of philosopher Gaston Bachelard, taking a particular interest in his work exploring the history of science. He graduated from the ENS with a B.A. (licence) in Philosophy in 1948 and a DES (diplôme d'études supérieures, roughly equivalent to an M.A.) in Philosophy in 1949. His DES thesis, written under the direction of Hyppolite, was titled La Constitution d'un transcendantal historique dans La Phénoménologie de l'esprit de Hegel (The Constitution of a Historical Transcendental in Hegel's Phenomenology of Spirit).
In 1948, the philosopher Louis Althusser became a tutor at the ENS. A Marxist, he influenced both Foucault and a number of other students, encouraging them to join the French Communist Party. Foucault did so in 1950, but never became particularly active in its activities, and never adopted an orthodox Marxist viewpoint, rejecting core Marxist tenets such as class struggle. He soon became dissatisfied with the bigotry that he experienced within the party's ranks; he personally faced homophobia and was appalled by the anti-semitism exhibited during the 1952–53 "Doctors' plot" in the Soviet Union. He left the Communist Party in 1953, but remained Althusser's friend and defender for the rest of his life. Although failing at the first attempt in 1950, he passed his agrégation in philosophy on the second try, in 1951. Excused from national service on medical grounds, he decided to start a doctorate at the Fondation Thiers in 1951, focusing on the philosophy of psychology, but he relinquished it after only one year in 1952.
Foucault was also interested in psychology and he attended Daniel Lagache's lectures at the University of Paris, where he obtained a B.A. (licence) in psychology in 1949 and a Diploma in Psychopathology (Diplôme de psychopathologie) from the university's institute of psychology in June 1952.
Early career (1951–1960)
France: 1951–1955
Over the following few years, Foucault embarked on a variety of research and teaching jobs. From 1951 to 1955, he worked as a psychology instructor at the ENS at Althusser's invitation. In Paris, he shared a flat with his brother, who was training to become a surgeon, but for three days in the week commuted to the northern town of Lille, teaching psychology at the Université de Lille from 1953 to 1954. Many of his students liked his lecturing style. Meanwhile, he continued working on his thesis, visiting the Bibliothèque Nationale every day to read the work of psychologists such as Ivan Pavlov, Jean Piaget and Karl Jaspers. Undertaking research at the psychiatric institute of the Sainte-Anne Hospital, he became an unofficial intern, studying the relationship between doctor and patient and aiding experiments in the electroencephalographic laboratory. Foucault adopted many of the theories of the psychoanalyst Sigmund Freud, undertaking psychoanalytical interpretation of his dreams and making friends undergo Rorschach tests.
Embracing the Parisian avant-garde, Foucault entered into a romantic relationship with the serialist composer Jean Barraqué. Together, they tried to produce their greatest work, heavily used recreational drugs and engaged in sado-masochistic sexual activity. In August 1953, Foucault and Barraqué holidayed in Italy, where the philosopher immersed himself in Untimely Meditations (1873–1876), a set of four essays by the philosopher Friedrich Nietzsche. Later describing Nietzsche's work as "a revelation", he felt that reading the book deeply affected him, being a watershed moment in his life. Foucault subsequently experienced another groundbreaking self-revelation when watching a Parisian performance of Samuel Beckett's new play, Waiting for Godot, in 1953.
Interested in literature, Foucault was an avid reader of the philosopher Maurice Blanchot's book reviews published in Nouvelle Revue Française. Enamoured of Blanchot's literary style and critical theories, in later works he adopted Blanchot's technique of "interviewing" himself. Foucault also came across Hermann Broch's 1945 novel The Death of Virgil, a work that obsessed both him and Barraqué. While the latter attempted to convert the work into an epic opera, Foucault admired Broch's text for its portrayal of death as an affirmation of life. The couple took a mutual interest in the work of such authors as the Marquis de Sade, Fyodor Dostoyevsky, Franz Kafka and Jean Genet, all of whose works explored the themes of sex and violence.
Interested in the work of Swiss psychologist Ludwig Binswanger, Foucault aided family friend Jacqueline Verdeaux in translating his works into French. Foucault was particularly interested in Binswanger's studies of Ellen West who, like himself, had a deep obsession with suicide, eventually killing herself. In 1954, Foucault authored an introduction to Binswanger's paper "Dream and Existence", in which he argued that dreams constituted "the birth of the world" or "the heart laid bare", expressing the mind's deepest desires. That same year, Foucault published his first book, Maladie mentale et personalité (Mental Illness and Personality), in which he exhibited his influence from both Marxist and Heideggerian thought, covering a wide range of subject matter from the reflex psychology of Pavlov to the classic psychoanalysis of Freud. Referencing the work of sociologists and anthropologists such as Émile Durkheim and Margaret Mead, he presented his theory that illness was culturally relative. Biographer James Miller noted that while the book exhibited "erudition and evident intelligence", it lacked the "kind of fire and flair" which Foucault exhibited in subsequent works. It was largely critically ignored, receiving only one review at the time. Foucault grew to despise it, unsuccessfully attempting to prevent its republication and translation into English.
Sweden, Poland, and West Germany: 1955–1960
Foucault spent the next five years abroad, first in Sweden, working as cultural diplomat at the University of Uppsala, a job obtained through his acquaintance with historian of religion Georges Dumézil. At Uppsala he was appointed a Reader in French language and literature, while simultaneously working as director of the Maison de France, thus opening the possibility of a cultural-diplomatic career. Although finding it difficult to adjust to the "Nordic gloom" and long winters, he developed close friendships with two Frenchmen, biochemist Jean-François Miquel and physicist Jacques Papet-Lépine, and entered into romantic and sexual relationships with various men. In Uppsala he became known for his heavy alcohol consumption and reckless driving in his new Jaguar car. In spring 1956 Barraqué broke from his relationship with Foucault, announcing that he wanted to leave the "vertigo of madness". In Uppsala, Foucault spent much of his spare time in the university's Carolina Rediviva library, making use of their Bibliotheca Walleriana collection of texts on the history of medicine for his ongoing research. Finishing his doctoral thesis, Foucault hoped that Uppsala University would accept it, but Sten Lindroth, a positivistic historian of science there, remained unimpressed, asserting that it was full of speculative generalisations and was a poor work of history; he refused to allow Foucault to be awarded a doctorate at Uppsala. In part because of this rejection, Foucault left Sweden. Later, Foucault admitted that the work was a flawed first draft.
Again at Dumézil's behest, in October 1958 Foucault arrived in the capital of the Polish People's Republic, Warsaw, and took charge of the University of Warsaw's Centre Français. Foucault found life in Poland difficult due to the lack of material goods and services following the destruction of the Second World War. Witnessing the aftermath of the Polish October of 1956, when students had protested against the governing communist Polish United Workers' Party, he felt that most Poles despised their government as a puppet regime of the Soviet Union, and thought that the system ran "badly". Considering the university a liberal enclave, he traveled the country giving lectures; proving popular, he adopted the position of de facto cultural attaché. Like France and Sweden, Poland legally tolerated but socially frowned on homosexual activity, and Foucault undertook relationships with a number of men; one was with a Polish security agent who hoped to trap Foucault in an embarrassing situation, which therefore would reflect badly on the French embassy. Embroiled in the diplomatic scandal, he was ordered to leave Poland for a new destination. Various positions were available in West Germany, and so Foucault relocated to the Institut français in Hamburg (where he served as director in 1958–1960), teaching the same courses he had given in Uppsala and Warsaw. Spending much time in the Reeperbahn red-light district, he entered into a relationship with a transvestite.
Growing career (1960–1970)
Madness and Civilization: 1960
In West Germany, Foucault completed his primary thesis (thèse principale) for his State doctorate in 1960, titled Folie et déraison: Histoire de la folie à l'âge classique (trans. "Madness and Unreason: History of Madness in the Classical Age"), a philosophical work based upon his studies into the history of medicine. The book discussed how West European society had dealt with madness, arguing that it was a social construct distinct from mental illness. Foucault traces the evolution of the concept of madness through three phases: the Renaissance, the later 17th and 18th centuries, and the modern experience. The work alludes to the work of French poet and playwright Antonin Artaud, who exerted a strong influence over Foucault's thought at the time.
Histoire de la folie was an expansive work, consisting of 943 pages of text, followed by appendices and a bibliography. Foucault submitted it at the University of Paris, although the university's regulations for awarding a State doctorate required the submission of both his main thesis and a shorter complementary thesis. Obtaining a doctorate in France at that time was a multi-step process. The first step was to obtain a rapporteur, or "sponsor" for the work: Foucault chose Georges Canguilhem. The second was to find a publisher, and as a result Folie et déraison was published in French in May 1961 by the company Plon, which Foucault chose over Presses Universitaires de France after being rejected by Gallimard. In 1964, a heavily abridged version was published as a mass market paperback, then translated into English for publication the following year as Madness and Civilization: A History of Insanity in the Age of Reason.
Folie et déraison received a mixed reception in France and in foreign journals focusing on French affairs. Although it was critically acclaimed by Maurice Blanchot, Michel Serres, Roland Barthes, Gaston Bachelard, and Fernand Braudel, it was largely ignored by the leftist press, much to Foucault's disappointment. It was notably criticised by the young philosopher Jacques Derrida, in a March 1963 lecture at the University of Paris, for advocating metaphysics. Responding with a vicious retort, Foucault criticised Derrida's interpretation of René Descartes. The two remained bitter rivals until reconciling in 1981. In the English-speaking world, the work became a significant influence on the anti-psychiatry movement during the 1960s; Foucault took a mixed approach to this, associating with a number of anti-psychiatrists but arguing that most of them misunderstood his work.
Foucault's secondary thesis (thèse complémentaire), written in Hamburg between 1959 and 1960, was a translation and commentary on German philosopher Immanuel Kant's Anthropology from a Pragmatic Point of View (1798); the thesis was titled Introduction à l'Anthropologie. Largely consisting of Foucault's discussion of textual dating—an "archaeology of the Kantian text"—he rounded off the thesis with an evocation of Nietzsche, his biggest philosophical influence. This work's rapporteur was Foucault's old tutor and then-director of the ENS, Hyppolite, who was well acquainted with German philosophy. After both theses were championed and reviewed, he underwent his public defense of his doctoral thesis (soutenance de thèse) on 20 May 1961. The academics responsible for reviewing his work were concerned about the unconventional nature of his major thesis; reviewer Henri Gouhier noted that it was not a conventional work of history, making sweeping generalisations without sufficient particular argument, and that Foucault clearly "thinks in allegories". They all agreed, however, that the overall project was of merit, awarding Foucault his doctorate "despite reservations".
University of Clermont-Ferrand, The Birth of the Clinic, and The Order of Things: 1960–1966
In October 1960, Foucault took a tenured post in philosophy at the University of Clermont-Ferrand, commuting to the city every week from Paris, where he lived in a high-rise block on the rue du Dr Finlay. Responsible for teaching psychology, which was subsumed within the philosophy department, he was considered a "fascinating" but "rather traditional" teacher at Clermont. The department was run by Jules Vuillemin, who soon developed a friendship with Foucault. Foucault then took Vuillemin's job when the latter was elected to the Collège de France in 1962. In this position, Foucault took a dislike to another staff member whom he considered stupid: Roger Garaudy, a senior figure in the Communist Party. Foucault made life at the university difficult for Garaudy, leading the latter to transfer to Poitiers. Foucault also caused controversy by securing a university job for his lover, the philosopher Daniel Defert, with whom he retained a non-monogamous relationship for the rest of his life.
Foucault maintained a keen interest in literature, publishing reviews in literary journals, including Tel Quel and Nouvelle Revue Française, and sitting on the editorial board of Critique. In May 1963, he published a book devoted to poet, novelist, and playwright Raymond Roussel. It was written in under two months, published by Gallimard, and was described by biographer David Macey as "a very personal book" that resulted from a "love affair" with Roussel's work. It was published in English in 1983 as Death and the Labyrinth: The World of Raymond Roussel. Receiving few reviews, it was largely ignored. That same year he published a sequel to Folie et déraison, titled Naissance de la Clinique, subsequently translated as The Birth of the Clinic: An Archaeology of Medical Perception. Shorter than its predecessor, it focused on the changes that the medical establishment underwent in the late 18th and early 19th centuries. Like his preceding work, Naissance de la Clinique was largely critically ignored, but later gained a cult following. It was of interest within the field of medical ethics, as it considered the ways in which the history of medicine and hospitals, and the training that those working within them receive, bring about a particular way of looking at the body: the 'medical gaze'. Foucault was also selected to be among the "Eighteen Man Commission" that assembled between November 1963 and March 1964 to discuss university reforms that were to be implemented by Christian Fouchet, the Gaullist Minister of National Education. Implemented in 1967, they brought staff strikes and student protests.
In April 1966, Gallimard published Foucault's Les Mots et les choses (Words and Things), later translated as The Order of Things: An Archaeology of the Human Sciences. Exploring how man came to be an object of knowledge, it argued that all periods of history have possessed certain underlying conditions of truth that constituted what was acceptable as scientific discourse. Foucault argues that these conditions of discourse have changed over time, from one period's épistémè to another. Although designed for a specialist audience, the work gained media attention, becoming a surprise bestseller in France. Appearing at the height of interest in structuralism, Foucault was quickly grouped with scholars such as Jacques Lacan, Claude Lévi-Strauss, and Roland Barthes, as the latest wave of thinkers set to topple the existentialism popularized by Jean-Paul Sartre. Although initially accepting this description, Foucault soon vehemently rejected it, because he "never posited a universal theory of discourse, but rather sought to describe the historical forms taken by discursive practices". Foucault and Sartre regularly criticised one another in the press. Both Sartre and Simone de Beauvoir attacked Foucault's ideas as "bourgeois", while Foucault retaliated against their Marxist beliefs by proclaiming that "Marxism exists in nineteenth-century thought as a fish exists in water; that is, it ceases to breathe anywhere else."
University of Tunis and Vincennes: 1966–1970
In September 1966, Foucault took a position teaching psychology at the University of Tunis in Tunisia. His decision to do so was largely because his lover, Defert, had been posted to the country as part of his national service. Foucault moved a few kilometres from Tunis, to the village of Sidi Bou Saïd, where fellow academic Gérard Deledalle lived with his wife. Soon after his arrival, Foucault announced that Tunisia was "blessed by history", a nation which "deserves to live forever because it was where Hannibal and St. Augustine lived". His lectures at the university proved very popular, and were well attended. Although many young students were enthusiastic about his teaching, they were critical of what they believed to be his right-wing political views, viewing him as a "representative of Gaullist technocracy", even though he considered himself a leftist.
Foucault was in Tunis during the anti-government and pro-Palestinian riots that rocked the city in June 1967, and which continued for a year. Although highly critical of the violent, ultra-nationalistic and anti-semitic nature of many protesters, he used his status to try to prevent some of his militant leftist students from being arrested and tortured for their role in the agitation. He hid their printing press in his garden, and tried to testify on their behalf at their trials, but was prevented when the trials became closed-door events. While in Tunis, Foucault continued to write. Inspired by a correspondence with the surrealist artist René Magritte, Foucault started to write a book about the impressionist artist Édouard Manet, but never completed it.
In 1968, Foucault returned to Paris, moving into an apartment on the Rue de Vaugirard. After the May 1968 student protests, Minister of Education Edgar Faure responded by founding new universities with greater autonomy. Most prominent of these was the Centre Expérimental de Vincennes in Vincennes on the outskirts of Paris. A group of prominent academics were asked to select teachers to run the centre's departments, and Canguilhem recommended Foucault as head of the Philosophy Department. Becoming a tenured professor at Vincennes, Foucault desired to obtain "the best in French philosophy today" for his department, employing Michel Serres, Judith Miller, Alain Badiou, Jacques Rancière, François Regnault, Henri Weber, Étienne Balibar, and François Châtelet; most of them were Marxists or ultra-left activists.
Lectures began at the university in January 1969, and straight away its students and staff, including Foucault, were involved in occupations and clashes with police, resulting in arrests. In February, Foucault gave a speech denouncing police provocation to protesters at the Maison de la Mutualité. Such actions marked Foucault's embrace of the ultra-left, undoubtedly influenced by Defert, who had gained a job at Vincennes' sociology department and who had become a Maoist. Most of the courses at Foucault's philosophy department were Marxist–Leninist oriented, although Foucault himself gave courses on Nietzsche, "The end of Metaphysics", and "The Discourse of Sexuality", which were highly popular and over-subscribed. While the right-wing press was heavily critical of this new institution, new Minister of Education Olivier Guichard was angered by its ideological bent and the lack of exams, with students being awarded degrees in a haphazard manner. He refused national accreditation of the department's degrees, resulting in a public rebuttal from Foucault.
Later life (1970–1984)
Collège de France and Discipline and Punish: 1970–1975
Foucault desired to leave Vincennes and become a fellow of the prestigious Collège de France. He requested to join, taking up a chair in what he called the "history of systems of thought", and his request was championed by members Dumézil, Hyppolite, and Vuillemin. In November 1969, when an opening became available, Foucault was elected to the Collège, though with opposition from a large minority. He gave his inaugural lecture in December 1970, which was subsequently published as L'Ordre du discours (The Discourse of Language). He was obliged to give 12 weekly lectures a year—and did so for the rest of his life—covering the topics that he was researching at the time; these became "one of the events of Parisian intellectual life" and were invariably packed out. On Mondays, he also gave seminars to a group of students; many of them became a "Foucauldian tribe" who worked with him on his research. He enjoyed this teamwork and collective research, and together they published a number of short books. Working at the Collège allowed him to travel widely, giving lectures in Brazil, Japan, Canada, and the United States over the next 14 years. In 1970 and 1972, Foucault served as a professor in the French Department of the University at Buffalo in Buffalo, New York.
In May 1971, Foucault co-founded the Groupe d'Information sur les Prisons (GIP) along with historian Pierre Vidal-Naquet and journalist Jean-Marie Domenach. The GIP aimed to investigate and expose poor conditions in prisons and give prisoners and ex-prisoners a voice in French society. It was highly critical of the penal system, believing that it converted petty criminals into hardened delinquents. The GIP gave press conferences and staged protests surrounding the events of the Toul prison riot in December 1971, alongside other prison riots that it sparked off; in doing so it faced a police crackdown and repeated arrests. The group became active across France, with 2,000 to 3,000 members, but disbanded before 1974. Also campaigning against the death penalty, Foucault co-authored a short book on the case of the convicted murderer Pierre Rivière. After his research into the penal system, Foucault published Surveiller et punir: Naissance de la prison (Discipline and Punish: The Birth of the Prison) in 1975, offering a history of the system in western Europe. In it, Foucault examines the penal evolution away from corporal and capital punishment to the penitentiary system that began in Europe and the United States around the end of the 18th century. Biographer Didier Eribon described it as "perhaps the finest" of Foucault's works, and it was well received.
Foucault was also active in anti-racist campaigns; in November 1971, he was a leading figure in protests following the perceived racist killing of Arab migrant Djellali Ben Ali. In this he worked alongside his old rival Sartre, the journalist Claude Mauriac, and one of his literary heroes, Jean Genet. This campaign was formalised as the Committee for the Defence of the Rights of Immigrants, but there was tension at their meetings as Foucault opposed the anti-Israeli sentiment of many Arab workers and Maoist activists. At a December 1972 protest against the police killing of Algerian worker Mohammad Diab, both Foucault and Genet were arrested, resulting in widespread publicity. Foucault was also involved in founding the Agence de Presse Libération (APL), a group of leftist journalists who intended to cover news stories neglected by the mainstream press. In 1973, they established the daily newspaper Libération, and Foucault suggested that they establish committees across France to collect news and distribute the paper, and advocated a column known as the "Chronicle of the Workers' Memory" to allow workers to express their opinions. Foucault wanted an active journalistic role in the paper, but this proved untenable, and he soon became disillusioned with Libération, believing that it distorted the facts; he did not publish in it until 1980.
In 1975 he had an LSD experience with Simeon Wade and Michael Stoneman in Death Valley, California, and later described it as the greatest experience of his life, one that profoundly changed his life and his work. In front of Zabriskie Point they took LSD while listening to a well-prepared music program: Richard Strauss's Four Last Songs, followed by Charles Ives's Three Places in New England, ending with a few avant-garde pieces by Stockhausen. According to Wade, as soon as he came back to Paris, Foucault scrapped the manuscript for the second volume of The History of Sexuality and totally rethought the whole project.
The History of Sexuality and Iranian Revolution: 1976–1979
In 1976, Gallimard published Foucault's Histoire de la sexualité: la volonté de savoir (The History of Sexuality: The Will to Knowledge), a short book exploring what Foucault called the "repressive hypothesis". It revolved largely around the concept of power, rejecting both Marxist and Freudian theory. Foucault intended it as the first in a seven-volume exploration of the subject. Histoire de la sexualité was a best-seller in France and gained positive press, but lukewarm intellectual interest, something that upset Foucault, who felt that many misunderstood his hypothesis. He soon became dissatisfied with Gallimard after being offended by senior staff member Pierre Nora. Along with Paul Veyne and François Wahl, Foucault launched a new series of academic books, known as Des travaux (Some Works), through the company Seuil, which he hoped would improve the state of academic research in France. He also produced introductions for the memoirs of Herculine Barbin and My Secret Life.
Foucault's Histoire de la sexualité concentrates on the relation between truth and sex. He defines truth as a system of ordered procedures for the production, distribution, regulation, circulation, and operation of statements. Through this system of truth, power structures are created and enforced. Though Foucault's definition of truth may differ from those of other sociologists before and after him, his work on truth in relation to power structures, such as sexuality, has left a profound mark on social science theory. In his work, he examines the heightened curiosity regarding sexuality that induced a "world of perversion" in the elite, capitalist western world of the 18th and 19th centuries. According to Foucault in The History of Sexuality, society of the modern age is symbolized by the conception of sexual discourses and their union with the system of truth. In the "world of perversion", including extramarital affairs, homosexual behavior, and other such sexual promiscuities, Foucault concludes that sexual relations of the kind are constructed around producing the truth. Sex became not only a means of pleasure, but an issue of truth. Sex is what confines one to darkness, but also what brings one to light.
Similarly, in The History of Sexuality, society validates and approves people based on how closely they fit the discursive mold of sexual truth. As Foucault reminds us, in the 18th and 19th centuries the Church was the epitome of the power structure within society. Thus, many aligned their personal virtues with those of the Church, further internalizing their beliefs on the meaning of sex. However, those who unify their sexual relation to the truth become decreasingly obliged to share their internal views with those of the Church. They will no longer see the arrangement of societal norms as an effect of the Church's deep-seated power structure.
Foucault remained a political activist, focusing on protesting government abuses of human rights around the world. He was a key player in the 1975 protests against the Spanish government, which was set to execute 11 militants sentenced to death without fair trial. It was his idea to travel to Madrid with six others to give a press conference there; they were subsequently arrested and deported back to Paris. In 1977, he protested the extradition of Klaus Croissant to West Germany, and his rib was fractured during clashes with riot police. In July that year, he organised an assembly of Eastern Bloc dissidents to mark the visit of Soviet general secretary Leonid Brezhnev to Paris. In 1979, he campaigned for Vietnamese political dissidents to be granted asylum in France.
In 1977, the Italian newspaper Corriere della sera asked Foucault to write a column for it. In doing so, in 1978 he travelled to Tehran in Iran, days after the Black Friday massacre. Documenting the developing Iranian Revolution, he met with opposition leaders such as Mohammad Kazem Shariatmadari and Mehdi Bazargan, and discovered the popular support for Islamism. Returning to France, he was one of the journalists who visited the Ayatollah Khomeini, before returning to Tehran. His articles expressed awe of Khomeini's Islamist movement, for which he was widely criticised in the French press, including by Iranian expatriates. Foucault's response was that Islamism was to become a major political force in the region, and that the West must treat it with respect rather than hostility. In April 1978, Foucault traveled to Japan, where he studied Zen Buddhism under Omori Sogen at the Seionji temple in Uenohara.
Final years: 1980–1984
Although remaining critical of power relations, Foucault expressed cautious support for the Socialist Party government of François Mitterrand following its electoral victory in 1981. But his support soon deteriorated when that party refused to condemn the Polish government's crackdown on the 1982 demonstrations in Poland orchestrated by the Solidarity trade union. He and sociologist Pierre Bourdieu authored a document condemning Mitterrand's inaction that was published in Libération, and they also took part in large public protests on the issue. Foucault continued to support Solidarity, and with his friend Simone Signoret traveled to Poland as part of a Médecins du Monde expedition, taking time out to visit the Auschwitz concentration camp. He continued his academic research, and in June 1984 Gallimard published the second and third volumes of Histoire de la sexualité. Volume two, L'Usage des plaisirs, dealt with the "techniques of self" prescribed by ancient Greek pagan morality in relation to sexual ethics, while volume three, Le Souci de soi, explored the same theme in the Greek and Latin texts of the first two centuries CE. A fourth volume, Les Aveux de la chair, was to examine sexuality in early Christianity, but it was not finished.
In October 1980, Foucault became a visiting professor at the University of California, Berkeley, giving the Howison Lectures on "Truth and Subjectivity", while in November he lectured at the Humanities Institute at New York University. His growing popularity in American intellectual circles was noted by Time magazine, while Foucault went on to lecture at University of California, Los Angeles in 1981, the University of Vermont in 1982, and Berkeley again in 1983, where his lectures drew huge crowds. Foucault spent many evenings in the San Francisco gay scene, frequenting sado-masochistic bathhouses, engaging in unprotected sex. He praised sado-masochistic activity in interviews with the gay press, describing it as "the real creation of new possibilities of pleasure, which people had no idea about previously". Foucault contracted HIV and eventually developed AIDS. Little was known of the virus at the time; the first cases had only been identified in 1980. Foucault initially referred to AIDS as a "dreamed-up disease". In summer 1983, he developed a persistent dry cough, which concerned friends in Paris, but Foucault insisted it was just a pulmonary infection. Only when hospitalized was Foucault correctly diagnosed as being HIV-positive; treated with antibiotics, he delivered a final set of lectures at the Collège de France. Foucault entered Paris' Hôpital de la Salpêtrière—the same institution that he had studied in Madness and Civilization—on 10 June 1984, with neurological symptoms complicated by sepsis. He died in the hospital on 25 June.
Death
On 26 June 1984, Libération announced Foucault's death, mentioning the rumour that it had been brought on by AIDS. The following day, Le Monde issued a medical bulletin cleared by his family that made no reference to HIV/AIDS. On 29 June, Foucault's la levée du corps ceremony was held, in which the coffin was carried from the hospital morgue. Hundreds attended, including activists and academic friends, while Gilles Deleuze gave a speech using excerpts from The History of Sexuality. His body was then buried at Vendeuvre-du-Poitou in a small ceremony. Soon after his death, Foucault's partner Daniel Defert founded the first national HIV/AIDS organisation in France, AIDES, a play on the French word for "help" (aide) and the English-language acronym for the disease. On the second anniversary of Foucault's death, Defert publicly revealed in The Advocate that Foucault's death was AIDS-related.
Personal life
Foucault's first biographer, Didier Eribon, described the philosopher as "a complex, many-sided character", observing that "under one mask there is always another". He also noted that Foucault exhibited an "enormous capacity for work". At the ENS, Foucault's classmates unanimously summed him up as a figure who was both "disconcerting and strange" and "a passionate worker". As he aged, his personality changed: Eribon noted that while he was a "tortured adolescent", post-1960 he had become "a radiant man, relaxed and cheerful", even being described by those who worked with him as a dandy. He noted that in 1969, Foucault embodied the idea of "the militant intellectual".
Foucault was an atheist. He loved classical music, particularly enjoying the work of Johann Sebastian Bach and Wolfgang Amadeus Mozart, and became known for wearing turtleneck sweaters. After his death, Foucault's friend Georges Dumézil described him as having possessed "a profound kindness and goodness", also exhibiting an "intelligence [that] literally knew no bounds". His life-partner Daniel Defert inherited his estate, whose archive was sold in 2012 to the National Library of France for €3.8 million ($4.5 million in April 2021).
Politics
Politically, Foucault was a leftist throughout much of his life, though his particular stance within the left often changed. In the early 1950s, while never adopting an orthodox Marxist viewpoint, Foucault had been a member of the French Communist Party, leaving the party after three years as he expressed disgust at the prejudice within its ranks against Jews and homosexuals. After spending some time working in Poland, a Communist state ostensibly governed by the Polish United Workers' Party but in practice an abject police-state satellite of the Soviet Union, he became further disillusioned with communist ideology. As a result, in the early 1960s, Foucault was considered to be "violently anticommunist" by some of his detractors, even though he was involved in leftist campaigns along with most of his students and colleagues.
Philosophical work
Foucault's colleague Pierre Bourdieu summarized the philosopher's thought as "a long exploration of transgression, of going beyond social limits, always inseparably linked to knowledge and power".
Philosopher Philip Stokes of the University of Reading noted that overall, Foucault's work was "dark and pessimistic". It does, however, leave some room for optimism, in that it illustrates how the discipline of philosophy can be used to highlight areas of domination. In doing so, as Stokes claimed, the ways in which we are being dominated become better understood, so that we may strive to build social structures that minimise this risk of domination. In all of this development there had to be close attention to detail; it is the detail which eventually individualizes people.
Later in his life, Foucault explained that his work was less about analyzing power as a phenomenon than about trying to characterize the different ways in which contemporary society has expressed the use of power to "objectivise subjects". These have taken three broad forms: the first involves the use of scientific authority to classify and 'order' knowledge about human populations; the second categorizes and 'normalises' human subjects (by identifying madness, illness, physical features, and so on); and the third relates to the manner in which the impulse to fashion sexual identities and train one's own body to engage in routines and practices ends up reproducing certain patterns within a given society.
Literature
In addition to his philosophical work, Foucault also wrote on literature. Death and the Labyrinth: The World of Raymond Roussel, published in 1963 and translated into English in 1986, is Foucault's only book-length work on literature. He described it as "by far the book I wrote most easily, with the greatest pleasure, and most rapidly". Foucault explores theory, criticism, and psychology with reference to the texts of Raymond Roussel, one of the first notable experimental writers. Foucault also gave a lecture responding to Roland Barthes' famous essay "The Death of the Author" titled "What Is an Author?" in 1969, later published in full. According to literary theoretician Kornelije Kvas, for Foucault, "denying the existence of a historical author on account of his/her irrelevance for interpretation is absurd, for the author is a function of the text that organizes its sense".
Power
Foucault's analysis of power comes in two forms: empirical and theoretical. The empirical analyses concern themselves with historical (and modern) forms of power and how these emerged from previous forms of power. Foucault describes three types of power in his empirical analyses: sovereign power, disciplinary power, and biopower.
Foucault is generally critical of "theories" that try to give absolute answers to "everything". Therefore, he considered his own "theory" of power to be closer to a method than a typical "theory". According to Foucault, most people misunderstand power. For this reason, he makes clear that power cannot be completely described as:
A group of institutions and/or mechanisms whose aim it is for a citizen to obey and yield to the state (a typical liberal definition of power);
Yielding to rules (a typical psychoanalytical definition of power); or
A general and oppressing system where one societal class or group oppresses another (a typical feminist or Orthodox Marxist definition of power).
Foucault is not critical of considering these phenomena as "power", but claims that these theories of power cannot completely describe all forms of power. Foucault also claims that a liberal definition of power has effectively hidden other forms of power to the extent that people have uncritically accepted them.
Foucault's power analysis begins at the micro-level, with singular "force relations". Richard A. Lynch defines Foucault's concept of "force relation" as "whatever in one's social interactions that pushes, urges or compels one to do something". According to Foucault, force relations are an effect of difference, inequality or unbalance that exists in other forms of relationships (such as sexual or economic). Force, and power, is however not something that a person or group "holds" (such as in the sovereign definition of power); instead, power is a complex group of forces that comes from "everything" and therefore exists everywhere. That relations of power always result from inequality, difference or unbalance also means that power always has a goal or purpose. Power comes in two forms: tactics and strategies. Tactics is power at the micro-level, which can for example be how a person chooses to express themselves through their clothes. Strategies, on the other hand, is power at the macro-level, which can be the state of fashion at any moment. Strategies consist of a combination of tactics. At the same time, power is non-subjective according to Foucault. This poses a paradox, according to Lynch, since "someone" has to exert power, while at the same time there can be no "someone" exerting this power. According to Lynch, this paradox can be solved with two observations:
By looking at power as something which reaches further than the influence of single people or groups. Even if individuals and groups try to influence fashion, for example, their actions will often have unexpected consequences.
Even if individuals and groups have a free choice, they are also affected and limited by their context/situation.
According to Foucault, force relations are constantly changing, constantly interacting with other force relations which may weaken, strengthen or change one another. Foucault writes that power always includes resistance, which means there is always a possibility that power and force relations will change in some way. According to Richard A. Lynch, the purpose of Foucault's work on power is to increase people's awareness of how power has shaped their ways of being, thinking and acting, and, by increasing this awareness, to make it possible for them to change these ways of being, thinking and acting.
Sovereign power
With "sovereign power" Foucault alludes to a power structure that is similar to a pyramid, where one person or a group of people (at the top of the pyramid) holds the power, while the "normal" (and oppressed) people are at the bottom of the pyramid. In the middle parts of the pyramid are the people who enforce the sovereign's orders. A typical example of sovereign power is absolute monarchy.
In historical absolute monarchies, crimes had been considered a personal offense against the sovereign and his or her power. The punishment was often public and spectacular, partly to deter others from committing crimes, but also to reinstate the sovereign's power. This was, however, both expensive and ineffective—it far too often led people to sympathize with the criminal. In modern times, when disciplinary power is dominant, criminals are instead subjected to various disciplinary techniques to "remold" the criminal into a "law abiding citizen".
According to Chloë Taylor, a characteristic of sovereign power is that the sovereign has the right to take life, wealth, services, labor and products. The sovereign has a right to subtract—to take life, to enslave life, etc.—but not the right to control life in the way that later happens in disciplinary systems of power. According to Taylor, the form of power that the philosopher Thomas Hobbes is concerned about is sovereign power. According to Hobbes, people are "free" so long as they are not literally placed in chains.
Disciplinary power
What Foucault calls "disciplinary power" aims to use bodies' skills as effectively as possible. The more useful the body becomes, the more obedient it also has to become. The purpose of this is not only to use the bodies' skills, but also to prevent these skills from being used to revolt against the power.
Disciplinary power has "individuals" as its object, target and instrument. According to Foucault, the "individual" is, however, a construct created by disciplinary power. The disciplinary power's techniques create a "rational self-control", which in practice means that the disciplinary power is internalized and therefore does not continuously need external force. Foucault says that disciplinary power is primarily not an oppressing form of power, but rather a productive one. Disciplinary power does not oppress interests or desires, but instead subjects bodies to new patterns of behavior in order to reconstruct their thoughts, desires and interests. According to Foucault this happens in factories, schools, hospitals and prisons. Disciplinary power creates a certain type of individual by producing new movements, habits and skills. It focuses on details, single movements, their timing and speed. It organizes bodies in time and space, and controls every movement for maximal effect. It uses rules, surveillance, exams and controls. The activities follow certain plans, whose purpose is to lead the bodies toward certain pre-determined goals. The bodies are also combined with each other, to reach a productivity that is greater than the sum of the individual bodies' activities.
Disciplinary power has, according to Foucault, been especially successful due to its usage of three technologies: hierarchical observation, normalizing judgement and exams. Through hierarchical observation, the bodies become constantly visible to the power. The observation is hierarchical since there is not a single observer, but rather a "hierarchy" of observers; an example of this is the mental asylums of the 19th century, where not only the psychiatrist but also nurses and auxiliary staff acted as observers. From these observations and scientific discourses, a norm is established and used to judge the observed bodies. For the disciplinary power to continue to exist, this judgement has to be normalized. Foucault mentions several characteristics of this judgement: (1) all deviations, even small ones, from correct behavior are punished, (2) repeated rule violations are punished more severely, (3) exercises are used as a behavior-correcting technique and punishment, (4) rewards are used together with punishments to establish a hierarchy of good and bad behavior/people, (5) ranks, grades, etc. are used as punishment and reward. Exams combine hierarchical observation with judgement. Exams objectify and individualize the observed bodies by creating extensive documentation about every observed body. The purpose of the exams is therefore to gather further information about each individual, track their development and compare their results to the norm.
According to Foucault, the "formula" for disciplinary power can be seen in philosopher Jeremy Bentham's plan for the "optimal prison": the panopticon. Such a prison consists of a circular building in which every cell is inhabited by only one prisoner. Every cell has two windows—one to let in light from outside and one pointing toward the middle of the building, where a tower stands in which a guard can be placed to observe the prisoners. Since the prisoners can never know whether they are being watched at a given moment, they internalize the disciplinary power and regulate their own behavior (as if they were constantly being watched). Foucault says this construction (1) creates individuality by separating prisoners from each other in physical space, (2) leads the prisoners, who cannot know whether they are being watched at any given moment, to internalize the disciplinary power and regulate their own behavior as if they were always watched, and (3) makes it possible, through surveillance, to create extensive documentation about each prisoner and their behavior. According to Foucault, the panopticon has also been used as a model for other disciplinary institutions, such as the mental asylums of the 19th century.
Biopower
With "biopower" Foucault refers to power over bios (life)—power over populations. Biopower primarily rests on norms which are internalized by people, rather than external force. It encourages, strengthens, controls, observes, optimizes and organize the forces below it. Foucault has sometimes described biopower as separate from disciplinary power, but at other times he has described disciplinary power as an expression of biopower. Biopower can use disciplinary techniques, but in contrast to disciplinary power its target is populations rather than individuals.
Biopower studies populations regarding (for example) the number of births, life expectancy, public health, housing, migration, crime, and which social groups are over-represented in deviations from the norm (regarding health, crime, etc.), and tries to adjust, control or eliminate these norm deviations. One example is the age distribution in a population. Biopower is interested in age distribution in order to compensate for future (or current) shortages of labor power, retirement homes, etc. Yet another example is sex: because sex is connected to population growth, sex and sexuality have been of great interest to biopower. On a disciplinary level, people who engaged in non-reproductive sexual acts have been treated under psychiatric diagnoses such as "perversion", "frigidity" and "sexual dysfunction". On a biopower level, the usage of contraceptives has been studied, some social groups have (by various means) been encouraged to have children, while others (such as poor, sick, unmarried women, criminals or people with disabilities) have been discouraged or prevented from having children.
In the era of biopower, death has become a scandal and a catastrophe, but despite this, biopower has, according to Foucault, killed more people than any other form of power ever did before it. Under sovereign power, the sovereign king could kill people to exert his power or start wars simply to extend his kingdom, but during the era of biopower wars have instead been motivated by an ambition to "protect life itself". Similar motivations have also been used for genocide. For example, Nazi Germany justified its attempt to eradicate Jews, the mentally ill and the disabled on the grounds that Jews were "a threat to the German health", and that the money spent on healthcare for mentally ill and disabled people would be better spent on "viable Germans". Chloë Taylor also mentions that the Iraq War was motivated by similar tenets. The motivation was at first that Iraq was thought to have weapons of mass destruction and connections to Al-Qaeda. However, when the Bush and Blair administrations did not find any evidence to support either of these theories, the motivation for the war was changed. In the new motivation, the cause of the war was said to be that Saddam Hussein had committed crimes against his own population. Taylor argues that in modern times, war has to be "concealed" under a rhetoric of humanitarian aid, despite the fact that these wars often cause humanitarian crises.
During the 19th century, slums were increasing in number and size across the western world. Criminality, illness, alcoholism and prostitution were common in these areas, and the middle class considered the people who lived in these slums "immoral" and "lazy". The middle class also feared that this underclass would sooner or later "take over", because the population growth was greater in these slums than it was in the middle class. This fear gave rise to the scientific study of eugenics, whose founder Francis Galton had been inspired by Charles Darwin and his theory of natural selection. According to Galton, society was preventing natural selection by helping "the weak", thus causing a spread of the "negative qualities" into the rest of the population.
The body and sexuality
According to Foucault, the body is not something objective that stands outside history and culture. Instead, Foucault argues, the body has been and is continuously shaped by society and history—by work, diet, body ideals, exercise, medical interventions, etc. Foucault presents no "theory" of the body, but does write about it in Discipline and Punish as well as in The History of Sexuality. Foucault was critical of all purely biological explanations of phenomena such as sexuality, madness and criminality. Further, Foucault argues that the body is not sufficient as a basis for self-understanding and understanding of others.
In Discipline and Punish, Foucault shows how power and the body are tied together, for example by the disciplinary power primarily focusing on individual bodies and their behavior. Foucault argues that power, by manipulating bodies/behavior, also manipulates people's minds. Foucault inverts the Platonic saying "the body is the prison of the soul" (Phaedo, 66a–67d), instead positing that "the soul is the prison of the body".
According to Foucault, sexology has tried to establish itself as a "science" by referring to the material (the body). In contrast to this, Foucault argues that sexology is a pseudoscience, and that "sex" is a pseudo-scientific idea. For Foucault the idea of a natural, biologically grounded and fundamental sexuality is a normative historical construct that has also been used as an instrument of power. By describing sex as the biological and fundamental cause of people's gender identity, sexual identity and sexual behavior, power has effectively been able to normalize sexual and gendered behavior. This has made it possible to evaluate, pathologize and "correct" people's sexual and gendered behavior, by comparing bodies' behaviors to the constructed "normal" behavior. For Foucault, a "normal sexuality" is as much of a construct as a "natural sexuality". Therefore, Foucault was also critical of the popular discourse that dominated the debate over sexuality during the 1960s and 1970s. During this time, the popular discourse argued for a "liberation" of sexuality from a cultural, moral and capitalistic oppression. Foucault, however, argues that people's opinions about and experiences of sexuality are always a result of cultural and power mechanisms. To "liberate" sexuality from one group of norms only means that another group of norms takes its place. This, however, does not mean that Foucault considers resistance to be futile. What Foucault argues is rather that it is impossible to become completely free from power, and that there is simply no "natural" sexuality. Power always involves a dimension of resistance, and therefore also a possibility for change. Although Foucault considers it impossible to step outside of power-networks, it is always possible to change these networks or navigate them differently.
According to Foucault, the body is not only an "obedient and passive object" that is dominated by discourses and power. The body is also the "seed" of resistance against dominant discourses and power techniques. The body is never fully compliant, and experiences can never fully be reduced to linguistic descriptions. It is always possible to experience something that cannot be described in words, and in this discrepancy there is also a possibility for resistance against dominant discourses.
Foucault's view of the historical construction of the body has influenced many feminist and queer theorists. According to Johanna Oksala, Foucault's influence on queer theory has been so great that he can be considered one of its founders. The fundamental idea behind queer theory is that there is no natural foundation behind identities such as gay, lesbian, heterosexual, etc. Instead these identities are considered cultural constructions that have been constructed through normative discourses and relations of power. Feminists have used Foucault's ideas to study the different ways in which women shape their bodies: through plastic surgery, diet, eating disorders, etc. Foucault's historicization of sex has also affected feminist theorists such as Judith Butler, who used Foucault's theories about the relation between subject, power and sex to question gendered subjects. Butler follows Foucault by saying that there is no "true" gender behind gender identity that constitutes its biological and objective foundation. However, Butler is critical of Foucault, arguing that Foucault "naively" presents bodies and pleasures as a ground for resistance against power without extending his historicization of sexuality to gendered subjects/bodies. Foucault has received criticism from other feminists, such as Susan Bordo and Kate Soper.
Johanna Oksala argues that Foucault, by saying that sex/sexuality are constructed, does not deny the existence of sexuality. Oksala also argues that the goal of critical theories such as Foucault's is not to liberate the body and sexuality from oppression, but rather to question and deny the identities that are posited as "natural" and "essential" by showing how these identities are historical and cultural constructions.
In May 1977, Foucault signed a petition to the French parliament, calling for the lowering of the homosexual age of consent from 21 to 15 to match the heterosexual one. In a 1978 broadcast, Foucault argued that children could give sexual consent, saying that "to assume that a child is incapable of explaining what happened and was incapable of giving his consent are two abuses that are intolerable, quite unacceptable."
Subjectivity
Foucault considered his primary project to be the investigation of how people through history have been made into "subjects". Subjectivity, for Foucault, is not a state of being, but a practice—an active "being". According to Foucault, "the subject" has usually been considered by western philosophers as something given; natural and objective. On the contrary, Foucault considers subjectivity to be a construction created by power. Foucault speaks of "assujettissement", a French term referring to a process in which power creates subjects while also oppressing them through social norms. For Foucault, "social norms" are standards that people are encouraged to follow, which are also used to compare and define people. As an example of "assujettissement", Foucault mentions "homosexual", a historically contingent type of subjectivity created by sexology. Foucault writes that sodomy was previously considered a serious but temporary sexual deviation. Homosexuality, however, became a "species", a past, a childhood and a type of life. "Homosexuals" have been discriminated against by the same power that created this subjectivity, due to homosexuality being considered a deviation from the "normal" sexuality. However, Foucault argues, the creation of a subjectivity such as "homosexuality" does not have only negative consequences for the people who are subjectivised—the subjectivity of homosexuality has also led to the creation of gay bars and the pride parade.
According to Foucault, scientific discourses have played an important role in the disciplinary power system, by classifying and categorizing people, observing their behavior and "treating" them when their behavior has been considered "abnormal". He defines discourse as a form of oppression that does not require physical force. He identifies its production as "controlled, selected, organized and redistributed by a certain number of procedures", driven by individuals' aspiration for knowledge to create "rules" and "systems" that translate into social codes. Moreover, discourse creates a force that extends beyond societal institutions and can be found in social and formal fields such as health care systems, education and law enforcement. The formation of these fields may seem to contribute to social development; however, Foucault warns of discourse's harmful effects on society.
Sciences such as psychiatry, biology, medicine, economics, psychoanalysis, psychology, sociology, ethnology, pedagogy and criminology have all categorized behaviors as rational, irrational, normal, abnormal, human, inhuman, etc. By doing so, they have all created various types of subjectivity and norms, which are then internalized by people as "truths". People have then adapted their behavior to get closer to what these sciences have labeled as "normal". For example, Foucault claims that psychological observation/surveillance and psychological discourses have created a type of psychology-centered subjectivity, which has led to people considering unhappiness a fault in their psychology rather than in society. This has also, according to Foucault, been a way for society to resist criticism—criticism against society has been turned against the individual and their psychological health.
Self-constituting subjectivity
According to Foucault, subjectivity is not necessarily something that is forced upon people externally—it is also something that is established in a person's relation to themselves. This can, for example, happen when a person is trying to "find themselves" or "be themselves", something Edward McGushin describes as a typical modern activity. In this quest for the "true self", the self is established on two levels: as a passive object (the "true self" that is searched for) and as an active "searcher". The ancient Cynics and the 19th-century philosopher Friedrich Nietzsche posited that the "true self" can only be found by going through great hardship and/or danger. The ancient Stoics and the 17th-century philosopher René Descartes, however, argued that the "self" can be found by quiet and solitary introspection. Yet another example is Socrates, who argued that self-awareness can only be found by having debates with others, in which the debaters question each other's foundational views and opinions. Foucault, however, argued that "subjectivity" is a process, rather than a state of being. As such, Foucault argued that there is no "true self" to be found. Rather, the "self" is constituted/created in activities such as the ones employed to "find" the "self". In other words, exposing oneself to hardships and danger does not "reveal" the "true self", according to Foucault, but rather creates a particular type of self and subjectivity. However, according to Foucault, the "form" of the subject is in large part already constituted by power before these self-constituting practices are employed. Schools, workplaces, households, government institutions, entertainment media and the healthcare sector all, through disciplinary power, contribute to forming people into particular types of subjects.
Freedom
Todd May defines Foucault's concept of freedom as: that which we can do of ourselves within our specific historical context. A condition for this, according to Foucault, is that we are aware of our situation and of how it has been created/affected (and is still being affected) by power. According to May, two of the aspects of how power has shaped people's ways of being, thinking and acting are described in the books where Foucault deals with disciplinary power and the history of sexuality. However, May argues, there will always be aspects of people's formation that will be unknown to them, hence the constant necessity for the type of analyses that Foucault performed.
Foucault argues that the forces that have affected people can be changed; people always have the capacity to change the factors that limit their freedom. Freedom is thus not a state of being, but a practice—a way of being in relation to oneself, to others and to the world. According to Todd May, Foucault's concept of freedom also includes constructing histories like the ones Foucault wrote about the history of disciplinary power and sexuality—histories that investigate and describe the forces that have influenced people into becoming who they are. From the knowledge reached through such investigations, people can thereafter decide which forces they believe are acceptable and which they consider intolerable and in need of change. Freedom is for Foucault a type of "experimentation" with different "transformations". Since these experiments cannot be controlled completely, May argues, they may either lead to the reconstruction of intolerable power relations or to the creation of new ones. Thus, May argues, it is always necessary to continue with such experimentation and Foucauldian analyses.
Practice of critique
Foucault's "alternative" to the modern subjectivity is described by Cressida Heyes as "critique". For Foucault there are no "good" and "bad" forms of subjectivity, since they are all a result of power relations. In the same way, Foucault argues there are no "good" and "bad" norms. All norms and institutions are at the same time enabling as they are oppressing. Therefore, Foucault argues, it is always crucial to continue with the practice of "critique". Critique is for Foucault a practice that searches for the processes and events that led to our way of being—a questioning of who we "are" and how this "we" came to be. Such a "critical ontology of the present" shows that peoples' current "being" is in fact a historically contingent, unstable and changeable construction. Foucault emphasizes that since the current way of being is not a necessity, it is also possible to change it. Critique also includes investigating how and when people are being enabled and when they are being oppressed by the current norms and institutions, finding ways to reduce limitations on freedom, resist normalization and develop new and different way of relating to oneself and others. Foucault argues that it is impossible to go beyond power relations, but that it is always possible to navigate power relations in a different way.
Epimeleia heautou, "care for the self"
As an alternative to the modern "search" for the "true self", and as a part of "the work of freedom", Foucault discusses the ancient Greek term epimeleia heautou, "care for the self" (ἐπιμέλεια ἑαυτοῦ). According to Foucault, among the ancient Greek philosophers, self-awareness was not a goal in itself, but rather something that was sought after in order to "care for oneself". Care for the self consists of what Foucault calls "the art of living" or "technologies of the self". The goal of these techniques was, according to Foucault, to transform oneself into a more ethical person. As an example of this, Foucault mentions meditation, the Stoic activity of contemplating past and future actions and evaluating whether these actions are in line with one's values and goals, and "contemplation of nature", another Stoic activity, which consists of reflecting on how "small" one's existence is when compared to the greater cosmos.
Knowledge
Foucault is described by Mary Beth Mader as an epistemological constructivist and historicist. Foucault is critical of the idea that humans can reach "absolute" knowledge about the world. A fundamental goal in many of Foucault's works is to show how that which has traditionally been considered absolute, universal and true is in fact historically contingent. To Foucault, even the idea of absolute knowledge is a historically contingent idea. This does not, however, lead to epistemological nihilism; rather, Foucault argues that we "always begin anew" when it comes to knowledge. At the same time, Foucault is critical of modern western philosophy for lacking "spirituality". By "spirituality" Foucault refers to a certain type of ethical being, and the processes that lead to this state of being. Foucault argues that such spirituality was a natural part of ancient Greek philosophy, where knowledge was considered something accessible only to those who had an ethical character. According to Foucault this changed in the "cartesian moment", the moment when René Descartes reached the "insight" that self-awareness was something given (Cogito, ergo sum, "I think, therefore I am") and from this "insight" drew conclusions about God, the world, and knowledge. According to Foucault, since Descartes knowledge has been something separate from ethics. In modern times, Foucault argues, anyone can reach "knowledge", as long as they are a rational being, educated, willing to participate in the scientific community and to use a scientific method. Foucault is critical of this "modern" view of knowledge.
Foucault describes two types of "knowledge": "savoir" and "connaissance", two French terms that can both be translated as "knowledge" but that carry separate meanings for Foucault. By "savoir" Foucault refers to a process in which subjects are created, while at the same time these subjects also become objects for knowledge. An example of this can be seen in criminology and psychiatry. In these sciences, subjects such as "the rational person", "the mentally ill person", "the law-abiding person" and "the criminal" are created, and these sciences center their attention and knowledge on these subjects. The knowledge about these subjects is "connaissance", while the process in which the subjects and the knowledge are created is "savoir". A similar term in Foucault's corpus is "pouvoir/savoir" (power/knowledge). With this term Foucault refers to a type of knowledge that is considered "common sense", but that is created and upheld in that position (as "common sense") by power. The term power/knowledge comes from Jeremy Bentham's idea that panopticons would not only be prisons but would also be used for experiments in which the criminals' behaviour would be studied. Power/knowledge thus refers to forms of power in which power compares individuals, measures differences, establishes a norm and then forces this norm onto the subjects. This is especially successful when the established norm is internalized and institutionalized (by "institutionalized" Foucault means that the norm is omnipresent), for then the norm has effectively become part of peoples' "common sense"—the "obvious", the "given", the "natural". When this has happened, this "common sense" also affects explicit (scientific) knowledge, Foucault argues. Ellen K. Feder states that the premise "the world consists of women and men" is an example of this. This premise, Feder argues, has been considered "common sense" and has led to the creation of the psychiatric diagnosis gender identity disorder (GID). For example, during the 1970s, children whose behavior was not considered appropriate for their gender were diagnosed with GID, and the treatment then consisted of trying to make the child conform to the prevailing gender norms. Feder argues that this is an example of power/knowledge, since psychiatry, from the "common sense" premise "the world consists of women and men" (a premise upheld in this status by power), created a new diagnosis, a new type of subject and a whole body of knowledge surrounding this new subject.
Influence and reception
Foucault's works have exercised a powerful influence over numerous humanistic and social scientific disciplines as one of the most influential and controversial scholars of the post-World War II period. According to a London School of Economics' analysis in 2016, his works Discipline and Punish and The History of Sexuality were among the 25 most cited books in the social sciences of all time, at just over 100,000 citations. In 2007, Foucault was listed as the single most cited scholar in the humanities by the ISI Web of Science among a large quantity of French philosophers, the compilation's author commenting that "What this says of modern scholarship is for the reader to decide—and it is imagined that judgments will vary from admiration to despair, depending on one's view".
According to Gary Gutting, Foucault's "detailed historical remarks on the emergence of disciplinary and regulatory biopower have been widely influential". Leo Bersani wrote: "[Foucault] is our most brilliant philosopher of power. More originally than any other contemporary thinker, he has attempted to define the historical constraints under which we live, at the same time that he has been anxious to account for—if possible, even to locate—the points at which we might resist those constraints and counter some of the moves of power. In the present climate of cynical disgust with the exercise of political power, Foucault's importance can hardly be exaggerated." Foucault's work on "biopower" has been widely influential within the disciplines of philosophy and political theory, particularly for such authors as Giorgio Agamben, Roberto Esposito, Antonio Negri, and Michael Hardt. His discussions of power and discourse have inspired many critical theorists, who believe that Foucault's analysis of power structures could aid the struggle against inequality. They claim that through discourse analysis, hierarchies may be uncovered and questioned by analyzing the corresponding fields of knowledge through which they are legitimated. This is one of the ways that Foucault's work is linked to critical theory. His work Discipline and Punish influenced his friend and contemporary Gilles Deleuze, who published the paper "Postscript on the Societies of Control", praising Foucault's work but arguing that contemporary western society has in fact developed from a "disciplinary society" into a "society of control". Deleuze went on to publish a book dedicated to Foucault's thought in 1988 under the title Foucault.
Foucault's discussions of the relationship between power and knowledge have influenced postcolonial critiques in explaining the discursive formation of colonialism, particularly in Edward Said's work Orientalism. Foucault's work has been compared to that of Erving Goffman by the sociologists Michael Hviid Jacobsen and Soren Kristiansen, who list Goffman as an influence on Foucault. Foucault's writings, particularly The History of Sexuality, have also been very influential in feminist philosophy and queer theory, particularly the work of the feminist scholar Judith Butler, due to his theories regarding the genealogy of maleness and femaleness, power, sexuality, and bodies.
Critiques and engagements
Douglas Murray, writing in his book The War on The West, argued that "Foucault's obsessive analysis of everything through a quasi-Marxist lens of power relations diminished almost everything in society into a transactional, punitive and meaningless dystopia".
Crypto-normativity, self-refutation, defeatism
A prominent critique of Foucault's thought concerns his refusal to propose positive solutions to the social and political issues that he critiques. Since no human relation is devoid of power, freedom becomes elusive—even as an ideal. This stance, which critiques normativity as socially constructed and contingent but which relies on an implicit norm to mount the critique, led the philosopher Jürgen Habermas to describe Foucault's thinking as "crypto-normativist", covertly reliant on the very Enlightenment principles he attempts to argue against. A similar critique has been advanced by Diana Taylor, and by Nancy Fraser, who argues that although "Foucault's critique encompasses traditional moral systems, he denies himself recourse to concepts such as 'freedom' and 'justice', and therefore lacks the ability to generate positive alternatives."
Genealogy as historical method and defeatism
The philosopher Richard Rorty has argued that Foucault's "archaeology of knowledge" is fundamentally negative, and thus fails to adequately establish any "new" theory of knowledge per se. Rather, Rorty argues, Foucault simply provides a few valuable maxims regarding the reading of history.
Foucault has frequently been criticized by historians for what they consider to be a lack of rigor in his analyses. For example, Hans-Ulrich Wehler harshly criticized Foucault in 1998. Although Wehler recognizes that Foucault highlighted some problems of modernity that had until then been set aside or completely ignored, he regards Foucault as a bad philosopher who wrongly received a favorable reception in the humanities and the social sciences. According to Wehler, Foucault's works are not only insufficient in their empirical historical aspects, but also often contradictory and lacking in clarity. For example, Foucault's concept of power is "desperately undifferentiated", and Foucault's thesis of a "disciplinary society" is, according to Wehler, only possible because Foucault does not properly differentiate between authority, force, power, violence and legitimacy. In addition, the thesis is based on a one-sided choice of sources (prisons and psychiatric institutions) and neglects other types of organizations such as factories. Wehler also criticizes Foucault's "francocentrism", because Foucault did not take into consideration major German-speaking theorists of the social sciences such as Max Weber and Norbert Elias. In all, Wehler concludes that Foucault is, "because of the endless series of flaws in his so-called empirical studies ... an intellectually dishonest ('intellektuell unredlicher'), empirically absolutely unreliable ('empirisch absolut unzuverlaessiger'), crypto-normativist seducer ('Rattenfaenger') of Postmodernism".
Feminist critiques
Though American feminists have built on Foucault's critiques of the historical construction of gender roles and sexuality, some feminists note the limitations of the masculinist subjectivity and ethical orientation that he describes. A related issue raised by scholars Elizabeth Povinelli and Kathryn Yusoff is the almost complete absence of any discussion of race in his writings. Yusoff (2018, p. 211) says "Povinelli draws our attention to the provinciality of Foucault's project in its conceptualization of a Western European genealogy".
Sexuality
The philosopher Roger Scruton argues in Sexual Desire (1986) that Foucault was incorrect to claim, in The History of Sexuality, that sexual morality is culturally relative. He criticizes Foucault for assuming that there could be societies in which a "problematisation" of the sexual did not occur, concluding that, "No history of thought could show the 'problematisation' of sexual experience to be peculiar to certain specific social formations: it is characteristic of personal experience generally, and therefore of every genuine social order."
Foucault's approach to sexuality, which he sees as socially constructed, has become influential in queer theory. Foucault's resistance to identity politics, and his rejection of the psychoanalytic concept of "object choice", stand at odds with some theories of queer identity.
Social constructionism and human nature
Foucault is sometimes criticized for his purported social constructionism, which some see as an affront to the concept of truth. In Foucault's 1971 televised debate with Noam Chomsky, Foucault argued against the possibility of any fixed human nature, as posited by Chomsky's concept of innate human faculties. Chomsky argued that concepts of justice were rooted in human reason, whereas Foucault rejected any universal basis for a concept of justice. Following the debate, Chomsky was struck by Foucault's total rejection of the possibility of a universal morality, stating: "He struck me as completely amoral, I'd never met anyone who was so totally amoral [...] I mean, I liked him personally, it's just that I couldn't make sense of him. It's as if he was from a different species, or something."
Defeatism in education and authority
Peruvian writer Mario Vargas Llosa, while acknowledging that Foucault helped give certain marginal and eccentric experiences (of sexuality, of cultural repression, of madness) a right of citizenship in cultural life, asserts that Foucault's radical critique of authority was detrimental to education.
Psychology of the self
One of Foucault's claims regarding the subjectivity of the self has been disputed. Opposing Foucault's view of subjectivity, Terje Sparby, Friedrich Edelhäuser, and Ulrich W. Weger argue that other factors, such as biological, environmental, and cultural influences, explain the self.
Forget Foucault
Jean Baudrillard, in his 1977 tract Oublier Foucault (trans. Forget Foucault), asserted that "Foucault's discourse is a mirror of the powers it describes." Since "it is possible at last to talk with such definitive understanding about power, sexuality, the body, and discipline [...] it is because at some point all this is here and now over with." Therefore, with "the coincidence between this new version of power and the new version of desire proposed by Deleuze and Lyotard [...] [which was] not accidental: it's simply that in Foucault power takes the place of desire [...] That is why there is no desire in Foucault: its place is already taken [...] When power blends into desire and desire blends into power, let's forget them both."
See also
Biopolitics
Governmentality
List of atheist philosophers
List of French philosophers
Philip Rieff
Thomas Szasz
References
Sources
Further reading
Artières, Philippe, Jean-François Bert, Frédéric Gros, and Judith Revel (eds). 2011. Cahier Foucault. France: L'Herne.
Derrida, Jacques. 1978. "Cogito and the History of Madness", pp. 31–63 in Writing and Difference, translated by Alan Bass. Chicago: Chicago University Press.
Dreyfus, Herbert L., and Paul Rabinow. 1983. Michel Foucault: Beyond Structuralism and Hermeneutics (2nd edn). Chicago: University of Chicago Press.
Foucault, Michel. "Sexual Morality and the Law" [originally published as "La loi de la pudeur"], pp. 271–285 in Politics, Philosophy, Culture.
Foucault, Michel, Ignacio Ramonet, Daniel Mermet, Jorge Majfud, and Federico Kukso. 2018. Cinco entrevistas a Noam Chomsky (in Spanish). Santiago: Aun Creemos en los Sueños. .
Garland, David. 1997. "'Governmentality' and the Problem of Crime: Foucault, Criminology, Sociology." Theoretical Criminology 1(2):173–214.
Ghamari-Tabrizi, Behrooz. 2016. Foucault in Iran: Islamic Revolution after the Enlightenment. Minneapolis: University of Minnesota Press.
Deleuze, Gilles. 1988. Foucault. Minneapolis: University of Minnesota Press.
Deleuze, Gilles, and Félix Guattari. 1983. Anti-Oedipus. Minneapolis: University of Minnesota Press.
MacIntyre, Alasdair. 1990. Three Rival Versions of Moral Enquiry: Encyclopaedia, Genealogy, and Tradition. Notre Dame, IN: University of Notre Dame Press.
Merquior, J. G. 1987. Foucault. Berkeley, Calif.: University of California Press. A critical view of Foucault's work.
Olssen, M. 2009. Toward a Global Thin Community: Nietzsche, Foucault and the Cosmopolitan Commitment. Boulder, CO: Paradigm Press.
Roudinesco, Élisabeth. 2008. Philosophy in Turbulent Times: Canguilhem, Sartre, Foucault, Althusser, Deleuze, Derrida. New York: Columbia University Press.
Veyne, Paul. 2008. Foucault. Sa pensée, sa personne. Paris: Éditions Albin Michel.
Wolin, Richard. 1987. "Foucault's Aesthetic Decisionism." Telos 67. New York: Telos Press Ltd.
External links
Foucault Studies
Foucault.info. Large resource site which includes extracts from Foucault's work and a comprehensive bibliography of all of Foucault's work in French
Foucault News. Large resource site, which includes a blog with news related to Foucault research, bibliographies and other resources
Foucault bibliographies. Bibliographies and links to bibliographies of, and relating to Foucault, on the Foucault News site
Progressive Geographies. Stuart Elden's blog and resource site. Includes extensive resources on Foucault
1926 births
1984 deaths
20th-century French anthropologists
20th-century French historians
20th-century French philosophers
20th-century French writers
Academic staff of Paris 8 University Vincennes-Saint-Denis
Academic staff of the Collège de France
Academic staff of the University of Lille Nord de France
Academic staff of the University of Warsaw
Academic staff of Tunis University
Academic staff of Uppsala University
AIDS-related deaths in France
Anti-psychiatry
Atheist philosophers
Critical theorists
Cultural historians
École Normale Supérieure alumni
Former Roman Catholics
French anti-capitalists
French anti-fascists
French atheists
French epistemologists
French gay writers
French LGBTQ rights activists
French literary critics
French philosophers of culture
French philosophers of science
French philosophers of technology
French political philosophers
French psychedelic drug advocates
French sociologists
Gay academics
Historians of science
Historians of sexuality
Historians of technology
History of psychiatry
LGBTQ historians
LGBTQ philosophers
Lycée Henri-IV alumni
People from Poitiers
Philosophers of literature
Philosophers of medicine
Philosophers of sexuality
Philosophy articles needing expert attention
Postmodern theory
Poststructuralists
Proto-queer theorists
Rhetoric theorists
Social anthropologists
Social constructionism
Social historians
Structuralists
University of California, Berkeley faculty | Michel Foucault | [
"Biology"
] | 19,823 | [
"Behavior",
"Sexuality",
"Historians of sexuality"
] |
47,651 | https://en.wikipedia.org/wiki/Reproducibility | Reproducibility, closely related to replicability and repeatability, is a major principle underpinning the scientific method. For the findings of a study to be reproducible means that results obtained by an experiment or an observational study or in a statistical analysis of a data set should be achieved again with a high degree of reliability when the study is replicated. There are different kinds of replication but typically replication studies involve different researchers using the same methodology. Only after one or several such successful replications should a result be recognized as scientific knowledge.
With a narrower scope, reproducibility has been defined in computational sciences as having the following quality: the results should be documented by making all data and code available in such a way that the computations can be executed again with identical results.
In recent decades, there has been a rising concern that many published scientific results fail the test of reproducibility, evoking a reproducibility or replication crisis.
History
The first to stress the importance of reproducibility in science was the Anglo-Irish chemist Robert Boyle, in England in the 17th century. Boyle's air pump was designed to generate and study vacuum, which at the time was a very controversial concept. Indeed, distinguished philosophers such as René Descartes and Thomas Hobbes denied the very possibility of vacuum existence. Historians of science Steven Shapin and Simon Schaffer, in their 1985 book Leviathan and the Air-Pump, describe the debate between Boyle and Hobbes, ostensibly over the nature of vacuum, as fundamentally an argument about how useful knowledge should be gained. Boyle, a pioneer of the experimental method, maintained that the foundations of knowledge should be constituted by experimentally produced facts, which can be made believable to a scientific community by their reproducibility. By repeating the same experiment over and over again, Boyle argued, the certainty of fact will emerge.
The air pump, which in the 17th century was a complicated and expensive apparatus to build, also led to one of the first documented disputes over the reproducibility of a particular scientific phenomenon. In the 1660s, the Dutch scientist Christiaan Huygens built his own air pump in Amsterdam, the first one outside the direct management of Boyle and his assistant at the time Robert Hooke. Huygens reported an effect he termed "anomalous suspension", in which water appeared to levitate in a glass jar inside his air pump (in fact suspended over an air bubble), but Boyle and Hooke could not replicate this phenomenon in their own pumps. As Shapin and Schaffer describe, "it became clear that unless the phenomenon could be produced in England with one of the two pumps available, then no one in England would accept the claims Huygens had made, or his competence in working the pump". Huygens was finally invited to England in 1663, and under his personal guidance Hooke was able to replicate anomalous suspension of water. Following this Huygens was elected a Foreign Member of the Royal Society. However, Shapin and Schaffer also note that "the accomplishment of replication was dependent on contingent acts of judgment. One cannot write down a formula saying when replication was or was not achieved".
The philosopher of science Karl Popper noted briefly in his famous 1934 book The Logic of Scientific Discovery that "non-reproducible single occurrences are of no significance to science". The statistician Ronald Fisher wrote in his 1935 book The Design of Experiments, which set the foundations for the modern scientific practice of hypothesis testing and statistical significance, that "we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us statistically significant results". Such assertions express a common dogma in modern science that reproducibility is a necessary condition (although not necessarily sufficient) for establishing a scientific fact, and in practice for establishing scientific authority in any field of knowledge. However, as noted above by Shapin and Schaffer, this dogma is not quantitatively well-formulated in the way that, for instance, statistical significance is, and therefore it is not explicitly established how many times a fact must be replicated to be considered reproducible.
Terminology
Replicability and repeatability are related terms broadly or loosely synonymous with reproducibility (for example, among the general public), but they are often usefully differentiated in more precise senses, as follows.
Two major steps are naturally distinguished in connection with reproducibility of experimental or observational studies:
When new data is obtained in the attempt to achieve it, the term replicability is often used, and the new study is a replication or replicate of the original one. For obtaining the same results when analyzing the data set of the original study again with the same procedures, many authors use the term reproducibility in a narrow, technical sense coming from its use in computational research.
Repeatability is related to the repetition of the experiment within the same study by the same researchers.
Reproducibility in the original, wide sense is only acknowledged if a replication performed by an independent researcher team is successful.
The terms reproducibility and replicability sometimes appear even in the scientific literature with reversed meaning, as different research fields settled on their own definitions for the same terms.
Measures of reproducibility and repeatability
In chemistry, the terms reproducibility and repeatability are used with a specific quantitative meaning. In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. The standard deviation of the difference between two values obtained within the same laboratory is then called repeatability, while the standard deviation of the difference between two measurements from different laboratories is called reproducibility.
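In the widely used ISO 5725 notation (an assumption here, since the article itself does not fix a notation), the two quantities are linked by a simple variance decomposition:

$$ s_R^2 = s_L^2 + s_r^2 $$

where $s_r$ is the repeatability (within-laboratory) standard deviation, $s_L$ the between-laboratory standard deviation, and $s_R$ the reproducibility standard deviation; the standard deviation of the difference between two single results is then $\sqrt{2}$ times the corresponding value.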
These measures are related to the more general concept of variance components in metrology.
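A minimal Python sketch of how these variance components can be estimated from a balanced inter-laboratory study, using the classical one-way ANOVA estimators (the function name and the example data are illustrative assumptions, not taken from any standard or library):

```python
import numpy as np

def repeatability_reproducibility(labs):
    """Estimate repeatability (s_r) and reproducibility (s_R) standard
    deviations from a balanced inter-laboratory study. `labs` is a list of
    equal-length 1-D arrays, one array of replicate measurements per lab."""
    k = len(labs)                # number of laboratories
    n = len(labs[0])             # replicates per laboratory (balanced design)
    lab_means = np.array([m.mean() for m in labs])

    # Within-laboratory mean square: pooled variance of the replicates.
    ms_within = np.mean([m.var(ddof=1) for m in labs])
    # Between-laboratory mean square from the spread of the lab means.
    ms_between = n * lab_means.var(ddof=1)

    s_r2 = ms_within                               # repeatability variance
    s_L2 = max((ms_between - ms_within) / n, 0.0)  # between-lab component
    return np.sqrt(s_r2), np.sqrt(s_L2 + s_r2)     # (s_r, s_R)

# Example: three labs, four replicate measurements of the same sample each.
labs = [np.array([10.1, 10.2, 9.9, 10.0]),
        np.array([10.4, 10.5, 10.3, 10.6]),
        np.array([9.8, 9.9, 10.0, 9.7])]
s_r, s_R = repeatability_reproducibility(labs)
print(f"s_r = {s_r:.3f}, s_R = {s_R:.3f}")
```

With the example data, the between-laboratory spread dominates the within-laboratory scatter, so s_R comes out noticeably larger than s_r, as is typical in inter-laboratory studies.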
Reproducible research
Reproducible research method
The term reproducible research refers to the idea that scientific results should be documented in such a way that their deduction is fully transparent. This requires a detailed description of the methods used to obtain the data and making the full dataset and the code to calculate the results easily accessible. This is the essential part of open science.
To make any research project computationally reproducible, general practice involves all data and files being clearly separated, labelled, and documented. All operations should be fully documented and automated as much as practicable, avoiding manual intervention where feasible. The workflow should be designed as a sequence of smaller steps that are combined so that the intermediate outputs from one step directly feed as inputs into the next step. Version control should be used as it lets the history of the project be easily reviewed and allows for the documenting and tracking of changes in a transparent manner.
A basic workflow for reproducible research involves data acquisition, data processing and data analysis. Data acquisition primarily consists of obtaining primary data from a primary source such as surveys, field observations, experimental research, or obtaining data from an existing source. Data processing involves the processing and review of the raw data collected in the first stage, and includes data entry, data manipulation and filtering and may be done using software. The data should be digitized and prepared for data analysis. Data may be analysed with the use of software to interpret or visualise statistics or data to produce the desired results of the research such as quantitative results including figures and tables. The use of software and automation enhances the reproducibility of research methods.
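To make this three-stage workflow concrete, here is a minimal Python sketch in which each stage writes its output to disk and the next stage reads only that output, so a single run regenerates every result from the raw data (all file names and the "response" column are illustrative assumptions, not taken from any real study):

```python
from pathlib import Path
import csv

RAW = Path("data/raw/survey.csv")                # stage 1 output: acquired raw data
CLEAN = Path("data/processed/survey_clean.csv")  # stage 2 output
RESULT = Path("results/summary.txt")             # stage 3 output

def process(raw_path, clean_path):
    """Stage 2: drop incomplete records; the raw file is never edited in place."""
    clean_path.parent.mkdir(parents=True, exist_ok=True)
    with raw_path.open() as src, clean_path.open("w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        writer.writerows(row for row in reader if all(row.values()))

def analyze(clean_path, result_path):
    """Stage 3: compute a summary statistic from the processed data only."""
    with clean_path.open() as f:
        values = [float(row["response"]) for row in csv.DictReader(f)]
    result_path.parent.mkdir(parents=True, exist_ok=True)
    result_path.write_text(f"n = {len(values)}, mean = {sum(values)/len(values):.3f}\n")

if __name__ == "__main__":
    # Each stage reads only the previous stage's output, so rerunning the
    # script rebuilds every intermediate file and result from the raw data.
    process(RAW, CLEAN)
    analyze(CLEAN, RESULT)
```

Keeping the script itself under version control then documents exactly which code produced which result.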
There are systems that facilitate such documentation, like the R Markdown language or the Jupyter notebook.
The Open Science Framework provides a platform and useful tools to support reproducible research.
Reproducible research in practice
Psychology has seen a renewal of internal concerns about irreproducible results (see the entry on replicability crisis for empirical results on success rates of replications). Researchers showed in a 2006 study that, of 141 authors of American Psychological Association (APA) empirical articles, 103 (73%) did not respond with their data over a six-month period. In a follow-up study published in 2015, it was found that 246 out of 394 contacted authors of papers in APA journals (62%) did not share their data upon request. In a 2012 paper, it was suggested that researchers should publish data along with their works, and a dataset was released alongside as a demonstration. In 2017, an article published in Scientific Data suggested that this may not be sufficient and that the whole analysis context should be disclosed.
In economics, concerns have been raised in relation to the credibility and reliability of published research. In other sciences, reproducibility is regarded as fundamental and is often a prerequisite to research being published; in the economic sciences, however, it is not seen as a priority of the greatest importance. Most peer-reviewed economics journals do not take any substantive measures to ensure that published results are reproducible, although the top economics journals have been moving to adopt mandatory data and code archives. There are few or no incentives for researchers to share their data, and authors would have to bear the costs of compiling data into reusable forms. Economic research is often not reproducible, as only a portion of journals have adequate disclosure policies for datasets and program code, and even where they do, authors frequently do not comply with them or the policies are not enforced by the publisher. A study of 599 articles published in 37 peer-reviewed journals revealed that while some journals have achieved significant compliance rates, a significant portion complied only partially, or not at all. At the article level, the average compliance rate was 47.5%; at the journal level, the average compliance rate was 38%, ranging from 13% to 99%.
A 2018 study published in the journal PLOS ONE found that 14.4% of a sample of public health statistics researchers had shared their data or code or both.
There have been initiatives to improve reporting and hence reproducibility in the medical literature for many years, beginning with the CONSORT initiative, which is now part of a wider initiative, the EQUATOR Network.
This group has recently turned its attention to how better reporting might reduce waste in research, especially biomedical research.
Reproducible research is key to new discoveries in pharmacology. A Phase I discovery will be followed by Phase II reproductions as a drug develops towards commercial production. In recent decades Phase II success has fallen from 28% to 18%. A 2011 study found that 65% of medical studies were inconsistent when re-tested, and only 6% were completely reproducible.
Noteworthy irreproducible results
Hideyo Noguchi became famous for correctly identifying the bacterial agent of syphilis, but also claimed that he could culture this agent in his laboratory. Nobody else has been able to produce this latter result.
In March 1989, University of Utah chemists Stanley Pons and Martin Fleischmann reported the production of excess heat that could only be explained by a nuclear process ("cold fusion"). The report was astounding given the simplicity of the equipment: it was essentially an electrolysis cell containing heavy water and a palladium cathode which rapidly absorbed the deuterium produced during electrolysis. The news media reported on the experiments widely, and it was a front-page item on many newspapers around the world (see science by press conference). Over the next several months others tried to replicate the experiment, but were unsuccessful.
Nikola Tesla claimed as early as 1899 to have used a high-frequency current to light gas-filled lamps from over 25 miles (40 km) away without using wires. In 1904 he built Wardenclyffe Tower on Long Island to demonstrate means to send and receive power without connecting wires. The facility was never fully operational, as it was not completed due to economic problems, so no attempt to reproduce his first result was ever carried out.
Other examples in which contrary evidence has refuted the original claim:
N-rays, a hypothesized form of radiation subsequently found to be illusory
Polywater, a hypothesized polymerized form of water found to be just water with common contaminations
Stimulus-triggered acquisition of pluripotency, revealed to be the result of fraud
GFAJ-1, a bacterium that could purportedly incorporate arsenic into its DNA in place of phosphorus
MMR vaccine controversy — a study in The Lancet claiming the MMR vaccine caused autism was revealed to be fraudulent
Schön scandal — semiconductor "breakthroughs" revealed to be fraudulent
Power posing — a social psychology phenomenon that went viral after being the subject of a very popular TED talk, but was unable to be replicated in dozens of studies
See also
Metascience
Accuracy
ANOVA gauge R&R
Contingency
Corroboration
Reproducible builds
Falsifiability
Hypothesis
Measurement uncertainty
Pathological science
Pseudoscience
Replication (statistics)
Replication crisis
ReScience C (journal)
Retraction in academic publishing
Tautology
Testability
Verification and validation
References
Further reading
"Science is not irrevocably broken, [epidemiologist John Ioannidis] asserts. It just needs some improvements. "Despite the fact that I've published papers with pretty depressive titles, I'm actually an optimist," Ioannidis says. "I find no other investment of a society that is better placed than science.""
External links
Transparency and Openness Promotion Guidelines from the Center for Open Science
Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results of the National Institute of Standards and Technology
Reproducible papers with artifacts by the CTuning foundation
ReproducibleResearch.net
Measurement
Philosophy of science
Scientific method
Tests
Validity (statistics)
Discovery and invention controversies
Metascience
Statistical reliability | Reproducibility | [
"Physics",
"Mathematics"
] | 2,801 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
47,671 | https://en.wikipedia.org/wiki/Toxic%20heavy%20metal | Toxic heavy metal is a common but misleading term for a metal-like element noted for its potential toxicity. Not all heavy metals are toxic and some toxic metals are not heavy. Elements often discussed as toxic include cadmium, mercury and lead, all of which appear in the World Health Organization's list of 10 chemicals of major public concern. Other examples include chromium, nickel, thallium, bismuth, arsenic, antimony and tin.
These toxic elements are found naturally in the earth. They become concentrated as a result of human activities and can enter plant and animal (including human) tissues via inhalation, diet, and manual handling. They can then bind to and interfere with the functioning of vital cellular components. The toxic effects of arsenic, mercury, and lead were known to the ancients, but methodical studies of the toxicity of some heavy metals appear to date only from 1868. In humans, heavy metal poisoning is generally treated by the administration of chelating agents. Some elements otherwise regarded as toxic heavy metals are essential, in small quantities, for human health.
Controversial terminology
The International Union of Pure and Applied Chemistry (IUPAC), which standardizes nomenclature, says the term "heavy metals" is "both meaningless and misleading". The IUPAC report focuses on the legal and toxicological implications of describing "heavy metals" as toxins when there is no scientific evidence to support a connection. The density implied by the adjective "heavy" has almost no biological consequences, and pure metals are rarely the biologically active substance.
This characterization has been echoed in numerous reviews. The most widely used toxicology textbook, Casarett and Doull's Toxicology, uses "toxic metal", not "heavy metal". Nevertheless, many scientific and science-related articles continue to use "heavy metal" as a term for toxic substances.
Major and minor metal toxins
Metals with multiple toxic effects include arsenic (As), beryllium (Be), cadmium (Cd), chromium (Cr), lead (Pb), mercury (Hg), and nickel (Ni).
Elements that are nutritionally essential for animal or plant life but which are considered toxic metals in high doses or other forms include cobalt (Co), copper (Cu), iron (Fe), magnesium (Mg), manganese (Mn), molybdenum (Mo), selenium (Se), and zinc (Zn).
Contamination sources
Toxic metals are found naturally in the earth, and become concentrated as a result of human activities or, in some cases, geochemical processes, such as accumulation in peat soils that are then released when drained for agriculture. Common sources include fertilisers, aging water supply infrastructure, and microplastics floating in the world's oceans. Arsenic has also been used in coloring dyes, and rat poison used in grain and mash stores may be another source of arsenic.
The geographical extent of sources may be very large. For example, up to one-sixth of China's arable land might be affected by heavy metal contamination.
Lead is the most prevalent heavy metal contaminant. As a component of tetraethyl lead, (CH₃CH₂)₄Pb, it was used extensively in gasoline during the 1930s–1970s. Lead levels in the aquatic environments of industrialised societies have been estimated to be two to three times those of pre-industrial levels. Although the use of leaded gasoline was largely phased out in North America by 1996, soils next to roads built before this time retain high lead concentrations. Lead (from lead(II) azide or lead styphnate used in firearms) gradually accumulates at firearms training grounds, contaminating the local environment and exposing range employees to a risk of lead poisoning.
Entry routes
Toxic metals enter plant, animal and human tissues via air inhalation, diet, and manual handling. Welding, galvanizing, brazing, and soldering exposes workers to fumes that may be inhaled and result in metal fume fever. Motor vehicle emissions are a major source of airborne contaminants including arsenic, cadmium, cobalt, nickel, lead, antimony, vanadium, zinc, platinum, palladium and rhodium. Water sources (groundwater, lakes, streams and rivers) can be polluted by toxic metals leaching from industrial and consumer waste; acid rain can exacerbate this process by releasing toxic metals trapped in soils. Transport through soil can be facilitated by the presence of preferential flow paths (macropores) and dissolved organic compounds. Plants are exposed to toxic metals through the uptake of water; animals eat these plants; ingestion of plant- and animal-based foods are the largest sources of toxic metals in humans. Absorption through skin contact, for example from contact with soil, or metal containing toys and jewelry, is another potential source of toxic metal contamination. Toxic metals can bioaccumulate in organisms as they are hard to metabolize.
Detrimental effects
Toxic metals "can bind to vital cellular components, such as structural proteins, enzymes, and nucleic acids, and interfere with their functioning". Symptoms and effects can vary according to the metal or metal compound, and the dose involved. Broadly, long-term exposure to toxic heavy metals can have carcinogenic, central and peripheral nervous system, and circulatory effects. For humans, typical presentations associated with exposure to any of the "classical" toxic heavy metals, or chromium (another toxic heavy metal) or arsenic (a metalloid), are shown in the table.
History
The toxic effects of arsenic, mercury and lead were known to the ancients but methodical studies of the overall toxicity of heavy metals appear to date from only 1868. In that year, Wanklyn and Chapman speculated on the adverse effects of the heavy metals "arsenic, lead, copper, zinc, iron and manganese" in drinking water. They noted an "absence of investigation" and were reduced to "the necessity of pleading for the collection of data". In 1884, Blake described an apparent connection between toxicity and the atomic weight of an element. The following sections provide historical thumbnails for the "classical" toxic heavy metals (arsenic, mercury and lead) and some more recent examples (chromium and cadmium).
Arsenic
Arsenic, as realgar (As₄S₄) and orpiment (As₂S₃), was known in ancient times. Strabo (64–50 BCE – c. AD 24?), a Greek geographer and historian, wrote that only slaves were employed in realgar and orpiment mines, since they would inevitably die from the toxic effects of the fumes given off from the ores. Arsenic-contaminated beer poisoned over 6,000 people in the Manchester area of England in 1900, and is thought to have killed at least 70 victims. Clare Luce, American ambassador to Italy from 1953 to 1956, suffered from arsenic poisoning. Its source was traced to flaking arsenic-laden paint on the ceiling of her bedroom. She may also have eaten food contaminated by arsenic in flaking ceiling paint in the embassy dining room. As of 2014, groundwater contaminated by arsenic "is still poisoning millions of people in Asia".
Mercury
The first emperor of unified China, Qin Shi Huang, it is reported, died of ingesting mercury pills that were intended to give him eternal life. The phrase "mad as a hatter" is likely a reference to mercury poisoning among milliners (so-called "mad hatter disease"), as mercury-based compounds were once used in the manufacture of felt hats in the 18th and 19th century. Historically, gold amalgam (an alloy with mercury) was widely used in gilding, leading to numerous casualties among the workers. It is estimated that during the construction of Saint Isaac's Cathedral alone, 60 workers died from the gilding of the main dome. Outbreaks of methylmercury poisoning occurred in several places in Japan during the 1950s due to industrial discharges of mercury into rivers and coastal waters. The best-known instances were in Minamata and Niigata. In Minamata alone, more than 600 people died due to what became known as Minamata disease. More than 21,000 people filed claims with the Japanese government, of which almost 3000 became certified as having the disease. In 22 documented cases, pregnant women who consumed contaminated fish showed mild or no symptoms but gave birth to infants with severe developmental disabilities. Since the Industrial Revolution, mercury levels have tripled in many near-surface seawaters, especially around Iceland and Antarctica.
Lead
The adverse effects of lead were known to the ancients. In the 2nd century BC the Greek botanist Nicander described the colic and paralysis seen in lead-poisoned people. Dioscorides, a Greek physician who is thought to have lived in the 1st century CE, wrote that lead "makes the mind give way". Lead was used extensively in Roman aqueducts from about 500 BC to 300 AD. Julius Caesar's engineer, Vitruvius, reported, "water is much more wholesome from earthenware pipes than from lead pipes. For it seems to be made injurious by lead, because white lead is produced by it, and this is said to be harmful to the human body." During the Mongol period in China (1271−1368 AD), lead pollution due to silver smelting in the Yunnan region exceeded contamination levels from modern mining activities by nearly four times. In the 17th and 18th centuries, people in Devon were afflicted by a condition referred to as Devon colic; this was discovered to be due to the imbibing of lead-contaminated cider. In 2013, the World Health Organization estimated that lead poisoning resulted in 143,000 deaths, and "contribute[d] to 600,000 new cases of children with intellectual disabilities", each year. In the U.S. city of Flint, Michigan, lead contamination in drinking water has been an issue since 2014. The source of the contamination has been attributed to "corrosion in the lead and iron pipes that distribute water to city residents". In 2015, the lead concentration of drinking water in north-eastern Tasmania, Australia, reached a level over 50 times the prescribed national drinking water guidelines. The source of the contamination was attributed to "a combination of dilapidated drinking water infrastructure, including lead jointed pipelines, end-of-life polyvinyl chloride pipes and household plumbing".
Chromium
Chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known since at least the late 19th century. In 1890, Newman described the elevated cancer risk of workers in a chromate dye company. Chromate-induced dermatitis was reported in aircraft workers during World War II. In 1963, an outbreak of dermatitis, ranging from erythema to exudative eczema, occurred amongst 60 automobile factory workers in England. The workers had been wet-sanding chromate-based primer paint that had been applied to car bodies. In Australia, chromium was released from the Newcastle Orica explosives plant on August 8, 2011. Up to 20 workers at the plant were exposed as were 70 nearby homes in Stockton. The town was only notified three days after the release and the accident sparked a major public controversy, with Orica criticised for playing down the extent and possible risks of the leak, and the state Government attacked for their slow response to the incident.
Cadmium
Cadmium exposure is a phenomenon of the early 20th century and onwards. In Japan in 1910, the Mitsui Mining & Smelting Company began discharging cadmium into the Jinzū River as a byproduct of mining operations. Residents in the surrounding area subsequently consumed rice grown in cadmium-contaminated irrigation water, and experienced softening of the bones and kidney failure. The origin of these symptoms was not clear; possibilities raised at the time included "a regional or bacterial disease or lead poisoning". In 1955, cadmium was identified as the likely cause, and in 1961 the source was directly linked to mining operations in the area. In February 2010, cadmium was found in Walmart-exclusive Miley Cyrus jewelry. Walmart continued to sell the jewelry until May, when covert testing organised by the Associated Press confirmed the original results. In June 2010, cadmium was detected in the paint used on promotional drinking glasses for the movie Shrek Forever After, sold by McDonald's restaurants, triggering a recall of 12 million glasses.
Remediation
Human
In humans, heavy metal poisoning is generally treated by the administration of chelating agents. These are chemical compounds, such as CaNa₂EDTA (calcium disodium ethylenediaminetetraacetate), that convert heavy metals to chemically inert forms that can be excreted without further interaction with the body. Chelates are not without side effects and can also remove beneficial metals from the body; vitamin and mineral supplements are sometimes co-administered for this reason.
Environment
Soils contaminated by heavy metals can be remediated by one or more of the following technologies: isolation; immobilization; toxicity reduction; physical separation; or extraction. Isolation involves the use of caps, membranes or below-ground barriers in an attempt to quarantine the contaminated soil. Immobilization aims to alter the properties of the soil so as to hinder the mobility of the heavy contaminants. Toxicity reduction attempts to oxidise or reduce the toxic heavy metal ions, via chemical or biological means, into less toxic or less mobile forms. Physical separation involves the removal of the contaminated soil and the separation of the metal contaminants by mechanical means. Extraction is an on- or off-site process that uses chemicals, high-temperature volatilization, or electrolysis to extract contaminants from soils. The process or processes used will vary according to the contaminant and the characteristics of the site.
Benefits
Some elements otherwise regarded as toxic heavy metals are essential, in small quantities, for human health. These elements include vanadium, manganese, iron, cobalt, copper, zinc, selenium, strontium and molybdenum. A deficiency of these essential metals may increase susceptibility to heavy metal poisoning.
Selenium is the most toxic of the heavy metals that are essential for mammals. Selenium is normally excreted and only becomes toxic when the intake exceeds the excretory capacity.
See also
Bento Rodrigues dam disaster
Heavy metal detoxification
Kingston Fossil Plant coal fly ash slurry spill
Light metal
Metal toxicity
Citations
General references
Sets of chemical elements
Toxicology | Toxic heavy metal | [
"Environmental_science"
] | 3,007 | [
"Toxicology"
] |
47,687 | https://en.wikipedia.org/wiki/Cosmic%20ray | Cosmic rays or astroparticles are high-energy particles or clusters of particles (primarily represented by protons or atomic nuclei) that move through space at nearly the speed of light. They originate from the Sun, from outside of the Solar System in our own galaxy, and from distant galaxies. Upon impact with Earth's atmosphere, cosmic rays produce showers of secondary particles, some of which reach the surface, although the bulk are deflected off into space by the magnetosphere or the heliosphere.
Cosmic rays were discovered by Victor Hess in 1912 in balloon experiments, for which he was awarded the 1936 Nobel Prize in Physics.
Direct measurement of cosmic rays, especially at lower energies, has been possible since the launch of the first satellites in the late 1950s. Particle detectors similar to those used in nuclear and high-energy physics are used on satellites and space probes for research into cosmic rays.
Data from the Fermi Space Telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernova explosions of stars. Based on observations of neutrinos and gamma rays from blazar TXS 0506+056 in 2018, active galactic nuclei also appear to produce cosmic rays.
Etymology
The term ray (as in optical ray) seems to have arisen from an initial belief, due to their penetrating power, that cosmic rays were mostly electromagnetic radiation. Nevertheless, following wider recognition of cosmic rays as various high-energy particles with intrinsic mass, the term "rays" remained consistent with then-known particles such as cathode rays, canal rays, alpha rays, and beta rays. Meanwhile, "cosmic-ray" photons, which are quanta of electromagnetic radiation (and so have no intrinsic mass), are known by their common names, such as gamma rays or X-rays, depending on their photon energy.
Composition
Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the bare nuclei of common atoms (stripped of their electron shells), and about 1% are solitary electrons (that is, one type of beta particle). Of the nuclei, about 90% are simple protons (i.e., hydrogen nuclei); 9% are alpha particles, identical to helium nuclei; and 1% are the nuclei of heavier elements, called HZE ions. These fractions vary greatly over the energy range of cosmic rays. A very small fraction are stable particles of antimatter, such as positrons or antiprotons. The precise nature of this remaining fraction is an area of active research. As of 2019, an active search from Earth orbit for anti-alpha particles had found no unequivocal evidence.
Upon striking the atmosphere, cosmic rays violently burst atoms into other bits of matter, producing large amounts of pions and muons (produced from the decay of charged pions, which have a short half-life) as well as neutrinos. The neutron composition of the particle cascade increases at lower elevations, reaching between 40% and 80% of the radiation at aircraft altitudes.
Of secondary cosmic rays, the charged pions produced by primary cosmic rays in the atmosphere swiftly decay, emitting muons. Unlike pions, these muons do not interact strongly with matter, and can travel through the atmosphere to penetrate even below ground level. The rate of muons arriving at the surface of the Earth is such that about one per second passes through a volume the size of a person's head. Together with natural local radioactivity, these muons are a significant cause of the ground level atmospheric ionisation that first attracted the attention of scientists, leading to the eventual discovery of the primary cosmic rays arriving from beyond our atmosphere.
Energy
Cosmic rays attract great interest practically, due to the damage they inflict on microelectronics and life outside the protection of an atmosphere and magnetic field, and scientifically, because the energies of the most energetic ultra-high-energy cosmic rays have been observed to approach 3×10²⁰ eV. (This is slightly greater than 21 million times the design energy of particles accelerated by the Large Hadron Collider, 14 TeV.) One can show that such enormous energies might be achieved by means of the centrifugal mechanism of acceleration in active galactic nuclei. At about 50 joules, the highest-energy ultra-high-energy cosmic rays (such as the Oh-My-God particle recorded in 1991) have energies comparable to the kinetic energy of a baseball. As a result of these discoveries, there has been interest in investigating cosmic rays of even greater energies. Most cosmic rays, however, do not have such extreme energies; the energy distribution of cosmic rays peaks at about 0.3 GeV (4.8×10⁻¹¹ J).
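A back-of-the-envelope check of the baseball comparison (assuming, for illustration, a regulation 145 g ball travelling at 90 km/h, i.e. 25 m/s):

$$ E_{\text{CR}} \approx 3\times10^{20}\,\text{eV} \times 1.602\times10^{-19}\,\text{J/eV} \approx 48\ \text{J}, \qquad E_{\text{ball}} = \tfrac{1}{2}\,(0.145\ \text{kg})\,(25\ \text{m/s})^{2} \approx 45\ \text{J}. $$

The two energies agree to within about 10%, which is why a single subatomic particle at these energies is often compared to a pitched baseball.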
History
After the discovery of radioactivity by Henri Becquerel in 1896, it was generally believed that atmospheric electricity, ionization of the air, was caused only by radiation from radioactive elements in the ground or the radioactive gases or isotopes of radon they produce. Measurements of increasing ionization rates at increasing heights above the ground during the decade from 1900 to 1910 could be explained as due to absorption of the ionizing radiation by the intervening air.
Discovery
In 1909, Theodor Wulf developed an electrometer, a device to measure the rate of ion production inside a hermetically sealed container, and used it to show higher levels of radiation at the top of the Eiffel Tower than at its base. However, his paper published in Physikalische Zeitschrift was not widely accepted. In 1911, Domenico Pacini observed simultaneous variations of the rate of ionization over a lake, over the sea, and at a depth of 3 metres from the surface. Pacini concluded from the decrease of radioactivity underwater that a certain part of the ionization must be due to sources other than the radioactivity of the Earth.
In 1912, Victor Hess carried three enhanced-accuracy Wulf electrometers to an altitude of 5,300 metres in a free balloon flight. He found the ionization rate increased to twice the rate at ground level. Hess ruled out the Sun as the radiation's source by making a balloon ascent during a near-total eclipse. With the moon blocking much of the Sun's visible radiation, Hess still measured rising radiation at rising altitudes. He concluded that "The results of the observations seem most likely to be explained by the assumption that radiation of very high penetrating power enters from above into our atmosphere." In 1913–1914, Werner Kolhörster confirmed Victor Hess's earlier results by measuring the increased ionization rate at an altitude of 9 km. Hess received the Nobel Prize in Physics in 1936 for his discovery.
Identification
Bruno Rossi wrote in 1964:
In the late 1920s and early 1930s the technique of self-recording electroscopes carried by balloons into the highest layers of the atmosphere or sunk to great depths under water was brought to an unprecedented degree of perfection by the German physicist Erich Regener and his group. To these scientists we owe some of the most accurate measurements ever made of cosmic-ray ionization as a function of altitude and depth.
Ernest Rutherford stated in 1931 that "thanks to the fine experiments of Professor Millikan and the even more far-reaching experiments of Professor Regener, we have now got for the first time, a curve of absorption of these radiations in water which we may safely rely upon".
In the 1920s, the term cosmic ray was coined by Robert Millikan who made measurements of ionization due to cosmic rays from deep under water to high altitudes and around the globe. Millikan believed that his measurements proved that the primary cosmic rays were gamma rays; i.e., energetic photons. And he proposed a theory that they were produced in interstellar space as by-products of the fusion of hydrogen atoms into the heavier elements, and that secondary electrons were produced in the atmosphere by Compton scattering of gamma rays. In 1927, while sailing from Java to the Netherlands, Jacob Clay found evidence, later confirmed in many experiments, that cosmic ray intensity increases from the tropics to mid-latitudes, which indicated that the primary cosmic rays are deflected by the geomagnetic field and must therefore be charged particles, not photons. In 1929, Bothe and Kolhörster discovered charged cosmic-ray particles that could penetrate 4.1 cm of gold. Charged particles of such high energy could not possibly be produced by photons from Millikan's proposed interstellar fusion process.
In 1930, Bruno Rossi predicted a difference between the intensities of cosmic rays arriving from the east and the west that depends upon the charge of the primary particles—the so-called "east–west effect". Three independent experiments found that the intensity is, in fact, greater from the west, proving that most primaries are positive. During the years from 1930 to 1945, a wide variety of investigations confirmed that the primary cosmic rays are mostly protons, and the secondary radiation produced in the atmosphere is primarily electrons, photons and muons. In 1948, observations with nuclear emulsions carried by balloons to near the top of the atmosphere showed that approximately 10% of the primaries are helium nuclei (alpha particles) and 1% are nuclei of heavier elements such as carbon, iron, and lead.
During a test of his equipment for measuring the east–west effect, Rossi observed that the rate of near-simultaneous discharges of two widely separated Geiger counters was larger than the expected accidental rate. In his report on the experiment, Rossi wrote "... it seems that once in a while the recording equipment is struck by very extensive showers of particles, which causes coincidences between the counters, even placed at large distances from one another." In 1937, Pierre Auger, unaware of Rossi's earlier report, detected the same phenomenon and investigated it in some detail. He concluded that high-energy primary cosmic-ray particles interact with air nuclei high in the atmosphere, initiating a cascade of secondary interactions that ultimately yield a shower of electrons and photons that reach ground level.
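Rossi's reasoning can be made quantitative: for two independent counters with individual rates R₁ and R₂ and a coincidence resolving time τ, the expected rate of purely accidental coincidences is about 2τR₁R₂. The sketch below evaluates this textbook estimate with assumed numbers, not Rossi's actual values.
<syntaxhighlight lang="python">
# Expected rate of purely accidental coincidences between two independent
# Geiger counters: R_acc = 2 * tau * R1 * R2.
# tau, r1 and r2 are illustrative assumptions, not Rossi's actual values.

tau = 1e-3  # coincidence resolving time in seconds (assumed)
r1 = 2.0    # counting rate of counter 1, counts per second (assumed)
r2 = 2.0    # counting rate of counter 2, counts per second (assumed)

accidental_rate = 2 * tau * r1 * r2  # chance coincidences per second

print(f"expected accidental coincidences: {accidental_rate:.1e} per second")
# An observed coincidence rate well above this estimate points to genuinely
# correlated events, such as an extensive air shower striking both counters.
</syntaxhighlight>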
Soviet physicist Sergei Vernov was the first to use radiosondes to perform cosmic ray readings with an instrument carried to high altitude by a balloon. On 1 April 1935, he took measurements at heights up to 13.6 kilometres using a pair of Geiger counters in an anti-coincidence circuit to avoid counting secondary ray showers.
Homi J. Bhabha derived an expression for the probability of scattering positrons by electrons, a process now known as Bhabha scattering. His classic paper, jointly with Walter Heitler, published in 1937 described how primary cosmic rays from space interact with the upper atmosphere to produce particles observed at the ground level. Bhabha and Heitler explained the cosmic ray shower formation by the cascade production of gamma rays and positive and negative electron pairs.
Energy distribution
Measurements of the energy and arrival directions of the ultra-high-energy primary cosmic rays by the techniques of density sampling and fast timing of extensive air showers were first carried out in 1954 by members of the Rossi Cosmic Ray Group at the Massachusetts Institute of Technology. The experiment employed eleven scintillation detectors arranged within a circle 460 metres in diameter on the grounds of the Agassiz Station of the Harvard College Observatory. From that work, and from many other experiments carried out all over the world, the energy spectrum of the primary cosmic rays is now known to extend beyond 10²⁰ eV. A huge air shower experiment called the Auger Project is currently operated at a site on the Pampas of Argentina by an international consortium of physicists. The project was first led by James Cronin, winner of the 1980 Nobel Prize in Physics from the University of Chicago, and Alan Watson of the University of Leeds, and later by scientists of the international Pierre Auger Collaboration. Their aim is to explore the properties and arrival directions of the very highest-energy primary cosmic rays. The results are expected to have important implications for particle physics and cosmology, due to a theoretical Greisen–Zatsepin–Kuzmin limit to the energies of cosmic rays from long distances (about 160 million light years) which occurs above 10²⁰ eV because of interactions with the remnant photons from the Big Bang origin of the universe. Currently the Pierre Auger Observatory is undergoing an upgrade to improve its accuracy and find evidence for the yet unconfirmed origin of the most energetic cosmic rays.
High-energy gamma rays (>50 MeV photons) were finally discovered in the primary cosmic radiation by an MIT experiment carried on the OSO-3 satellite in 1967. Components of both galactic and extra-galactic origins were separately identified at intensities much less than 1% of the primary charged particles. Since then, numerous satellite gamma-ray observatories have mapped the gamma-ray sky. The most recent is the Fermi Observatory, which has produced a map showing a narrow band of gamma ray intensity produced in discrete and diffuse sources in our galaxy, and numerous point-like extra-galactic sources distributed over the celestial sphere.
Modulation
The solar cycle causes variations in the magnetic field of the solar wind through which cosmic rays propagate to Earth.
This results in a modulation of the arriving fluxes at lower energies, as detected indirectly by the globally distributed neutron monitor network.
Sources
Early speculation on the sources of cosmic rays included a 1934 proposal by Baade and Zwicky suggesting cosmic rays originated from supernovae. A 1948 proposal by Horace W. Babcock suggested that magnetic variable stars could be a source of cosmic rays. Subsequently, Sekido et al. (1951) identified the Crab Nebula as a source of cosmic rays. Since then, a wide variety of potential sources for cosmic rays began to surface, including supernovae, active galactic nuclei, quasars, and gamma-ray bursts.
Later experiments have helped to identify the sources of cosmic rays with greater certainty. In 2009, a paper presented at the International Cosmic Ray Conference by scientists at the Pierre Auger Observatory in Argentina showed ultra-high energy cosmic rays originating from a location in the sky very close to the radio galaxy Centaurus A, although the authors specifically stated that further investigation would be required to confirm Centaurus A as a source of cosmic rays. However, no correlation was found between the incidence of gamma-ray bursts and cosmic rays, causing the authors to set upper limits as low as 3.4 × 10⁻⁶ erg·cm⁻² on the flux of cosmic rays from gamma-ray bursts.
In 2009, supernovae were said to have been "pinned down" as a source of cosmic rays, a discovery made by a group using data from the Very Large Telescope. This analysis, however, was disputed in 2011 with data from PAMELA, which revealed that "spectral shapes of [hydrogen and helium nuclei] are different and cannot be described well by a single power law", suggesting a more complex process of cosmic ray formation. In February 2013, though, research analyzing data from Fermi revealed through an observation of neutral pion decay that supernovae were indeed a source of cosmic rays, with each explosion producing roughly 3 × 10⁴² – 3 × 10⁴³ J of cosmic rays.
Supernovae do not produce all cosmic rays, however, and the proportion of cosmic rays that they do produce is a question which cannot be answered without deeper investigation. To explain the actual process in supernovae and active galactic nuclei that accelerates the stripped atoms, physicists use shock front acceleration as a plausibility argument.
In 2017, the Pierre Auger Collaboration published the observation of a weak anisotropy in the arrival directions of the highest energy cosmic rays. Since the Galactic Center is in the deficit region, this anisotropy can be interpreted as evidence for the extragalactic origin of cosmic rays at the highest energies. This implies that there must be a transition energy from galactic to extragalactic sources, and there may be different types of cosmic-ray sources contributing to different energy ranges.
Types
Cosmic rays can be divided into two types:
galactic cosmic rays (GCR) and extragalactic cosmic rays, i.e., high-energy particles originating outside the solar system, and
solar energetic particles, high-energy particles (predominantly protons) emitted by the sun, primarily in solar eruptions.
However, the term "cosmic ray" is often used to refer to only the extrasolar flux.
Cosmic rays originate as primary cosmic rays, which are those originally produced in various astrophysical processes. Primary cosmic rays are composed mainly of protons and alpha particles (99%), with a small amount of heavier nuclei (≈1%) and an extremely minute proportion of positrons and antiprotons. Secondary cosmic rays, caused by a decay of primary cosmic rays as they impact an atmosphere, include photons, hadrons, and leptons, such as electrons, positrons, muons, and pions. The latter three of these were first detected in cosmic rays.
Primary cosmic rays
Primary cosmic rays mostly originate from outside the Solar System and sometimes even outside the Milky Way. When they interact with Earth's atmosphere, they are converted to secondary particles. The mass ratio of helium to hydrogen nuclei, 28%, is similar to the primordial elemental abundance ratio of these elements, 24%. The remaining fraction is made up of the other heavier nuclei that are typical nucleosynthesis end products, primarily lithium, beryllium, and boron. These nuclei appear in cosmic rays in greater abundance (≈1%) than in the solar atmosphere, where they are only about 10⁻¹¹ as abundant (by number) as helium. Cosmic rays composed of charged nuclei heavier than helium are called HZE ions. Due to the high charge and heavy nature of HZE ions, their contribution to an astronaut's radiation dose in space is significant even though they are relatively scarce.
This abundance difference is a result of the way in which secondary cosmic rays are formed. Carbon and oxygen nuclei collide with interstellar matter to form lithium, beryllium, and boron, an example of cosmic ray spallation. Spallation is also responsible for the abundances of scandium, titanium, vanadium, and manganese ions in cosmic rays produced by collisions of iron and nickel nuclei with interstellar matter.
At high energies the composition changes and heavier nuclei have larger abundances in some energy ranges. Current experiments aim at more accurate measurements of the composition at high energies.
Primary cosmic ray antimatter
Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe. Rather, they appear to consist of only these two elementary particles, newly made in energetic processes.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV. These results have been suggested to be due to positron production in annihilation events of massive dark matter particles.
Cosmic ray antiprotons also have a much higher average energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1 × 10⁻⁶ for the antihelium to helium flux ratio.
Secondary cosmic rays
When cosmic rays enter the Earth's atmosphere, they collide with atoms and molecules, mainly oxygen and nitrogen. The interaction produces a cascade of lighter particles, a so-called air shower of secondary radiation that rains down, including x-rays, protons, alpha particles, pions, muons, electrons, neutrinos, and neutrons. All of the secondary particles produced by the collision continue onward on paths within about one degree of the primary particle's original path.
Typical particles produced in such collisions are neutrons and charged mesons such as positive or negative pions and kaons. Some of these subsequently decay into muons and neutrinos, which are able to reach the surface of the Earth. Some high-energy muons even penetrate for some distance into shallow mines, and most neutrinos traverse the Earth without further interaction. Others decay into photons, subsequently producing electromagnetic cascades. Hence, next to photons, electrons and positrons usually dominate in air showers. These particles as well as muons can be easily detected by many types of particle detectors, such as cloud chambers, bubble chambers, water-Cherenkov, or scintillation detectors. The observation of a secondary shower of particles in multiple detectors at the same time is an indication that all of the particles came from that event.
Cosmic rays impacting other planetary bodies in the Solar System are detected indirectly by observing high-energy gamma ray emissions with gamma-ray telescopes. These are distinguished from radioactive decay processes by their higher energies above about 10 MeV.
Cosmic-ray flux
The flux of incoming cosmic rays at the upper atmosphere is dependent on the solar wind, the Earth's magnetic field, and the energy of the cosmic rays. At distances of ≈94 AU from the Sun, the solar wind undergoes a transition, called the termination shock, from supersonic to subsonic speeds. The region between the termination shock and the heliopause acts as a barrier to cosmic rays, decreasing the flux at lower energies (≤ 1 GeV) by about 90%. However, the strength of the solar wind is not constant, and hence it has been observed that cosmic ray flux is correlated with solar activity.
In addition, the Earth's magnetic field acts to deflect cosmic rays from its surface, giving rise to the observation that the flux is apparently dependent on latitude, longitude, and azimuth angle.
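One standard way to quantify this latitude dependence is the Störmer dipole approximation for the vertical geomagnetic cutoff rigidity, R_c ≈ 14.9 cos⁴λ GV at geomagnetic latitude λ. The sketch below evaluates this textbook formula; since Earth's field is not a perfect dipole, actual cutoffs deviate from these values.
<syntaxhighlight lang="python">
import math

def vertical_cutoff_gv(geomagnetic_latitude_deg: float) -> float:
    """Stormer dipole approximation of the vertical cutoff rigidity in GV.

    Particles with rigidity below the cutoff cannot reach the ground
    vertically at that geomagnetic latitude. The constant 14.9 GV is a
    textbook value tied to Earth's dipole moment.
    """
    lam = math.radians(geomagnetic_latitude_deg)
    return 14.9 * math.cos(lam) ** 4

for lat in (0, 30, 60, 90):
    print(f"geomagnetic latitude {lat:2d} deg -> cutoff ~ {vertical_cutoff_gv(lat):5.2f} GV")
# Near the equator only particles above ~15 GV arrive vertically, while at
# the poles even low-rigidity particles get through: the latitude effect.
</syntaxhighlight>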
The combined effects of all of the factors mentioned contribute to the flux of cosmic rays at Earth's surface. The following table shows the approximate rates at which primary cosmic rays of various energies reach the planet, as inferred from lower-energy radiation reaching the ground.
{| class="wikitable"
|+Relative particle energies and rates of cosmic rays
|-
!scope="col"| Particle energy (eV)
!scope="col"| Particle rate (ms)
|-
!scope="row"| (GeV)
|
|-
!scope="row"| (TeV)
| 1
|-
!scope="row"| (10 PeV)
| (a few times a year)
|-
!scope="row"| (100 EeV)
| (once a century)
|-
|}
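Between the tabulated points, the rate falls steeply and roughly as a power law in energy; a log-log interpolation of the order-of-magnitude entries above gives a rough rate at intermediate energies. The sketch below uses only the table's values and is not a fitted spectrum.
<syntaxhighlight lang="python">
import math

# Order-of-magnitude entries from the table above.
energies_ev = [1e9, 1e12, 1e16, 1e20]
rates_m2_s = [1e4, 1.0, 1e-7, 1e-15]

def interpolated_rate(e_ev: float) -> float:
    """Piecewise power-law (log-log linear) interpolation of the table."""
    for i in range(len(energies_ev) - 1):
        e1, e2 = energies_ev[i], energies_ev[i + 1]
        r1, r2 = rates_m2_s[i], rates_m2_s[i + 1]
        if e1 <= e_ev <= e2:
            slope = math.log10(r2 / r1) / math.log10(e2 / e1)
            return r1 * (e_ev / e1) ** slope
    raise ValueError("energy outside tabulated range")

# Roughly 3e-4 particles per square metre per second at 100 TeV:
print(f"{interpolated_rate(1e14):.1e} particles/m^2/s")
</syntaxhighlight>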
In the past, it was believed that the cosmic ray flux remained fairly constant over time. However, recent research suggests one-and-a-half- to two-fold millennium-timescale changes in the cosmic ray flux in the past forty thousand years.
The magnitude of the energy of cosmic ray flux in interstellar space is very comparable to that of other deep space energies: cosmic ray energy density averages about one electron-volt per cubic centimetre of interstellar space, or ≈1 eV/cm³, which is comparable to the energy density of visible starlight at 0.3 eV/cm³, the galactic magnetic field energy density (assumed 3 microgauss) which is ≈0.25 eV/cm³, or the cosmic microwave background (CMB) radiation energy density at ≈0.25 eV/cm³.
Detection methods
There are two main classes of detection methods. First, the direct detection of the primary cosmic rays in space or at high altitude by balloon-borne instruments. Second, the indirect detection of secondary particles, i.e., extensive air showers at higher energies. While there have been proposals and prototypes for space and balloon-borne detection of air showers, currently operating experiments for high-energy cosmic rays are ground based. Generally direct detection is more accurate than indirect detection. However the flux of cosmic rays decreases with energy, which hampers direct detection for the energy range above 1 PeV. Both direct and indirect detection are realized by several techniques.
Direct detection
Direct detection is possible by all kinds of particle detectors at the ISS, on satellites, or high-altitude balloons. However, there are constraints in weight and size limiting the choices of detectors.
An example for the direct detection technique is a method based on nuclear tracks developed by Robert Fleischer, P. Buford Price, and Robert M. Walker for use in high-altitude balloons. In this method, sheets of clear plastic, like 0.25 mm Lexan polycarbonate, are stacked together and exposed directly to cosmic rays in space or high altitude. The nuclear charge causes chemical bond breaking or ionization in the plastic. At the top of the plastic stack the ionization is less, due to the high cosmic ray speed. As the cosmic ray speed decreases due to deceleration in the stack, the ionization increases along the path. The resulting plastic sheets are "etched" or slowly dissolved in warm caustic sodium hydroxide solution, that removes the surface material at a slow, known rate. The caustic sodium hydroxide dissolves the plastic at a faster rate along the path of the ionized plastic. The net result is a conical etch pit in the plastic. The etch pits are measured under a high-power microscope (typically 1600× oil-immersion), and the etch rate is plotted as a function of the depth in the stacked plastic.
This technique yields a unique curve for each atomic nucleus from 1 to 92, allowing identification of both the charge and energy of the cosmic ray that traverses the plastic stack. The more extensive the ionization along the path, the higher the charge. In addition to its uses for cosmic-ray detection, the technique is also used to detect nuclei created as products of nuclear fission.
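The charge identification works because the ionization density along the track scales roughly as Z²/β² (the leading Bethe-like behaviour), so ionization, and with it the etch rate, rises as a nucleus decelerates in the stack. A minimal numerical illustration, with purely illustrative velocities:
<syntaxhighlight lang="python">
def relative_ionization(z: int, beta: float) -> float:
    """Ionization density relative to a beta = 1, Z = 1 particle.

    The Z**2 / beta**2 scaling is only the leading behaviour; real
    stopping power includes additional, slowly varying terms.
    """
    return z ** 2 / beta ** 2

# An iron nucleus (Z = 26) ionizes ever more strongly as it slows in the
# plastic stack, so the etch rate grows along the track.
for beta in (0.9, 0.7, 0.5):  # illustrative velocities
    print(f"Fe at beta = {beta}: {relative_ionization(26, beta):6.0f}x a fast proton")
</syntaxhighlight>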
Indirect detection
There are several ground-based methods of detecting cosmic rays currently in use, which can be divided in two main categories: the detection of secondary particles forming extensive air showers (EAS) by various types of particle detectors, and the detection of electromagnetic radiation emitted by EAS in the atmosphere.
Extensive air shower arrays made of particle detectors measure the charged particles which pass through them. EAS arrays can observe a broad area of the sky and can be active more than 90% of the time. However, they are less able to segregate background effects from cosmic rays than can air Cherenkov telescopes. Most state-of-the-art EAS arrays employ plastic scintillators. Also water (liquid or frozen) is used as a detection medium through which particles pass and produce Cherenkov radiation to make them detectable. Therefore, several arrays use water/ice-Cherenkov detectors as alternative or in addition to scintillators.
By the combination of several detectors, some EAS arrays have the capability to distinguish muons from lighter secondary particles (photons, electrons, positrons). The fraction of muons among the secondary particles is one traditional way to estimate the mass composition of the primary cosmic rays.
An historic method of secondary particle detection still used for demonstration purposes involves the use of cloud chambers to detect the secondary muons created when a pion decays. Cloud chambers in particular can be built from widely available materials and can be constructed even in a high-school laboratory. Another method, involving bubble chambers, can be used to detect cosmic ray particles.
More recently, the CMOS devices in pervasive smartphone cameras have been proposed as a practical distributed network to detect air showers from ultra-high-energy cosmic rays. The first app to exploit this proposition was the CRAYFIS (Cosmic RAYs Found in Smartphones) experiment. In 2017, the CREDO (Cosmic-Ray Extremely Distributed Observatory) Collaboration released the first version of its completely open source app for Android devices. Since then the collaboration has attracted the interest and support of many scientific institutions, educational institutions, and members of the public around the world. Future research has to show in what aspects this new technique can compete with dedicated EAS arrays.
The first detection method in the second category is called the air Cherenkov telescope, designed to detect low-energy (<200 GeV) cosmic rays, chiefly gamma rays, by analyzing the Cherenkov radiation that their secondary shower particles emit as they travel faster than the speed of light in their medium, the atmosphere. While these telescopes are extremely good at distinguishing between background radiation and that of cosmic-ray origin, they can only function well on clear nights without the Moon shining, have very small fields of view, and are only active for a few percent of the time.
A second method detects the light from nitrogen fluorescence caused by the excitation of nitrogen in the atmosphere by particles moving through the atmosphere. This method is the most accurate for cosmic rays at highest energies, in particular when combined with EAS arrays of particle detectors. Similar to the detection of Cherenkov-light, this method is restricted to clear nights.
Another method detects radio waves emitted by air showers. This technique has a high duty cycle similar to that of particle detectors. The accuracy of this technique was improved in the last years as shown by various prototype experiments, and may become an alternative to the detection of atmospheric Cherenkov-light and fluorescence light, at least at high energies.
Effects
Changes in atmospheric chemistry
Cosmic rays ionize nitrogen and oxygen molecules in the atmosphere, which leads to a number of chemical reactions. Cosmic rays are also responsible for the continuous production of a number of unstable isotopes, such as carbon-14, in the Earth's atmosphere through the reaction:

n + ¹⁴N → p + ¹⁴C
Cosmic rays kept the level of carbon-14 in the atmosphere roughly constant (70 tons) for at least the past 100,000 years, until the beginning of above-ground nuclear weapons testing in the early 1950s. This fact is used in radiocarbon dating.
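The dating calculation this enables is simple exponential decay: given the fraction of the original carbon-14 remaining in a sample, its age follows from the roughly 5,730-year half-life. A minimal sketch:
<syntaxhighlight lang="python">
import math

HALF_LIFE_YEARS = 5730.0  # half-life of carbon-14

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years from the surviving fraction of the original carbon-14."""
    return HALF_LIFE_YEARS / math.log(2) * math.log(1.0 / remaining_fraction)

print(f"{radiocarbon_age(0.5):.0f} years")   # one half-life  -> 5730
print(f"{radiocarbon_age(0.25):.0f} years")  # two half-lives -> 11460
</syntaxhighlight>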
Role in ambient radiation
Cosmic rays constitute a fraction of the annual radiation exposure of human beings on the Earth, averaging 0.39 mSv out of a total of 3 mSv per year (13% of total background) for the Earth's population. However, the background radiation from cosmic rays increases with altitude, from 0.3 mSv per year for sea-level areas to 1.0 mSv per year for higher-altitude cities, raising cosmic radiation exposure to a quarter of total background radiation exposure for populations of said cities. Airline crews flying long-distance high-altitude routes can be exposed to 2.2 mSv of extra radiation each year due to cosmic rays, nearly doubling their total exposure to ionizing radiation.
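A quick arithmetic check, using only the averages quoted above (individual exposure varies with altitude, latitude, and flight hours), reproduces the stated shares:
<syntaxhighlight lang="python">
# Averages quoted above, in millisieverts per year.
sea_level_cosmic = 0.39     # average cosmic component of total exposure
high_altitude_cosmic = 1.0  # cosmic component in high-altitude cities
total_background = 3.0      # average total background
aircrew_extra = 2.2         # extra exposure for long-haul aircrew

print(f"cosmic share at sea level: {sea_level_cosmic / total_background:.0%}")
high_altitude_total = total_background - sea_level_cosmic + high_altitude_cosmic
print(f"cosmic share at high altitude: {high_altitude_cosmic / high_altitude_total:.0%}")
print(f"aircrew total: ~{total_background + aircrew_extra:.1f} mSv/yr")
</syntaxhighlight>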
Effect on electronics
Cosmic rays have sufficient energy to alter the states of circuit components in electronic integrated circuits, causing transient errors to occur (such as corrupted data in electronic memory devices or incorrect performance of CPUs), often referred to as "soft errors". This has been a problem in electronics at extremely high altitude, such as in satellites, but with transistors becoming smaller and smaller, this is becoming an increasing concern in ground-level electronics as well. Studies by IBM in the 1990s suggest that computers typically experience about one cosmic-ray-induced error per 256 megabytes of RAM per month. To alleviate this problem, the Intel Corporation has proposed a cosmic ray detector that could be integrated into future high-density microprocessors, allowing the processor to repeat the last command following a cosmic-ray event. ECC memory is used to protect data against data corruption caused by cosmic rays.
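The IBM figure can be turned into an expected error count by treating upsets as a Poisson process. The sketch below does this for a hypothetical 64 GB machine; the rate and memory size are illustrative assumptions, and modern per-bit rates vary with process technology and altitude.
<syntaxhighlight lang="python">
import math

ERRORS_PER_MB_MONTH = 1.0 / 256.0  # 1990s IBM rule of thumb (assumed rate)

def expected_errors(ram_mb: float, months: float) -> float:
    """Expected cosmic-ray-induced soft errors over the given period."""
    return ERRORS_PER_MB_MONTH * ram_mb * months

ram_mb = 64 * 1024  # a hypothetical 64 GB server
mu = expected_errors(ram_mb, 1.0)     # expected upsets in one month
p_at_least_one = 1.0 - math.exp(-mu)  # Poisson probability of >= 1 upset
print(f"expected errors/month: {mu:.0f}, P(>=1 error) = {p_at_least_one:.3f}")
# With hundreds of expected upsets per month at this scale, unprotected
# memory is impractical, which is why ECC memory is used.
</syntaxhighlight>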
In 2008, data corruption in a flight control system caused an Airbus A330 airliner to twice plunge hundreds of feet, resulting in injuries to multiple passengers and crew members. Cosmic rays were investigated among other possible causes of the data corruption, but were ultimately ruled out as being very unlikely.
In August 2020, scientists reported that ionizing radiation from environmental radioactive materials and cosmic rays may substantially limit the coherence times of qubits if they are not shielded adequately which may be critical for realizing fault-tolerant superconducting quantum computers in the future.
Significance to aerospace travel
Galactic cosmic rays are one of the most important barriers standing in the way of plans for interplanetary travel by crewed spacecraft. Cosmic rays also pose a threat to electronics placed aboard outgoing probes. In 2010, a malfunction aboard the Voyager 2 space probe was credited to a single flipped bit, probably caused by a cosmic ray. Strategies such as physical or magnetic shielding for spacecraft have been considered in order to minimize the damage to electronics and human beings caused by cosmic rays.
On 31 May 2013, NASA scientists reported that a possible crewed mission to Mars may involve a greater radiation risk than previously believed, based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011–2012.
Flying high, passengers and crews of jet airliners are exposed to at least 10 times the cosmic ray dose that people at sea level receive. Aircraft flying polar routes near the geomagnetic poles are at particular risk.
Role in lightning
Cosmic rays have been implicated in the triggering of electrical breakdown in lightning. It has been proposed that essentially all lightning is triggered through a relativistic process, or "runaway breakdown", seeded by cosmic ray secondaries. Subsequent development of the lightning discharge then occurs through "conventional breakdown" mechanisms.
Postulated role in climate change
A role for cosmic rays in climate was suggested by Edward P. Ney in 1959 and by Robert E. Dickinson in 1975. It has been postulated that cosmic rays may have been responsible for major climatic change and mass extinction in the past. According to Adrian Mellott and Mikhail Medvedev, 62-million-year cycles in biological marine populations correlate with the motion of the Earth relative to the galactic plane and increases in exposure to cosmic rays. The researchers suggest that this and gamma ray bombardments deriving from local supernovae could have affected cancer and mutation rates, and might be linked to decisive alterations in the Earth's climate, and to the mass extinctions of the Ordovician.
Danish physicist Henrik Svensmark has controversially argued that because solar variation modulates the cosmic ray flux on Earth, it would consequently affect the rate of cloud formation and hence be an indirect cause of global warming. Svensmark is one of several scientists outspokenly opposed to the mainstream scientific assessment of global warming, leading to concerns that the proposition that cosmic rays are connected to global warming could be ideologically biased rather than scientifically based. Other scientists have vigorously criticized Svensmark for sloppy and inconsistent work: one example is adjustment of cloud data that understates error in lower cloud data, but not in high cloud data; another example is "incorrect handling of the physical data" resulting in graphs that do not show the correlations they claim to show. Despite Svensmark's assertions, galactic cosmic rays have shown no statistically significant influence on changes in cloud cover, and have been demonstrated in studies to have no causal relationship to changes in global temperature.
Possible mass extinction factor
A handful of studies conclude that a nearby supernova or series of supernovas caused the Pliocene marine megafauna extinction event by substantially increasing radiation levels to hazardous amounts for large seafaring animals.
Research and experiments
There are a number of cosmic-ray research initiatives, listed below.
Ground-based
Akeno Giant Air Shower Array
Chicago Air Shower Array
CHICOS
CLOUD
CRIPT
GAMMA
GRAPES-3
HAWC
HEGRA
High Energy Stereoscopic System
High Resolution Fly's Eye Cosmic Ray Detector
IceCube
KASCADE
MAGIC
MARIACHI
Milagro
NMDB
Pierre Auger Observatory
QuarkNet
Spaceship Earth
Telescope Array Project
Tunka experiment
VERITAS
Washington Large Area Time Coincidence Array
Satellite
ACE (Advanced Composition Explorer)
Alpha Magnetic Spectrometer
Cassini–Huygens
Fermi Gamma-ray Space Telescope
HEAO 1, HEAO 2, HEAO 3
Interstellar Boundary Explorer
Langton Ultimate Cosmic-Ray Intensity Detector
PAMELA
Solar and Heliospheric Observatory
Voyager 1 and Voyager 2
Balloon-borne
Advanced Thin Ionization Calorimeter
BESS
Cosmic Ray Energetics and Mass (CREAM)
HEAT (High Energy Antimatter Telescope)
PERDaix
TIGER
TRACER (cosmic ray detector)
See also
References
Further references
R.G. Harrison and D.B. Stephenson, Detection of a galactic cosmic ray influence on clouds, Geophysical Research Abstracts, Vol. 8, 07661, 2006 SRef-ID: 1607-7962/gra/EGU06-A-07661
R. Clay and B. Dawson, Cosmic Bullets, Allen & Unwin, 1997.
T. K. Gaisser, Cosmic Rays and Particle Physics, Cambridge University Press, 1990.
P. K. F. Grieder, Cosmic Rays at Earth: Researcher's Reference Manual and Data Book, Elsevier, 2001.
A. M. Hillas, Cosmic Rays, Pergamon Press, Oxford, 1972
M. D. Ngobeni and M. S. Potgieter, Cosmic ray anisotropies in the outer heliosphere, Advances in Space Research, 2007.
M. D. Ngobeni, Aspects of the modulation of cosmic rays in the outer heliosphere, MSc Dissertation, Northwest University (Potchefstroom campus) South Africa 2006.
D. Perkins, Particle Astrophysics, Oxford University Press, 2003.
C. E. Rolfs and S. R. William, Cauldrons in the Cosmos, The University of Chicago Press, 1988.
B. B. Rossi, Cosmic Rays, McGraw-Hill, New York, 1964.
Martin Walt, Introduction to Geomagnetically Trapped Radiation, 1994.
TRACER Long Duration Balloon Project: the largest cosmic ray detector launched on balloons.
External links
Aspera European network portal
BBC news, Cosmic rays find uranium, 2003.
Introduction to Cosmic Ray Showers by Konrad Bernlöhr.
Astroparticle physics
Ionizing radiation
Stellar phenomena
Solar phenomena
Concepts in astronomy
1912 in science | Cosmic ray | [
"Physics",
"Astronomy"
] | 8,043 | [
"Ionizing radiation",
"Physical phenomena",
"Concepts in astronomy",
"Astroparticle physics",
"Astrophysics",
"Radiation",
"Particle physics",
"Solar phenomena",
"Stellar phenomena",
"Cosmic rays"
] |
47,700 | https://en.wikipedia.org/wiki/Coral | Corals are colonial marine invertebrates within the subphylum Anthozoa of the phylum Cnidaria. They typically form compact colonies of many identical individual polyps. Coral species include the important reef builders that inhabit tropical oceans and secrete calcium carbonate to form a hard skeleton.
A coral "group" is a colony of very many genetically identical polyps. Each polyp is a sac-like animal typically only a few millimeters in diameter and a few centimeters in height. A set of tentacles surround a central mouth opening. Each polyp excretes an exoskeleton near the base. Over many generations, the colony thus creates a skeleton characteristic of the species which can measure up to several meters in size. Individual colonies grow by asexual reproduction of polyps. Corals also breed sexually by spawning: polyps of the same species release gametes simultaneously overnight, often around a full moon. Fertilized eggs form planulae, a mobile early form of the coral polyp which, when mature, settles to form a new colony.
Although some corals are able to catch plankton and small fish using stinging cells on their tentacles, most corals obtain the majority of their energy and nutrients from photosynthetic unicellular dinoflagellates of the genus Symbiodinium that live within their tissues. These are commonly known as zooxanthellae and give the coral color. Such corals require sunlight and grow in clear, shallow water, typically at depths less than 60 metres (200 ft), but corals in the genus Leptoseris have been found as deep as 172 metres (564 ft). Corals are major contributors to the physical structure of the coral reefs that develop in tropical and subtropical waters, such as the Great Barrier Reef off the coast of Australia. These corals are increasingly at risk of bleaching events where polyps expel the zooxanthellae in response to stress such as high water temperature or toxins.
Other corals do not rely on zooxanthellae and can live globally in much deeper water, such as the cold-water genus Lophelia which can survive as deep as 3,300 metres (10,800 ft). Some have been found as far north as the Darwin Mounds, northwest of Cape Wrath, Scotland, and others off the coast of Washington state and the Aleutian Islands.
Taxonomy
The classification of corals has been discussed for millennia, owing to having similarities to both plants and animals. Aristotle's pupil Theophrastus described the red coral, korallion, in his book on stones, implying it was a mineral, but he described it as a deep-sea plant in his Enquiries on Plants, where he also mentions large stony plants that reveal bright flowers when under water in the Gulf of Heroes. Pliny the Elder stated boldly that several sea creatures including sea nettles and sponges "are neither animals nor plants, but are possessed of a third nature (tertia natura)". Petrus Gyllius copied Pliny, introducing the term zoophyta for this third group in his 1535 book On the French and Latin Names of the Fishes of the Marseilles Region; it is popularly but wrongly supposed that Aristotle created the term. Gyllius further noted, following Aristotle, how hard it was to define what was a plant and what was an animal. The Babylonian Talmud refers to coral among a list of types of trees, and the 11th-century French commentator Rashi describes it as "a type of tree (מין עץ) that grows underwater that goes by the (French) name 'coral'."
The Persian polymath Al-Biruni (d.1048) classified sponges and corals as animals, arguing that they respond to touch. Nevertheless, people believed corals to be plants until the eighteenth century when William Herschel used a microscope to establish that coral had the characteristic thin cell membranes of an animal.
Presently, corals are classified as species of animals within the sub-classes Hexacorallia and Octocorallia of the class Anthozoa in the phylum Cnidaria. Hexacorallia includes the stony corals and these groups have polyps that generally have a 6-fold symmetry. Octocorallia includes blue coral and soft corals and species of Octocorallia have polyps with an eightfold symmetry, each polyp having eight tentacles and eight mesenteries. The group of corals is paraphyletic because the sea anemones are also in the sub-class Hexacorallia.
Systematics
The delineation of coral species is challenging, as hypotheses based on morphological traits contradict hypotheses formed via molecular tree-based processes. As of 2020, there are 2,175 identified coral species, 237 of which are currently endangered, making the accurate delineation of coral species critically important to efforts to curb extinction. Adaptation and delineation continue to occur in coral species to combat the dangers posed by the climate crisis. Corals are colonial modular organisms formed by asexually produced and genetically identical modules called polyps. Polyps are connected by living tissue to produce the full organism. The living tissue allows for inter-module communication (interaction between each polyp), which appears in colony morphologies produced by corals, and is one of the main identifying characteristics for a species of coral.
There are two main classifications for corals: hard coral (scleractinian and stony coral), which form reefs from a calcium carbonate base and have polyps that bear six stiff tentacles, and soft coral (Alcyonacea and ahermatypic coral), which are pliable and formed by a colony of polyps with eight feather-like tentacles. These two classifications arose from differentiation in gene expression in their branch tips and bases that arose through developmental signaling pathways such as Hox, Hedgehog, Wnt, BMP, etc.
Scientists typically select Acropora as research models since they are the most diverse genus of hard coral, having over 120 species. Most species within this genus have polyps which are dimorphic: axial polyps grow rapidly and have lighter coloration, while radial polyps are small and are darker in coloration. In the Acropora genus, gamete synthesis and photosynthesis occur at the basal polyps, and growth occurs mainly at the radial polyps. Growth at the site of the radial polyps encompasses two processes: asexual reproduction via mitotic cell proliferation, and skeleton deposition of the calcium carbonate via extracellular matrix (ECM) proteins acting as differentially expressed (DE) signaling genes between both branch tips and bases. These processes lead to colony differentiation, which is the most accurate distinguisher between coral species. In the Acropora genus, colony differentiation arises through up-regulation and down-regulation of DEs.
Systematic studies of soft coral species have faced challenges due to a lack of taxonomic knowledge. Researchers have not found enough variability within the genus to confidently delineate similar species, due to a low rate in mutation of mitochondrial DNA.
Environmental factors, such as rising temperatures and acidity in the oceans, account for some loss of coral species. Various coral species have heat shock proteins (HSPs) that are also in the category of DEs across species. These HSPs help corals combat the increased temperatures they face, which lead to protein denaturing, growth loss, and eventually coral death. Approximately 33% of coral species are on the International Union for Conservation of Nature's endangered species list and at risk of species loss. Ocean acidification (falling pH levels in the oceans) is threatening the continued species growth and differentiation of corals. Mutation rates of Vibrio shilonii, the reef pathogen responsible for coral bleaching, heavily outweigh the typical reproduction rates of coral colonies when pH levels fall. Thus, corals are unable to mutate their HSPs and other protective genes to combat the increase in temperature and decrease in pH at a rate competitive with the pathogens responsible for coral bleaching, resulting in species loss.
Anatomy
For most of their life corals are sessile animals of colonies of genetically identical polyps. Each polyp varies from millimeters to centimeters in diameter, and colonies can be formed from many millions of individual polyps. The polyps of stony coral, also known as hard coral, produce a skeleton composed of calcium carbonate to strengthen and protect the organism. This is deposited by the polyps and by the coenosarc, the living tissue that connects them. The polyps sit in cup-shaped depressions in the skeleton known as corallites. Colonies of stony coral are markedly variable in appearance; a single species may adopt an encrusting, plate-like, bushy, columnar or massive solid structure, the various forms often being linked to different types of habitat, with variations in light level and water movement being significant.
The body of the polyp may be roughly compared in a structure to a sac, the wall of which is composed of two layers of cells. The outer layer is known technically as the ectoderm, the inner layer as the endoderm. Between ectoderm and endoderm is a supporting layer of gelatinous substance termed mesoglea, secreted by the cell layers of the body wall. The mesoglea can contain skeletal elements derived from cells migrated from the ectoderm.
The sac-like body built up in this way is attached to a hard surface, which in hard corals are cup-shaped depressions in the skeleton known as corallites. At the center of the upper end of the sac lies the only opening called the mouth, surrounded by a circle of tentacles which resemble glove fingers. The tentacles are organs which serve both for tactile sense and for the capture of food. Polyps extend their tentacles, particularly at night; the tentacles often contain coiled stinging cells (cnidocytes) which pierce, poison, and firmly hold living prey, paralyzing or killing them. Polyp prey includes plankton such as copepods and fish larvae. Longitudinal muscular fibers formed from the cells of the ectoderm allow tentacles to contract to convey the food to the mouth. Similarly, circularly disposed muscular fibres formed from the endoderm permit tentacles to be protracted or thrust out once they are contracted. In both stony and soft corals, the polyps can be retracted by contracting muscle fibres, with stony corals relying on their hard skeleton and cnidocytes for defense. Soft corals generally secrete terpenoid toxins to ward off predators.
In most corals, the tentacles are retracted by day and spread out at night to catch plankton and other small organisms. Shallow-water species of both stony and soft corals can be zooxanthellate, the corals supplementing their plankton diet with the products of photosynthesis produced by these symbionts. The polyps interconnect by a complex and well-developed system of gastrovascular canals, allowing significant sharing of nutrients and symbionts.
The external form of the polyp varies greatly. The column may be long and slender, or may be so short in the axial direction that the body becomes disk-like. The tentacles may number many hundreds or may be very few, in rare cases only one or two. They may be simple and unbranched, or feathery in pattern. The mouth may be level with the surface of the peristome, or may be projecting and trumpet-shaped.
Soft corals
Soft corals have no solid exoskeleton as such. However, their tissues are often reinforced by small supportive elements known as sclerites made of calcium carbonate. The polyps of soft corals have eight-fold symmetry, which is reflected in the Octo in Octocorallia.
Soft corals vary considerably in form, and most are colonial. A few soft corals are stolonate, but the polyps of most are connected by sheets of tissue called coenosarc, and in some species these sheets are thick and the polyps deeply embedded in them. Some soft corals encrust other sea objects or form lobes. Others are tree-like or whip-like and have a central axial skeleton embedded at their base in the matrix of the supporting branch. These branches are composed of a fibrous protein called gorgonin or of a calcified material.
Stony corals
The polyps of stony corals have six-fold symmetry. In stony corals, the tentacles are cylindrical and taper to a point, but in soft corals they are pinnate with side branches known as pinnules. In some tropical species, these are reduced to mere stubs and in some, they are fused to give a paddle-like appearance.
Coral skeletons are biocomposites (mineral + organics) of calcium carbonate, in the form of calcite or aragonite. In scleractinian corals, "centers of calcification" and fibers are clearly distinct structures differing with respect to both morphology and chemical composition of the crystalline units. The organic matrices extracted from diverse species are acidic, and comprise proteins, sulphated sugars and lipids; they are species specific. The soluble organic matrices of the skeletons make it possible to differentiate zooxanthellate and non-zooxanthellate specimens.
Ecology
Feeding
Polyps feed on a variety of small organisms, from microscopic zooplankton to small fish. The polyp's tentacles immobilize or kill prey using stinging cells called nematocysts. These cells carry venom which they rapidly release in response to contact with another organism. A dormant nematocyst discharges in response to nearby prey touching the trigger (Cnidocil). A flap (operculum) opens and its stinging apparatus fires the barb into the prey. The venom is injected through the hollow filament to immobilise the prey; the tentacles then manoeuvre the prey into the stomach. Once the prey is digested the stomach reopens allowing the elimination of waste products and the beginning of the next hunting cycle.
Intracellular symbionts
Many corals, as well as other cnidarian groups such as sea anemones form a symbiotic relationship with a class of dinoflagellate algae, zooxanthellae of the genus Symbiodinium, which can form as much as 30% of the tissue of a polyp. Typically, each polyp harbors one species of alga, and coral species show a preference for Symbiodinium. Young corals are not born with zooxanthellae, but acquire the algae from the surrounding environment, including the water column and local sediment. The main benefit of the zooxanthellae is their ability to photosynthesize which supplies corals with the products of photosynthesis, including glucose, glycerol, also amino acids, which the corals can use for energy. Zooxanthellae also benefit corals by aiding in calcification, for the coral skeleton, and waste removal. In addition to the soft tissue, microbiomes are also found in the coral's mucus and (in stony corals) the skeleton, with the latter showing the greatest microbial richness.
The zooxanthellae benefit from a safe place to live and consume the polyp's carbon dioxide, phosphate and nitrogenous waste. Stressed corals will eject their zooxanthellae, a process that is becoming increasingly common due to strain placed on coral by rising ocean temperatures. Mass ejections are known as coral bleaching because the algae contribute to coral coloration; some colors, however, are due to host coral pigments, such as green fluorescent proteins (GFPs). Ejection increases the polyp's chance of surviving short-term stress and if the stress subsides they can regain algae, possibly of a different species, at a later time. If the stressful conditions persist, the polyp eventually dies. Zooxanthellae are located within the coral cytoplasm and due to the algae's photosynthetic activity the internal pH of the coral can be raised; this behavior indicates that the zooxanthellae are responsible to some extent for the metabolism of their host corals. Stony Coral Tissue Loss Disease has been associated with the breakdown of host-zooxanthellae physiology. Moreover, Vibrio bacteria are known to have virulence traits used for host coral tissue damage and photoinhibition of algal symbionts. Therefore, both coral and their symbiotic microorganisms could have evolved to harbour traits resistant to disease and transmission.
Reproduction
Corals can be both gonochoristic (unisexual) and hermaphroditic, each of which can reproduce sexually and asexually. Reproduction also allows coral to settle in new areas. Reproduction is coordinated by chemical communication.
Sexual
Corals predominantly reproduce sexually. About 25% of hermatypic corals (reef-building stony corals) form single-sex (gonochoristic) colonies, while the rest are hermaphroditic. It is estimated more than 67% of coral are simultaneous hermaphrodites.
Broadcasters
About 75% of all hermatypic corals "broadcast spawn" by releasing gametes—eggs and sperm—into the water where they meet and fertilize to spread offspring. Corals often synchronize their time of spawning. This reproductive synchrony is essential so that male and female gametes can meet. Spawning frequently takes place in the evening or at night, and can occur as infrequently as once a year, and within a window of 10–30 minutes.
Synchronous spawning is very typical on the coral reef, and often, all corals spawn on the same night even when multiple species are present. Synchronous spawning may form hybrids and is perhaps involved in coral speciation.
Environmental cues that influence the release of gametes into the water vary from species to species. The cues involve temperature change, lunar cycle, day length, and possibly chemical signalling.
Other factors that affect the rhythmicity of organisms in marine habitats include salinity, mechanical forces, and pressure or magnetic field changes.
Mass coral spawning often occurs at night on days following a full moon. A full moon is equivalent to four to six hours of continuous dim light exposure, which can cause light-dependent reactions in protein. Corals contain light-sensitive cryptochromes, proteins whose light-absorbing flavin structures are sensitive to different types of light. This allows corals such as Dipsastraea speciosa to detect and respond to changes in sunlight and moonlight.
Moonlight itself may actually suppress coral spawning. The most immediate cue to cause spawning appears to be the dark portion of the night between sunset and moonrise.
Over the lunar cycle, moonrise shifts progressively later, occurring after sunset on the day of the full moon. The resulting dark period between day-light and night-light removes the suppressive effect of moonlight and enables coral to spawn.
The spawning event can be visually dramatic, clouding the usually clear water with gametes. Once released, gametes fertilize at the water's surface and form a microscopic larva called a planula, typically pink and elliptical in shape. A typical coral colony needs to release several thousand larvae per year to overcome the odds against formation of a new colony.
Studies suggest that light pollution desynchronizes spawning in some coral species.
In areas such as the Red Sea, as many as 10 out of 50 species may be showing spawning asynchrony, compared to 30 years ago. The establishment of new corals in the area has decreased and in some cases ceased. The area was previously considered a refuge for corals because mass bleaching events due to climate change had not been observed there. Coral restoration techniques for coral reef management are being developed to increase fertilization rates, larval development, and settlement of new corals.
Brooders
Brooding species are most often ahermatypic (not reef-building) in areas of high current or wave action. Brooders release only sperm, which is negatively buoyant, sinking onto the waiting egg carriers that harbor unfertilized eggs for weeks. Synchronous spawning events sometimes occur even with these species. After fertilization, the corals release planula that are ready to settle.
Planulae
The time from spawning to larval settlement is usually two to three days but can occur immediately or up to two months. Broadcast-spawned planula larvae develop at the water's surface before descending to seek a hard surface on the benthos to which they can attach and begin a new colony. The larvae often need a biological cue to induce settlement, such as specific crustose coralline algae species or microbial biofilms. High failure rates afflict many stages of this process, and even though thousands of eggs are released by each colony, few new colonies form. During settlement, larvae are inhibited by physical barriers such as sediment, as well as chemical (allelopathic) barriers. The larvae metamorphose into a single polyp and eventually develop into a juvenile and then an adult by asexual budding and growth.
Asexual
Within a coral head, the genetically identical polyps reproduce asexually, either by budding (gemmation) or by dividing, whether longitudinally or transversely.
Budding involves splitting a smaller polyp from an adult. As the new polyp grows, it forms its body parts. The distance between the new and adult polyps grows, and with it, the coenosarc (the common body of the colony). Budding can be intratentacular, from its oral discs, producing same-sized polyps within the ring of tentacles, or extratentacular, from its base, producing a smaller polyp.
Division forms two polyps that each become as large as the original. Longitudinal division begins when a polyp broadens and then divides its coelenteron (body), effectively splitting along its length. The mouth divides and new tentacles form. The two polyps thus created then generate their missing body parts and exoskeleton. Transversal division occurs when polyps and the exoskeleton divide transversally into two parts. This means one has the basal disc (bottom) and the other has the oral disc (top); the new polyps must separately generate the missing pieces.
Asexual reproduction offers the benefits of high reproductive rate, delaying senescence, and replacement of dead modules, as well as geographical distribution.
Colony division
Whole colonies can reproduce asexually, forming two colonies with the same genotype. The possible mechanisms include fission, bailout and fragmentation. Fission occurs in some corals, especially among the family Fungiidae, where the colony splits into two or more colonies during early developmental stages. Bailout occurs when a single polyp abandons the colony and settles on a different substrate to create a new colony. Fragmentation involves individuals broken from the colony during storms or other disruptions. The separated individuals can start new colonies.
Coral microbiomes
Corals are one of the more common examples of an animal host whose symbiosis with microalgae can turn to dysbiosis, and is visibly detected as bleaching. Coral microbiomes have been examined in a variety of studies, which demonstrate how oceanic environmental variations, most notably temperature, light, and inorganic nutrients, affect the abundance and performance of the microalgal symbionts, as well as calcification and physiology of the host.
Studies have also suggested that resident bacteria, archaea, and fungi additionally contribute to nutrient and organic matter cycling within the coral, with viruses also possibly playing a role in structuring the composition of these members, thus providing one of the first glimpses at a multi-domain marine animal symbiosis. The gammaproteobacterium Endozoicomonas is emerging as a central member of the coral's microbiome, with flexibility in its lifestyle. Given the recent mass bleaching occurring on reefs, corals will likely continue to be a useful and popular system for symbiosis and dysbiosis research.
Astrangia poculata, the northern star coral, is a temperate stony coral, widely documented along the eastern coast of the United States. The coral can live with and without zooxanthellae (algal symbionts), making it an ideal model organism to study microbial community interactions associated with symbiotic state. However, the ability to develop primers and probes to more specifically target key microbial groups has been hindered by the lack of full-length 16S rRNA sequences, since sequences produced by the Illumina platform are of insufficient length (approximately 250 base pairs) for the design of primers and probes. In 2019, Goldsmith et al. demonstrated that Sanger sequencing was capable of reproducing the biologically relevant diversity detected by deeper next-generation sequencing, while also producing longer sequences useful to the research community for probe and primer design.
Holobionts
Reef-building corals are well-studied holobionts that include the coral itself together with its symbiont zooxanthellae (photosynthetic dinoflagellates), as well as its associated bacteria and viruses. Co-evolutionary patterns exist for coral microbial communities and coral phylogeny.
It is known that the coral's microbiome and symbiont influence host health, however, the historic influence of each member on others is not well understood. Scleractinian corals have been diversifying for longer than many other symbiotic systems, and their microbiomes are known to be partially species-specific. It has been suggested that Endozoicomonas, a commonly highly abundant bacterium in corals, has exhibited codiversification with its host. This hints at an intricate set of relationships between the members of the coral holobiont that have been developing as evolution of these members occurs.
A study published in 2018 revealed evidence of phylosymbiosis between corals and their tissue and skeleton microbiomes. The coral skeleton, which represents the most diverse of the three coral microbiomes, showed the strongest evidence of phylosymbiosis. Coral microbiome composition and richness were found to reflect coral phylogeny. For example, interactions between bacterial and eukaryotic coral phylogeny influence the abundance of Endozoicomonas, a highly abundant bacterium in the coral holobiont. However, host-microbial cophylogeny appears to influence only a subset of coral-associated bacteria.
Reefs
Many corals in the order Scleractinia are hermatypic, meaning that they are involved in building reefs. Most such corals obtain some of their energy from zooxanthellae in the genus Symbiodinium. These are symbiotic photosynthetic dinoflagellates which require sunlight; reef-forming corals are therefore found mainly in shallow water. They secrete calcium carbonate to form hard skeletons that become the framework of the reef. However, not all reef-building corals in shallow water contain zooxanthellae, and some deep water species, living at depths to which light cannot penetrate, form reefs but do not harbour the symbionts.
There are various types of shallow-water coral reef, including fringing reefs, barrier reefs and atolls; most occur in tropical and subtropical seas. They are very slow-growing, adding perhaps one centimetre (0.4 in) in height each year. The Great Barrier Reef is thought to have been laid down about two million years ago. Over time, corals fragment and die, sand and rubble accumulate between the corals, and the shells of clams and other molluscs decay to form a gradually evolving calcium carbonate structure. Coral reefs are extremely diverse marine ecosystems hosting over 4,000 species of fish, massive numbers of cnidarians, molluscs, crustaceans, and many other animals.
Evolution
At certain times in the geological past, corals were very abundant. Like modern corals, their ancestors built reefs, some of which ended as great structures in sedimentary rocks. Fossils of fellow reef-dwellers algae, sponges, and the remains of many echinoids, brachiopods, bivalves, gastropods, and trilobites appear along with coral fossils. This makes some corals useful index fossils. Coral fossils are not restricted to reef remnants, and many solitary fossils are found elsewhere, such as Cyclocyathus, which occurs in England's Gault clay formation.
Early corals
Corals first appeared in the Cambrian, about 535 million years ago. Fossils are extremely rare until the Ordovician period, 100 million years later, when Heliolitida, rugose, and tabulate corals became widespread. Paleozoic corals often contained numerous endobiotic symbionts.
Tabulate corals occur in limestones and calcareous shales of the Ordovician period, with a gap in the fossil record due to extinction events at the end of the Ordovician. Corals reappeared some millions of years later during the Silurian period, and tabulate corals often form low cushions or branching masses of calcite alongside rugose corals. Tabulate coral numbers began to decline during the middle of the Silurian period.
Rugose or horn corals became dominant by the middle of the Silurian period, and during the Devonian, corals flourished with more than 200 genera. The rugose corals existed in solitary and colonial forms, and were also composed of calcite. Both rugose and tabulate corals became extinct in the Permian–Triassic extinction event (along with 85% of marine species), and there is a gap of tens of millions of years until new forms of coral evolved in the Triassic.
Modern corals
The currently ubiquitous stony corals, Scleractinia, appeared in the Middle Triassic to fill the niche vacated by the extinct rugose and tabulate orders and are not closely related to the earlier forms. Unlike the corals prevalent before the Permian extinction, which formed skeletons of a form of calcium carbonate known as calcite, modern stony corals form skeletons composed of aragonite. Their fossils are found in small numbers in rocks from the Triassic period, and become common in the Jurassic and later periods. Although they are geologically younger than the tabulate and rugose corals, the aragonite of their skeletons is less readily preserved, and their fossil record is accordingly less complete.
Status
Threats
Coral reefs are under stress around the world. In particular, coral mining, agricultural and urban runoff, pollution (organic and inorganic), overfishing, blast fishing, disease, and the digging of canals and access into islands and bays are localized threats to coral ecosystems. Broader threats are sea temperature rise, sea level rise and pH changes from ocean acidification, all associated with greenhouse gas emissions. In 1998, 16% of the world's reefs died as a result of increased water temperature.
Approximately 10% of the world's coral reefs are dead. About 60% of the world's reefs are at risk due to human-related activities. The threat to reef health is particularly strong in Southeast Asia, where 80% of reefs are endangered. Over 50% of the world's coral reefs may be destroyed by 2030; as a result, most nations protect them through environmental laws.
In the Caribbean and tropical Pacific, direct contact between ~40–70% of common seaweeds and coral causes bleaching and death to the coral via transfer of lipid-soluble metabolites. Seaweed and algae proliferate given adequate nutrients and limited grazing by herbivores such as parrotfish.
Water temperature changes of more than 1–2 °C, or salinity changes, can kill some species of coral. Under such environmental stresses, corals expel their Symbiodinium; without them, coral tissues reveal the white of their skeletons, an event known as coral bleaching.
Submarine springs found along the coast of Mexico's Yucatán Peninsula produce water with a naturally low pH (relatively high acidity) providing conditions similar to those expected to become widespread as the oceans absorb carbon dioxide. Surveys discovered multiple species of live coral that appeared to tolerate the acidity. The colonies were small and patchily distributed and had not formed structurally complex reefs such as those that compose the nearby Mesoamerican Barrier Reef System.
Coral health
To assess the threat level of coral, scientists developed a coral imbalance ratio, log(average abundance of disease-associated taxa / average abundance of healthy-associated taxa). The lower the ratio, the healthier the microbial community. This ratio was developed after the microbial mucus of coral was collected and studied.
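Expressed as code, the ratio is straightforward to compute from relative-abundance data. The sketch below is a minimal illustration; the base-10 logarithm and the abundance values are assumptions, since the source specifies neither:

```python
import math

def imbalance_ratio(disease_abundances, healthy_abundances):
    """Coral imbalance ratio: log of mean disease-associated abundance
    over mean healthy-associated abundance. Lower is healthier."""
    mean_disease = sum(disease_abundances) / len(disease_abundances)
    mean_healthy = sum(healthy_abundances) / len(healthy_abundances)
    return math.log10(mean_disease / mean_healthy)

# Hypothetical relative abundances from a mucus sample
print(imbalance_ratio([0.02, 0.01, 0.03], [0.20, 0.15, 0.25]))  # -1.0: healthy-dominated
```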
Climate change impacts
Increasing sea surface temperatures in tropical regions (about 1 °C over the last century) have caused major coral bleaching, death, and therefore shrinking coral populations. Although coral are able to adapt and acclimate, it is uncertain if this evolutionary process will happen quickly enough to prevent major reduction of their numbers. Climate change causes more frequent and more severe storms that can destroy coral reefs.
Annual growth bands in some corals, such as the deep sea bamboo corals (Isididae), may be among the first signs of the effects of ocean acidification on marine life. The growth rings allow geologists to construct year-by-year chronologies, a form of incremental dating, which underlie high-resolution records of past climatic and environmental changes using geochemical techniques.
Certain species form communities called microatolls, which are colonies whose top is dead and mostly above the water line, but whose perimeter is mostly submerged and alive. Average tide level limits their height. By analyzing the various growth morphologies, microatolls offer a low-resolution record of sea level change. Fossilized microatolls can also be dated using radiocarbon dating. Such methods can help to reconstruct Holocene sea levels.
Though coral have large sexually-reproducing populations, their evolution can be slowed by abundant asexual reproduction. Gene flow is variable among coral species. According to the biogeography of coral species, gene flow cannot be counted on as a dependable source of adaptation, because corals are very stationary organisms. Also, coral longevity might factor into their adaptivity.
However, adaptation to climate change has been demonstrated in many cases, which is usually due to a shift in coral and zooxanthellae genotypes. These shifts in allele frequency have progressed toward more tolerant types of zooxanthellae. Scientists found that a certain scleractinian zooxanthella is becoming more common where sea temperature is high. Symbionts able to tolerate warmer water seem to photosynthesise more slowly, implying an evolutionary trade-off.
In the Gulf of Mexico, where sea temperatures are rising, cold-sensitive staghorn and elkhorn coral have shifted in location.
Not only have the symbionts and specific species been shown to shift, but there seems to be a certain growth rate favorable to selection. Slower-growing but more heat-tolerant corals have become more common. The changes in temperature and acclimation are complex. Some reefs in current shadows represent refugia that will help corals adjust to the disparity in the environment, even if the temperatures there may eventually rise more quickly than in other locations. This separation of populations by climatic barriers causes a realized niche to shrink greatly in comparison to the old fundamental niche.
Geochemistry
Corals are shallow, colonial organisms that integrate oxygen and trace elements into their skeletal aragonite (polymorph of calcite) crystalline structures as they grow. Geochemical anomalies within the crystalline structures of corals represent functions of temperature, salinity and oxygen isotopic composition. Such geochemical analysis can help with climate modeling. The ratio of oxygen-18 to oxygen-16 (δ18O), for example, is a proxy for temperature.
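The δ18O value is conventionally reported in per mil (‰) relative to a reference standard; this is the standard definition, stated here for clarity rather than quoted from the article's sources:

```latex
\delta^{18}\mathrm{O} =
\left(
  \frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000
```

Higher δ18O in carbonate skeletons generally corresponds to cooler water (and to higher salinity), which is what makes the ratio useful as a temperature proxy.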
Strontium/calcium ratio anomaly
Ages can be assigned to coral geochemical anomalies by correlating strontium/calcium minima with sea surface temperature (SST) maxima in data collected from NINO 3.4 SSTA.
Oxygen isotope anomaly
By comparing coral strontium/calcium minima with sea surface temperature maxima recorded in NINO 3.4 SSTA data, time can be correlated with coral strontium/calcium and δ18O variations. To confirm the accuracy of the annual relationship between Sr/Ca and δ18O variations, a perceptible association with annual coral growth rings confirms the age conversion. Geochronology is established by the blending of Sr/Ca data, growth rings, and stable isotope data. The El Niño–Southern Oscillation (ENSO) is directly related to climate fluctuations that influence the coral δ18O ratio through local salinity variations associated with the position of the South Pacific Convergence Zone (SPCZ), and can be used for ENSO modeling.
Sea surface temperature and sea surface salinity
The global moisture budget is primarily influenced by tropical sea surface temperatures through the position of the Intertropical Convergence Zone (ITCZ). The Southern Hemisphere has a unique meteorological feature positioned in the southwestern Pacific Basin called the South Pacific Convergence Zone (SPCZ), which maintains a year-round position within the Southern Hemisphere. During ENSO warm periods, the SPCZ reverses orientation, extending from the equator southeast through the Solomon Islands, Vanuatu, and Fiji towards the French Polynesian islands, and due east towards South America, affecting the geochemistry of corals in tropical regions.
Geochemical analysis of skeletal coral can link the sea surface salinity (SSS) and sea surface temperature (SST) of tropical oceans, from El Niño 3.4 SSTA data, to seawater δ18O ratio anomalies recorded in corals. The ENSO phenomenon can be related to variations in sea surface salinity (SSS) and sea surface temperature (SST), which can help model tropical climate activity.
Limited climate research on current species
Climate research on live coral species is limited to a few studied species. Porites coral provides a stable foundation for geochemical interpretation and is much simpler to sample physically than Platygyra species, whose complex skeletal structure makes physical sampling difficult, even though Platygyra happens to provide one of the only multidecadal living coral records used for coral paleoclimate modeling.
Protection
Marine Protected Areas, Biosphere reserves, marine parks, national monuments, world heritage status, fishery management and habitat protection can protect reefs from anthropogenic damage.
Many governments now prohibit removal of coral from reefs, and inform coastal residents about reef protection and ecology. While local action such as habitat restoration and herbivore protection can reduce local damage, the longer-term threats of acidification, temperature change and sea-level rise remain a challenge.
Protecting networks of diverse and healthy reefs, not only climate refugia, helps ensure the greatest chance of genetic diversity, which is critical for coral to adapt to new climates. A variety of conservation methods applied across marine and terrestrial threatened ecosystems makes coral adaption more likely and effective.
To eliminate destruction of corals in their indigenous regions, projects have been started to grow corals in non-tropical countries.
Relation to humans
Local economies near major coral reefs benefit from an abundance of fish and other marine creatures as a food source. Reefs also provide recreational scuba diving and snorkeling tourism. These activities can damage coral but international projects such as Green Fins that encourage dive and snorkel centres to follow a Code of Conduct have been proven to mitigate these risks.
Jewelry
Corals' many colors give them appeal for necklaces and other jewelry. Intensely red coral is prized as a gemstone. Sometimes called fire coral, it is not the same as the stinging hydrozoan also known as fire coral. Red coral is very rare because of overharvesting. In general, it is inadvisable to give coral as a gift, since corals are in decline from stressors like climate change, pollution, and unsustainable fishing.
Always considered a precious mineral, "the Chinese have long associated red coral with auspiciousness and longevity because of its color and its resemblance to deer antlers (so by association, virtue, long life, and high rank)". It reached its height of popularity during the Manchu or Qing Dynasty (1644–1911) when it was almost exclusively reserved for the emperor's use either in the form of coral beads (often combined with pearls) for court jewelry or as decorative Penjing (decorative miniature mineral trees). Coral was known as shanhu in Chinese. The "early-modern 'coral network' [began in] the Mediterranean Sea [and found its way] to Qing China via the English East India Company". There were strict rules regarding its use in a code established by the Qianlong Emperor in 1759.
Medicine
In medicine, chemical compounds from corals can potentially be used to treat cancer, neurological diseases, inflammation including arthritis, pain, bone loss, high blood pressure and for other therapeutic uses. Coral skeletons, e.g. Isididae are being researched for their potential near-future use for bone grafting in humans.
Coral Calx, known as Praval Bhasma in Sanskrit, is widely used in the traditional system of Indian medicine as a supplement in the treatment of a variety of bone metabolic disorders associated with calcium deficiency. In classical times, ingestion of pulverized coral, which consists mainly of the weak base calcium carbonate, was recommended for calming stomach ulcers by Galen and Dioscorides.
Construction
Coral reefs in places such as the East African coast are used as a source of building material. Ancient (fossil) coral limestone, notably including the Coral Rag Formation of the hills around Oxford (England), was once used as a building stone, and can be seen in some of the oldest buildings in that city including the Saxon tower of St Michael at the Northgate, St. George's Tower of Oxford Castle, and the medieval walls of the city.
Shoreline protection
Healthy coral reefs absorb 97 percent of a wave's energy, which buffers shorelines from currents, waves, and storms, helping to prevent loss of life and property damage. Coastlines protected by coral reefs are also more stable in terms of erosion than those without.
Local economies
Coastal communities near coral reefs rely heavily on them. Worldwide, more than 500 million people depend on coral reefs for food, income, coastal protection, and more. The total economic value of coral reef services in the United States – including fisheries, tourism, and coastal protection – is more than $3.4 billion a year.
Aquaria
The saltwater fishkeeping hobby has expanded, over recent years, to include reef tanks, fish tanks that include large amounts of live rock on which coral is allowed to grow and spread. These tanks are either kept in a natural-like state, with algae (sometimes in the form of an algae scrubber) and a deep sand bed providing filtration, or as "show tanks", with the rock kept largely bare of the algae and microfauna that would normally populate it, in order to appear neat and clean.
The most popular kind of coral kept is soft coral, especially zoanthids and mushroom corals, which are especially easy to grow and propagate in a wide variety of conditions, because they originate in enclosed parts of reefs where water conditions vary and lighting may be less reliable and direct. More serious fishkeepers may keep small polyp stony coral, which is from open, brightly lit reef conditions and therefore much more demanding, while large polyp stony coral is a sort of compromise between the two.
Aquaculture
Coral aquaculture, also known as coral farming or coral gardening, is the cultivation of corals for commercial purposes or coral reef restoration. Aquaculture is showing promise as a potentially effective tool for restoring coral reefs, which have been declining around the world. The process bypasses the early growth stages of corals when they are most at risk of dying. Coral fragments known as "seeds" are grown in nurseries then replanted on the reef. Coral is farmed by coral farmers who live locally to the reefs and farm for reef conservation or for income. It is also farmed by scientists for research, by businesses for the supply of the live and ornamental coral trade and by private aquarium hobbyists.
Gallery
Further images: commons:Category:Coral reefs and commons:Category:Corals
See also
Keystone species
Ringstead Coral Bed
References
Sources
External links
Coral Reefs The Ocean Portal by the Smithsonian Institution
NOAA – Coral Reef Conservation Program
NOAA CoRIS – Coral Reef Biology
NOAA Office for Coastal Management – Fast Facts – Coral Reefs
NOAA Ocean Service Education – Corals
Anthozoa | Coral | [
"Biology"
] | 9,272 | [
"Biogeomorphology",
"Coral reefs"
] |
47,702 | https://en.wikipedia.org/wiki/Torture | Torture is the deliberate infliction of severe pain or suffering on a person for reasons including punishment, extracting a confession, interrogation for information, or intimidating third parties.
Some definitions restrict torture to acts carried out by the state, while others include non-state organizations. Most victims of torture are poor and marginalized people suspected of crimes, although torture against political prisoners, or during armed conflict, has received disproportionate attention. Judicial corporal punishment and capital punishment are sometimes seen as forms of torture, but this label is internationally controversial. A variety of methods of torture are used, often in combination; the most common form of physical torture is beatings. Beginning in the twentieth century, many torturers have preferred non-scarring or psychological methods to maintain deniability.
Torturers more commonly act out of fear, or due to limited resources, rather than sadism. Although most torturers are thought to learn about torture techniques informally and rarely receive explicit orders, they are enabled by organizations that facilitate and encourage their behavior. Once a torture program begins, it usually escalates beyond what is intended initially and often leads to involved agencies losing effectiveness. Torture aims to break the victim's will, destroy their agency and personality, and is cited as one of the most damaging experiences that a person can undergo. Many victims suffer both physical damage—chronic pain is particularly common—and mental sequelae. Although torture survivors have some of the highest rates of post-traumatic stress disorder, many are psychologically resilient.
Torture has been carried out since ancient times. However, in the eighteenth and nineteenth centuries, many Western countries abolished the official use of torture in the judicial system, although it continued to be used throughout the world. Public opinion research shows general opposition to torture. It is prohibited under international law for all states under all circumstances and is explicitly forbidden by several treaties. Opposition to torture stimulated the formation of the human rights movement after World War II, and it continues to be an important human rights issue. Although prevention efforts have been of mixed effectiveness, institutional reforms and the elimination of incommunicado detention have had positive effects. Despite its decline, torture is still practiced in or by most countries.
Definitions
Torture is defined as the deliberate infliction of severe pain or suffering on someone under the control of the perpetrator. The treatment must be inflicted for a specific purpose, such as punishment and forcing the victim to confess or provide information. The definition put forth by the United Nations Convention against Torture only considers torture carried out by the state. Most legal systems include agents acting on behalf of the state, and some definitions add non-state armed groups, organized crime, or private individuals working in state-monitored facilities (such as hospitals). The most expansive definitions encompass anyone as a potential perpetrator. Although torture is usually classified as more severe than cruel, inhuman, or degrading treatment (CIDT), the threshold at which treatment can be classified as torture is the most controversial aspect of its definition; the interpretation of torture has broadened over time. Another approach, preferred by scholars such as Manfred Nowak and Malcolm Evans, distinguishes torture from CIDT by considering only the torturer's purpose, and not the severity. Other definitions, such as that in the Inter-American Convention to Prevent and Punish Torture, focus on the torturer's aim "to obliterate the personality of the victim".
History
Pre-abolition
Torture was legally and morally acceptable in most ancient, medieval, and early modern societies. There is archaeological evidence of torture in Early Neolithic Europe, about 7,000 years ago. Torture is commonly mentioned in historical sources on Assyria and Achaemenid Persia. Societies used torture both as part of the judicial process and as punishment, although some historians make a distinction between torture and painful punishments. Historically, torture was seen as a reliable way to elicit the truth, a suitable punishment, and deterrence against future offenses. When torture was legally regulated, there were restrictions on the allowable methods; common methods in Europe included the rack and strappado. In most societies, citizens could be judicially tortured only under exceptional circumstances and for a serious crime such as treason, often only when some evidence already existed. In contrast, non-citizens such as foreigners and slaves were commonly tortured.
Torture was rare in early medieval Europe but became more common between 1200 and 1400. Because medieval judges used an exceptionally high standard of proof, they would sometimes authorize torture when circumstantial evidence tied a person to a capital crime if there were fewer than the two eyewitnesses required to convict someone in the absence of a confession. Torture was still a labor-intensive process reserved for the most severe crimes; most torture victims were men accused of murder, treason, or theft. Medieval ecclesiastical courts and the Inquisition used torture under the same procedural rules as secular courts. The Ottoman Empire and Qajar Iran used torture in cases where circumstantial evidence tied someone to a crime, although Islamic law has traditionally considered evidence obtained under torture to be inadmissible.
Abolition and continued use
Torture remained legal in Europe during the seventeenth century, but its practice declined. Torture was already of marginal importance to European criminal justice systems by its formal abolition in the 18th and early 19th centuries. Theories for why torture was abolished include the rise of Enlightenment ideas about the value of the human person, the lowering of the standard of proof in criminal cases, popular views that no longer saw pain as morally redemptive, and the expansion of imprisonment as an alternative to executions or painful punishments. It is not known if torture also declined in non-Western states or European colonies during the nineteenth century. In China, judicial torture, which had been practiced for more than two millennia, was banned in 1905 along with flogging and lingchi (dismemberment) as a means of execution, although torture in China continued throughout the twentieth and twenty-first centuries.
Torture was widely used by colonial powers to subdue resistance and reached a peak during the anti-colonial wars in the twentieth century. An estimated 300,000 people were tortured during the Algerian War of Independence (1954–1962), and the United Kingdom and Portugal also used torture in attempts to retain their respective empires. Independent states in Africa, the Middle East, and Asia often used torture in the twentieth century, but it is unknown whether their use of torture increased or decreased compared to nineteenth-century levels. During the first half of the twentieth century, torture became more prevalent in Europe with the advent of secret police, World War I and World War II, and the rise of communist and fascist states.
Torture was also used by both communist and anti-communist governments during the Cold War in Latin America, with an estimated 100,000 to 150,000 victims of torture by United States–backed regimes. The only countries in which torture was rare during the twentieth century were the liberal democracies of the West, but torture was still used there, against ethnic minorities or criminal suspects from marginalized classes, and during overseas wars against foreign populations. After the September 11 attacks, the US government embarked on an overseas torture program as part of its war on terror. It is disputed whether the worldwide incidence of torture is increasing, decreasing, or remaining constant.
Prevalence
Most countries practice torture, although few acknowledge it. The international prohibition of torture has not completely stopped torture; instead, states have changed which techniques are used and denied, covered up, or outsourced torture programs. Measuring the rate at which torture occurs is difficult because it is typically committed in secrecy, and abuses are likelier to come to light in open societies where there is a commitment to protecting human rights. Many torture survivors, especially those from poor or marginalized populations, are unwilling to report. Monitoring has focused on police stations and prisons, although torture can also occur in other facilities such as immigration detention and youth detention centers. Torture that occurs outside of custody—including extrajudicial punishment, intimidation, and crowd control—has traditionally not been counted, even though some studies have suggested it is more common than torture in places of detention. There is even less information on the prevalence of torture before the twentieth century. Although it is often assumed that men suffer torture at a higher rate than women, there is a lack of evidence. Some quantitative research has estimated that torture rates are either stagnant or increasing over time, but this may be a measurement effect.
Although liberal democracies are less likely to abuse their citizens, they may practice torture against marginalized citizens and non-citizens to whom they are not democratically accountable. Voters may support violence against out-groups seen as threatening; majoritarian institutions are ineffective at preventing torture against minorities or foreigners. Torture is more likely when a society feels threatened because of wars or crises, but studies have not found a consistent relationship between the use of torture and terrorist attacks.
Torture is directed against certain segments of the population, who are denied the protection against torture given to others. Torture of political prisoners and torture during armed conflicts receive more attention compared to torture of the poor or criminal suspects. Most victims of torture are suspected of crimes; a disproportionate number of victims are from poor or marginalized communities. Groups especially vulnerable to torture include unemployed young men, the urban poor, LGBT people, refugees and migrants, ethnic and racial minorities, indigenous people, and people with disabilities. Relative poverty and the resulting inequality in particular leave poor people vulnerable to torture. Criminalization of the poor, through laws targeting homelessness, sex work, or working in the informal economy, can lead to violent and arbitrary policing. Routine violence against poor and marginalized people is often not seen as torture, and its perpetrators justify the violence as a legitimate policing tactic; victims lack the resources or standing to seek redress.
Perpetrators
Since most research has focused on torture victims, less is known about the perpetrators of torture. Many torturers see their actions as serving a higher political or ideological goal that justifies torture as a legitimate means of protecting the state. Fear is often the motivation for torture, and it is typically not a rational response as it is usually ineffective or even counterproductive at achieving the desired aim. Torture victims are often viewed by the perpetrators as severe threats and enemies of the state. Studies of perpetrators do not support the common assumption that they are psychologically pathological. Most perpetrators do not volunteer to be torturers; many have an innate reluctance to employ violence, and rely on coping mechanisms, such as alcohol or drugs. Psychiatrist Pau Pérez-Sales finds that torturers act from a variety of motives such as ideological commitment, personal gain, group belonging, avoiding punishment, or avoiding guilt from previous acts of torture.
Although it is often assumed that torture is ordered from above at the highest levels of government, sociologist Jonathan Luke Austin argues that government authorization is a necessary but not sufficient condition for torture to occur, given that a specific order to torture rarely can be identified. In many cases, a combination of dispositional and situational effects lead a person to become a torturer. In most cases of systematic torture, the torturers were desensitized to violence by being exposed to physical or psychological abuse during training which can be a deliberate tactic to create torturers. Even when not explicitly ordered by the government to torture, perpetrators may feel peer pressure due to competitive masculinity. Elite and specialized police units are especially prone to torturing, perhaps because of their tight-knit nature and insulation from oversight. Although some torturers are formally trained, most are thought to learn about torture techniques informally.
Torture can be a side effect of a broken criminal justice system in which underfunding, lack of judicial independence, or corruption undermines effective investigations and fair trials. In this context, people who cannot afford bribes are likely to become victims of torture. Understaffed or poorly trained police are more likely to resort to torture when interrogating suspects. In some countries, such as Kyrgyzstan, suspects are more likely to be tortured at the end of the month because of performance quotas.
The contribution of bureaucracy to torture is under-researched and poorly understood. Torturers rely on both active supporters and those who ignore it. Military, intelligence, psychology, medical, and legal professionals can all be complicit in torture. Incentives can favor the use of torture on an institutional or individual level, and some perpetrators are motivated by the prospect of career advancement. Bureaucracy can diffuse responsibility for torture and help perpetrators excuse their actions. Maintaining secrecy is often essential to maintaining a torture program, which can be accomplished in ways ranging from direct censorship, denial, or mislabeling torture as something else, to offshoring abuses to outside a state's territory. Along with official denials, torture is enabled by moral disengagement from the victims and impunity for the perpetrators. Public demand for decisive action against crime or even support for torture against criminals can facilitate its use.
Once a torture program is begun, it is difficult or impossible to prevent it from escalating to more severe techniques and expanding to larger groups of victims, beyond what is originally intended or desired by decision-makers. Sociologist Christopher J. Einolf argues that "torture can create a vicious cycle in which a fear of internal enemies leads to torture, torture creates false confessions, and false confessions reinforce torturers' fears, leading to a spiral of paranoia and ever-increasing torture"—similar to a witch hunt. Escalation of torture is especially difficult to contain in counterinsurgency operations. Torture and specific techniques spread between different countries, especially by soldiers returning home from overseas wars, although this process is poorly understood.
Purpose
Punishment
Torture for punishment dates back to antiquity and is still employed in the twenty-first century.
A common practice in countries with dysfunctional justice systems or overcrowded prisons is for police to apprehend suspects, torture them, and release them without a charge. Such torture could be performed in a police station, the victim's home, or a public place. In South Africa, the police have been observed handing suspects over to vigilantes to be tortured. This type of extrajudicial violence is often carried out in public to deter others. It discriminatorily targets minorities and marginalized groups and may be supported by the public, especially if people do not trust the official justice system.
The classification of judicial corporal punishment as torture is internationally controversial, although it is explicitly prohibited under the Geneva Conventions. Some authors, such as John D. Bessler, argue that capital punishment is inherently a form of torture carried out for punishment. Executions may be carried out in brutal ways, such as stoning, death by burning, or dismemberment. The psychological harm of capital punishment is sometimes considered a form of psychological torture. Others do not consider corporal punishment with a fixed penalty to be torture, as it does not seek to break the victim's will.
Deterrence
Torture may also be used indiscriminately to terrorize people other than the direct victim or to deter opposition to the government. In the United States, torture was used to deter slaves from escaping or rebelling. Some defenders of judicial torture prior to its abolition argued that it deterred crime; reformers contended that because torture was carried out in secret, it could not be an effective deterrent. In the twentieth century, well-known examples include the Khmer Rouge and anti-communist regimes in Latin America, who tortured and murdered their victims as part of forced disappearance. Authoritarian regimes often resort to indiscriminate repression because they cannot accurately identify potential opponents. Many insurgencies lack the necessary infrastructure for a torture program and instead intimidate by killing. Research has found that state torture can extend the lifespan of terrorist organizations, increase incentives for insurgents to use violence, and radicalize the opposition. Another form of torture for deterrence is violence against migrants, as has been reported during pushbacks on the European Union's external borders.
Confession
Torture has been used throughout history to extract confessions from detainees. In 1764, Italian reformer Cesare Beccaria denounced torture as "a sure way to acquit robust scoundrels and to condemn weak but innocent people". Similar doubts about torture's effectiveness had been voiced for centuries previously, including by Aristotle. Despite the abolition of judicial torture, it sees continued use to elicit confessions, especially in judicial systems placing a high value on confessions in criminal matters. The use of torture to force suspects to confess is facilitated by laws allowing extensive pre-trial detention. Research has found that coercive interrogation is slightly more effective than cognitive interviewing for extracting a confession from a suspect, but presents a higher risk of false confession. Many torture victims will say whatever the torturer wants to hear to end the torture. Others who are guilty refuse to confess, especially if they believe it would only bring more torture or punishment. Medieval justice systems attempted to counteract the risk of false confession under torture by requiring confessors to provide falsifiable details about the crime, and only allowing torture if there was already some evidence against the accused. In some countries, political opponents are tortured to force them to confess publicly as a form of state propaganda.
Interrogation
The use of torture to obtain information during interrogation accounts for a small percentage of worldwide torture cases; its use for obtaining confessions or intimidation is more common. Although interrogational torture has been used in conventional wars, it is even more common in asymmetric war or civil wars. The ticking time bomb scenario is extremely rare, if not impossible, but is cited to justify torture for interrogation. Fictional portrayals of torture as an effective interrogational method have fueled misconceptions that justify the use of torture. Experiments comparing torture with other interrogation methods cannot be performed for ethical and practical reasons, but most scholars of torture are skeptical about its efficacy in obtaining accurate information, although torture sometimes has obtained actionable intelligence. Interrogational torture can often shade into confessional torture or simply into entertainment, and some torturers do not distinguish between interrogation and confession.
Methods
A wide variety of techniques have been used for torture. Nevertheless, there are limited ways of inflicting pain while minimizing the risk of death. Survivors report that the exact method used is not significant. Most forms of torture include both physical and psychological elements and multiple methods are typically used on one person. Different methods of torture are popular in different countries. Low-tech methods are more commonly used than high-tech ones, and attempts to develop scientifically validated torture technology have failed. The prohibition of torture motivated a shift to methods that do not leave marks to aid in deniability and to deprive victims of legal redress. As they faced more pressure and scrutiny, democracies led the innovation in clean torture practices in the early twentieth century; such techniques diffused worldwide by the 1960s. Patterns of torture differ based on a torturer's time limits—for example, resulting from legal limits on pre-trial detention.
Beatings or blunt trauma are the most common form of physical torture reported by about two-thirds of survivors. They may be either unsystematic or focused on a specific part of the body, as in falanga (the soles of the feet), repeated strikes against both ears, or shaking the detainee so that their head moves back and forth. Often, people are suspended in painful positions such as strappado or upside-down hanging in combination with beatings. People may also be subjected to stabbings or puncture wounds, have their nails removed, or body parts amputated. Burns are also common, especially cigarette burns, but other instruments are also employed, including hot metal, hot fluids, the sun, or acid. Forced ingestion of water, food, or other substances, or injections are also used as torture. Electric shocks are often used to torture, especially to avoid other methods that are more likely to leave scars. Asphyxiation, of which waterboarding is a form, inflicts torture on the victim by cutting off their air supply.
Psychological torture includes methods that involve no physical element as well as forcing a person to do something and physical attacks that ultimately target the mind. Death threats, mock execution, or being forced to witness the torture of another person are often reported to be subjectively worse than being physically tortured and are associated with severe sequelae. Other torture techniques include sleep deprivation, overcrowding or solitary confinement, withholding of food or water, sensory deprivation (such as hooding), exposure to extremes of light or noise (e.g., musical torture), humiliation (which can be based on sexuality or the victim's religious or national identity), and the use of animals such as dogs to frighten or injure a prisoner. Positional torture works by forcing the person to adopt a stance, putting their weight on a few muscles, causing pain without leaving marks, for example standing or squatting for extended periods. Rape and sexual assault are universal torture methods and frequently instill a permanent sense of shame in the victim and in some cultures, humiliate their family and society. Cultural and individual differences affect how the victim perceives different torture methods.
Effects
Torture is one of the most devastating experiences that a person can undergo. Torture aims to break the victim's will and destroy the victim's agency and personality. Torture survivor Jean Améry argued that it was "the most horrible event a human being can retain within himself" and that "whoever was tortured, stays tortured". Many torture victims, including Améry, later die by suicide. Survivors often experience social and financial problems. Circumstances such as housing insecurity, family separation, and the uncertainty of applying for asylum in a safe country strongly impact survivors' well-being.
Death is not an uncommon outcome of torture. Understanding of the link between specific torture methods and health consequences is lacking. These consequences can include peripheral neuropathy, damage to teeth, rhabdomyolysis from extensive muscle damage, traumatic brain injury, sexually transmitted infection, and pregnancy from rape. Chronic pain and pain-related disability are commonly reported, but there is scant research into this effect or possible treatments. Common psychological problems affecting survivors include traumatic stress, anxiety, depression, and sleep disturbance. An average of 40 percent have long-term post-traumatic stress disorder (PTSD), a higher rate than for any other traumatic experience. Not all survivors or rehabilitation experts support using medical categories to define their experience, and many survivors remain psychologically resilient.
Criminal prosecutions for torture are rare and most victims who submit formal complaints are not believed. Despite the efforts for evidence-based evaluation of the scars from torture such as the Istanbul Protocol, most physical examinations are inconclusive. The effects of torture are one of several factors that usually result in inconsistent testimony from survivors, hampering their effort to be believed and secure either refugee status in a foreign country or criminal prosecution of the perpetrators.
Although there is less research on the effects of torture on perpetrators, they can experience moral injury or trauma symptoms similar to the victims, especially when they feel guilty about their actions. Torture has corrupting effects on the institutions and societies that perpetrate it. Torturers forget important investigative skills because torture can be an easier way than time-consuming police work to achieve high conviction rates, encouraging the continued and increased use of torture. Public disapproval of torture can harm the international reputation of countries that use it, strengthen and radicalize violent opposition to those states, and encourage adversaries to themselves use torture.
Public opinion
Studies have found that most people around the world oppose the use of torture in general. Some hold definite views on torture; for others, torture's acceptability depends on the victim. Support for torture in specific cases is correlated with the belief that torture is effective and used in ticking time bomb cases. Women are more likely to oppose torture than men. Nonreligious people are less likely to support the use of torture than religious people, although for the latter group, increased religiosity increases opposition to torture. The personality traits of right-wing authoritarianism, social dominance orientation, and retributivism are correlated with higher support for torture; embrace of democratic values such as liberty and equality reduces support for torture. Public opinion is most favorable to torture, on average, in countries with low per capita income and high levels of state repression. Public opinion is an important constraint on the use of torture by states.
Prohibition
The condemnation of torture as barbaric and uncivilized originated in the debates around its abolition. By the late nineteenth century, countries began to be condemned internationally for the use of torture. The ban on torture became part of the civilizing mission justifying colonial rule on the pretext of ending torture, despite the use of torture by colonial rulers themselves. The condemnation was strengthened during the twentieth century in reaction to the use of torture by Nazi Germany and the Soviet Union. Shocked by Nazi atrocities during World War II, the United Nations drew up the 1948 Universal Declaration of Human Rights, which prohibited torture. Torture is criticized based on all major ethical frameworks, including deontology, consequentialism, and virtue ethics. Some contemporary philosophers argue that torture is never morally acceptable; others propose exceptions to the general rule in real-life equivalents of the ticking time-bomb scenario.
Torture stimulated the creation of the human rights movement. In 1969, the Greek case was the first time that an international body—the European Commission on Human Rights—found that a state practiced torture and it, along with Ireland v. United Kingdom, formed much of the basis for the definition of torture in international law. In the early 1970s, Amnesty International launched a global campaign against torture, exposing its widespread use despite international prohibition and eventually leading to the United Nations Convention against Torture (CAT) in 1984. Successful civil society mobilizations against torture can prevent its use by governments that possess both motive and opportunity to use torture. Naming and shaming campaigns against torture have shown mixed results; they can be ineffective and even make things worse.
The prohibition of torture is a peremptory norm (jus cogens) in international law, meaning that it is forbidden for all states under all circumstances. Most jurists justify the absolute legal prohibition on torture based on its violation of human dignity. The CAT and its Optional Protocol focus on the prevention of torture, which was already prohibited in international human rights law under other treaties such as the International Covenant on Civil and Political Rights. The CAT specifies that torture must be a criminal offense under a country's laws, evidence obtained under torture may not be admitted in court, and deporting a person to another country where they are likely to face torture is forbidden. Even when it is illegal under national law, judges in many countries continue to admit evidence obtained under torture or ill treatment. It is disputed whether ratification of the CAT decreases, does not affect, or even increases the rate of torture in a country.
In international humanitarian law, which regulates the conduct of war, torture was first outlawed by the 1863 Lieber Code. Torture was prosecuted during the Nuremberg trials as a crime against humanity; it is recognized by both the 1949 Geneva Conventions and the 1998 Rome Statute of the International Criminal Court as a war crime. According to the Rome Statute, torture can also be a crime against humanity if committed as part of a systematic attack on a civilian population. In 1987, Israel became the only country in the world to purportedly legalize torture.
Prevention
Torture prevention is complicated both by lack of understanding about why torture occurs and by lack of application of what is known. Torture proliferates in situations of incommunicado detention. Because the risk of torture is highest directly after an arrest, procedural safeguards such as immediate access to a lawyer and notifying relatives of an arrest are the most effective ways of prevention. Visits by independent monitoring bodies to detention sites can also help reduce torture. Legal changes that are not implemented in practice have little effect on the incidence of torture. Legal changes can be particularly ineffective in places where the law has limited legitimacy or is routinely ignored.
Sociologically torture operates as a subculture, frustrating prevention efforts because torturers can find a way around rules. Safeguards against torture in detention can be evaded by beating suspects during round-ups or on the way to the police station. General training of police to improve their ability to investigate crime has been more effective at reducing torture than specific training focused on human rights. Institutional police reforms have been effective when abuse is systematic. Political scientist Darius Rejali criticizes torture prevention research for not figuring out "what to do when people are bad; institutions broken, understaffed, and corrupt; and habitual serial violence is routine".
References
Sources
Books
Book chapters
Journal articles
Human rights abuses
Philosophy of law
Political violence
State crime
Suffering
Violence
Crimes | Torture | [
"Biology"
] | 5,846 | [
"Behavior",
"Aggression",
"Human behavior",
"Violence"
] |
47,711 | https://en.wikipedia.org/wiki/FreeCell | FreeCell is a solitaire card game played using the standard 52-card deck. It is fundamentally different from most solitaire games in that very few deals are unsolvable, and all cards are dealt face-up from the beginning of the game. Microsoft has included a FreeCell computer game with every release of the Windows operating system since 1995, which has greatly contributed to the game's popularity.
Rules
One standard 52-card deck is used.
There are four open cells and four open foundations. Cards are dealt face-up into eight cascades, four of which comprise seven cards each and four of which comprise six cards each.
The top card of each cascade begins a sequence.
Tableaus must be built down by alternating colors.
Foundations are built up by suit. The Foundations begin with Ace and are built up to King.
Any cell card or top card of any cascade may be moved to build on a tableau, or moved to an empty cell, an empty cascade, or its foundation.
The game is won after all cards are moved to their foundation piles.
Supermoves
Unlike in many solitaire card games, the rules of FreeCell only allow cards to be moved one at a time. Complete or partial tableaus may be moved to build on existing tableaus, or moved to empty cascades, only by a sequence of moves which recursively place and remove cards through intermediate locations.
For example, with one empty cell, the top card of one tableau can be moved to a free cell. The second card from the top of that tableau can now be moved onto another tableau. Then the original top card can be moved from the cell on top of it.
Such a sequence of moves is called a "supermove". Computer implementations often show this motion, but players using physical decks typically just move the tableau at once.
The maximum number of cards in a tableau that can be moved to another tableau equals the number of empty cells plus one, with that number doubling for each empty cascade: (1 + f) × 2^n, where n is the number of empty cascades and f is the number of empty cells. The maximum number that can be moved to an empty cascade is (1 + f) × 2^(n−1), since the destination cascade cannot itself serve as an intermediate location.
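As a minimal sketch of this rule (the function name and interface are illustrative, not from any particular implementation):

```python
def max_supermove(free_cells, empty_cascades, to_empty_cascade=False):
    """Largest tableau movable in one supermove.

    If the destination is itself an empty cascade, it cannot also serve
    as an intermediate location, so one fewer cascade is usable.
    """
    usable = empty_cascades - (1 if to_empty_cascade else 0)
    return (1 + free_cells) * 2 ** usable

print(max_supermove(2, 1))                         # 6: (1 + 2) * 2^1
print(max_supermove(2, 1, to_empty_cascade=True))  # 3: (1 + 2) * 2^0
```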
Numbered hands
Although software implementations vary, most versions label the hands with a number derived from the seed value used by the random number generator to shuffle the cards.
Microsoft FreeCell is so definitive for FreeCell players that many other software implementations include compatibility with its random number generator in order to replicate its numbered hands.
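The widely documented reconstruction of the Microsoft shuffle seeds a linear congruential generator with the game number; the sketch below follows that reconstruction (the constants 214013 and 2531011 come from the Microsoft C runtime's rand(), and the card encoding shown is the commonly cited one rather than anything taken from an official source):

```python
def ms_deal(game_number):
    """Deal a Microsoft-compatible FreeCell hand into eight cascades.

    Card n encodes rank n // 4 (0 = ace .. 12 = king) and
    suit n % 4 (clubs, diamonds, hearts, spades).
    """
    state = game_number

    def rand():
        nonlocal state
        state = (state * 214013 + 2531011) & 0xFFFFFFFF
        return (state >> 16) & 0x7FFF  # 15-bit value, as in the C runtime

    deck = list(range(52))
    cascades = [[] for _ in range(8)]
    for i in range(52):
        j = rand() % len(deck)
        deck[j], deck[-1] = deck[-1], deck[j]  # move chosen card to the end
        cascades[i % 8].append(deck.pop())     # deal left to right across columns
    return cascades
```

If the reconstruction is faithful, dealing game number 11982 with this routine reproduces the unsolvable hand discussed below.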
History and variants
One of the oldest ancestors of FreeCell is Eight Off. In the June 1968 edition of Scientific American, Martin Gardner described in his "Mathematical Games" column a game by C. L. Baker which is similar to FreeCell, except that cards on the tableau are built by suit rather than by alternate colors. Gardner wrote, "The game was taught to Baker by his father, who in turn learned it from an Englishman during the 1920s." This variant is now called Baker's Game. FreeCell's origins may date back even further to 1945 and to a Scandinavian game called Napoleon in St. Helena (not the solitaire game Napoleon at St Helena, also known as Forty Thieves).
Paul Alfille changed Baker's Game by making cards build according to alternate colors, thus creating FreeCell. He implemented the first computerised version as a medical student at the University of Illinois, in the TUTOR programming language for the PLATO educational computer system in 1978. Alfille was able to display easily recognizable graphical images of playing cards on the monochrome display on the PLATO systems.
This original FreeCell environment allowed games with 4–10 columns and 1–10 cells in addition to the standard game. For each variant, the program stored a ranked list of the players with the longest winning streaks. There was also a tournament system that allowed people to compete to win difficult hand-picked deals. Paul Alfille described this early FreeCell environment in more detail in an interview from 2000.
In 2012, researchers used evolutionary computation methods to create winning FreeCell players.
A variant where card sequence movement is not limited by available cells is known as Relaxed FreeCell.
Other solitaire games related to or inspired by FreeCell include Seahaven Towers, Penguin, Stalactites, ForeCell, Antares (a cross with Scorpion).
Unsolvable hands
In 2018, Theodore Pringle and Shlomi Fish found that, of 8.6 billion FreeCell Pro deals, 102,075 deals were impossible to solve, or approximately one impossible deal out of 84,000 random deals. It is estimated that around 99.999% of possible deals are solvable. Deal number 11982 from the Windows version of FreeCell is an example of an unsolvable FreeCell deal, the only deal among the original "Microsoft 32,000" which is unsolvable.
Solver complexity
The FreeCell game has a constant number of cards. This implies that in constant time, a person or computer could list all of the possible moves from a given start configuration and discover a winning set of moves or, assuming the game cannot be solved, the lack thereof. To perform an interesting complexity analysis, one must construct a generalized version of the FreeCell game with cards. This generalized version of the game is NP-complete; it is unlikely that any algorithm more efficient than a brute-force search exists which can find solutions for arbitrary generalized FreeCell configurations.
There are 52! (i.e., 52 factorial), or approximately 8×10^67, distinct deals. However, some games are effectively identical to others because suits assigned to cards are arbitrary or columns can be swapped. After taking these factors into account, there are approximately 1.75×10^64 distinct games.
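The first figure is easy to verify directly (a one-line check, not drawn from the article's sources):

```python
import math

# 52! counts raw orderings of the deck before any symmetry reduction.
print(f"{math.factorial(52):.3e}")  # 8.066e+67
```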
References
Additional sources
See also
Eight Off
Baker's Game
Klondike
Penguin
List of solitaires
Glossary of solitaire
American card games
French deck card games
Patience video games
PLATO (computer system) games
Single-deck patience card games
NP-complete problems | FreeCell | [
"Mathematics"
] | 1,229 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
47,713 | https://en.wikipedia.org/wiki/Direct%20current | Direct current (DC) is one-directional flow of electric charge. An electrochemical cell is a prime example of DC power. Direct current may flow through a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. The electric current flows in a constant direction, distinguishing it from alternating current (AC). A term formerly used for this type of current was galvanic current.
The abbreviations AC and DC are often used to mean simply alternating and direct, as when they modify current or voltage.
Direct current may be converted from an alternating current supply by use of a rectifier, which contains electronic elements (usually) or electromechanical elements (historically) that allow current to flow only in one direction. Direct current may be converted into alternating current via an inverter.
Direct current has many uses, from the charging of batteries to large power supplies for electronic systems, motors, and more. Very large quantities of electrical energy provided via direct current are used in smelting of aluminum and other electrochemical processes. It is also used for some railways, especially in urban areas. High-voltage direct current is used to transmit large amounts of power from remote generation sites or to interconnect alternating current power grids.
History
Direct current was produced in 1800 by Italian physicist Alessandro Volta's battery, his Voltaic pile. The nature of how current flowed was not yet understood. French physicist André-Marie Ampère conjectured that current travelled in one direction from positive to negative. When French instrument maker Hippolyte Pixii built the first dynamo electric generator in 1832, he found that as the magnet used passed the loops of wire each half turn, it caused the flow of electricity to reverse, generating an alternating current. At Ampère's suggestion, Pixii later added a commutator, a type of "switch" where contacts on the shaft work with "brush" contacts to produce direct current.
The late 1870s and early 1880s saw electricity starting to be generated at power stations. These were initially set up to power arc lighting (a popular type of street lighting) running on very high voltage (usually higher than 3,000 volts) direct current or alternating current. This was followed by the widespread use of low voltage direct current for indoor electric lighting in business and homes after inventor Thomas Edison launched his incandescent bulb based electric "utility" in 1882. Because of the significant advantages of alternating current over direct current in using transformers to raise and lower voltages to allow much longer transmission distances, direct current was replaced over the next few decades by alternating current in power delivery. In the mid-1950s, high-voltage direct current transmission was developed, and is now an option instead of long-distance high voltage alternating current systems. For long distance undersea cables (e.g. between countries, such as NorNed), this DC option is the only technically feasible option. For applications requiring direct current, such as third rail power systems, alternating current is distributed to a substation, which utilizes a rectifier to convert the power to direct current.
Various definitions
The term DC is used to refer to power systems that use only one electrical polarity of voltage or current, and to refer to the constant, zero-frequency, or slowly varying local mean value of a voltage or current. For example, the voltage across a DC voltage source is constant as is the current through a direct current source. The DC solution of an electric circuit is the solution where all voltages and currents are constant. Any stationary voltage or current waveform can be decomposed into a sum of a DC component and a zero-mean time-varying component; the DC component is defined to be the expected value, or the average value of the voltage or current over all time.
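As an illustrative sketch (with made-up numbers, not taken from any standard), the decomposition can be demonstrated numerically: the DC component is the time average of the record, and subtracting it leaves the zero-mean time-varying part.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # one second of samples
v = 5.0 + 2.0 * np.sin(2 * np.pi * 50 * t)       # 5 V offset + 50 Hz ripple

dc = v.mean()   # DC component: the average value over the record
ac = v - dc     # zero-mean time-varying component
print(round(dc, 9))            # -> 5.0
print(abs(ac.mean()) < 1e-9)   # -> True: the residual has zero mean
```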
Although DC stands for "direct current", DC often refers to "constant polarity". Under this definition, DC voltages can vary in time, as seen in the raw output of a rectifier or the fluctuating voice signal on a telephone line.
Some forms of DC (such as that produced by a voltage regulator) have almost no variations in voltage, but may still have variations in output power and current.
Circuits
A direct current circuit is an electrical circuit that consists of any combination of constant voltage sources, constant current sources, and resistors. In this case, the circuit voltages and currents are independent of time. A particular circuit voltage or current does not depend on the past value of any circuit voltage or current. This implies that the system of equations that represent a DC circuit do not involve integrals or derivatives with respect to time.
If a capacitor or inductor is added to a DC circuit, the resulting circuit is not, strictly speaking, a DC circuit. However, most such circuits have a DC solution. This solution gives the circuit voltages and currents when the circuit is in DC steady state. Such a circuit is represented by a system of differential equations. The solution to these equations usually contains a time-varying or transient part as well as a constant or steady-state part. It is this steady-state part that is the DC solution. There are some circuits that do not have a DC solution. Two simple examples are a constant current source connected to a capacitor and a constant voltage source connected to an inductor.
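As a worked illustration (not part of the original article), consider a constant source $V_s$ charging a capacitor $C$ through a resistor $R$. Kirchhoff's voltage law gives a first-order differential equation whose solution splits into exactly the transient and steady-state parts described above:

```latex
V_s = RC\,\frac{dv_C}{dt} + v_C(t)
\qquad\Longrightarrow\qquad
v_C(t) = \underbrace{V_s}_{\text{steady-state (DC) part}}
         \;-\; \underbrace{V_s\,e^{-t/RC}}_{\text{transient part}}
```

The DC solution is therefore $v_C = V_s$ with zero current, matching the rule of thumb that a capacitor behaves as an open circuit in DC steady state.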
In electronics, it is common to refer to a circuit that is powered by a DC voltage source such as a battery or the output of a DC power supply as a DC circuit even though what is meant is that the circuit is DC powered.
In a DC circuit, a power source (e.g. a battery, capacitor, etc.) has a positive and negative terminal, and likewise, the load also has a positive and negative terminal. To complete the circuit, positive charges need to flow from the power source to the load. The charges will then return to the negative terminal of the load, which will then flow back to the negative terminal of the battery, completing the circuit. If either the positive or negative terminal is disconnected, the circuit will not be complete and the charges will not flow.
In some DC circuit applications, polarity does not matter: the positive and negative leads can be connected either way, the circuit is still complete, and the load functions normally. In most DC applications, however, polarity does matter, and connecting the circuit backwards will result in the load not working properly.
Applications
Domestic and commercial buildings
DC is commonly found in many extra-low voltage applications and some low-voltage applications, especially where these are powered by batteries or solar power systems (since both can produce only DC).
Most electronic circuits or devices require a DC power supply.
Domestic DC installations usually have different types of sockets, connectors, switches, and fixtures from those suitable for alternating current. This is mostly due to the lower voltages used, resulting in higher currents to produce the same amount of power.
It is usually important with a DC appliance to observe polarity, unless the device has a diode bridge to correct for this.
Automotive
Most automotive applications use DC. An automotive battery provides power for engine starting, lighting, the ignition system, the climate controls, and the infotainment system among others. The alternator is an AC device which uses a rectifier to produce DC for battery charging. Most highway passenger vehicles use nominally 12 V systems. Many heavy trucks, farm equipment, or earth moving equipment with Diesel engines use 24 volt systems. In some older vehicles, 6 V was used, such as in the original classic Volkswagen Beetle. At one point a 42 V electrical system was considered for automobiles, but this found little use. To save weight and wire, often the metal frame of the vehicle is connected to one pole of the battery and used as the return conductor in a circuit. Often the negative pole is the chassis "ground" connection, but positive ground may be used in some wheeled or marine vehicles.
In a battery electric vehicle, there are usually two separate DC systems. The "low voltage" DC system typically operates at 12 V, and serves the same purpose as in an internal combustion engine vehicle. The "high voltage" system operates at 300–400 V (depending on the vehicle), and provides the power for the traction motors. Increasing the voltage for the traction motors reduces the current flowing through them, increasing efficiency.
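A rough illustrative calculation (the 100 kW figure is hypothetical, chosen only for round numbers) shows why: for a fixed power draw, current falls in proportion to voltage, and resistive loss in the wiring falls with the square of that current.

```latex
I = \frac{P}{V}:\qquad
\frac{100\ \text{kW}}{400\ \text{V}} = 250\ \text{A}
\quad\text{versus}\quad
\frac{100\ \text{kW}}{12\ \text{V}} \approx 8300\ \text{A},
\qquad
P_{\text{loss}} = I^2 R
```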
Telecommunication
Telephone exchange communication equipment uses a standard −48 V DC power supply. The negative polarity is achieved by grounding the positive terminal of the power supply system and the battery bank. This is done to prevent electrolysis depositions. Telephone installations have a battery system to ensure power is maintained for subscriber lines during power interruptions.
Other devices may be powered from the telecommunications DC system using a DC-DC converter to provide any convenient voltage.
Many telephones connect to a twisted pair of wires, and use a bias tee to internally separate the AC component of the voltage between the two wires (the audio signal) from the DC component of the voltage between the two wires (used to power the phone).
High-voltage power transmission
High-voltage direct current (HVDC) electric power transmission systems use DC for the bulk transmission of electrical power, in contrast with the more common alternating current systems. For long-distance transmission, HVDC systems may be less expensive and suffer lower electrical losses.
Other
Applications using fuel cells (mixing hydrogen and oxygen together with a catalyst to produce electricity and water as byproducts) also produce only DC.
Light aircraft electrical systems are typically 12 V or 24 V DC similar to automobiles.
See also
CCS
DC bias
Electric current
High-voltage direct current power transmission
Neutral direct-current telegraph system
Polarity symbols
Solar panel
State of health
State of charge
Smart battery
Battery management system
References
External links
AC/DC: What's the Difference? – PBS Learning Media
– ITACA
Electrical engineering
Electric current
Electric power | Direct current | [
"Physics",
"Engineering"
] | 2,028 | [
"Physical quantities",
"Electrical engineering",
"Power (physics)",
"Electric power",
"Electric current",
"Wikipedia categories named after physical quantities"
] |
47,717 | https://en.wikipedia.org/wiki/Srinivasa%20Ramanujan | Srinivasa Ramanujan Aiyangar
(22 December 188726 April 1920) was an Indian mathematician. Often regarded as one of the greatest mathematicians of all time, though he had almost no formal training in pure mathematics, he made substantial contributions to mathematical analysis, number theory, infinite series, and continued fractions, including solutions to mathematical problems then considered unsolvable.
Ramanujan initially developed his own mathematical research in isolation. According to Hans Eysenck, "he tried to interest the leading professional mathematicians in his work, but failed for the most part. What he had to show them was too novel, too unfamiliar, and additionally presented in unusual ways; they could not be bothered". Seeking mathematicians who could better understand his work, in 1913 he began a mail correspondence with the English mathematician G. H. Hardy at the University of Cambridge, England. Recognising Ramanujan's work as extraordinary, Hardy arranged for him to travel to Cambridge. In his notes, Hardy commented that Ramanujan had produced groundbreaking new theorems, including some that "defeated me completely; I had never seen anything in the least like them before", and some recently proven but highly advanced results.
During his short life, Ramanujan independently compiled nearly 3,900 results (mostly identities and equations). Many were completely novel; his original and highly unconventional results, such as the Ramanujan prime, the Ramanujan theta function, partition formulae and mock theta functions, have opened entire new areas of work and inspired further research. Of his thousands of results, most have been proven correct. The Ramanujan Journal, a scientific journal, was established to publish work in all areas of mathematics influenced by Ramanujan, and his notebooks—containing summaries of his published and unpublished results—have been analysed and studied for decades since his death as a source of new mathematical ideas. As late as 2012, researchers continued to discover that mere comments in his writings about "simple properties" and "similar outputs" for certain findings were themselves profound and subtle number theory results that remained unsuspected until nearly a century after his death. He became one of the youngest Fellows of the Royal Society and only the second Indian member, and the first Indian to be elected a Fellow of Trinity College, Cambridge.
In 1919, ill health—now believed to have been hepatic amoebiasis (a complication from episodes of dysentery many years previously)—compelled Ramanujan's return to India, where he died in 1920 at the age of 32. His last letters to Hardy, written in January 1920, show that he was still continuing to produce new mathematical ideas and theorems. His "lost notebook", containing discoveries from the last year of his life, caused great excitement among mathematicians when it was rediscovered in 1976.
Early life
Ramanujan (literally, "younger brother of Rama", a Hindu deity) was born on 22 December 1887 into a Tamil Brahmin Iyengar family in Erode, in present-day Tamil Nadu. His father, Kuppuswamy Srinivasa Iyengar, originally from Thanjavur district, worked as a clerk in a sari shop. His mother, Komalatammal, was a housewife and sang at a local temple. They lived in a small traditional home on Sarangapani Sannidhi Street in the town of Kumbakonam. The family home is now a museum. When Ramanujan was a year and a half old, his mother gave birth to a son, Sadagopan, who died less than three months later. In December 1889, Ramanujan contracted smallpox, but recovered, unlike the 4,000 others who died in a bad year in the Thanjavur district around this time. He moved with his mother to her parents' house in Kanchipuram, near Madras (now Chennai). His mother gave birth to two more children, in 1891 and 1894, both of whom died before their first birthdays.
On 1 October 1892, Ramanujan was enrolled at the local school. After his maternal grandfather lost his job as a court official in Kanchipuram, Ramanujan and his mother moved back to Kumbakonam, and he was enrolled in Kangayan Primary School. When his paternal grandfather died, he was sent back to his maternal grandparents, then living in Madras. He did not like school in Madras, and tried to avoid attending. His family enlisted a local constable to make sure he attended school. Within six months, Ramanujan was back in Kumbakonam.
Since Ramanujan's father was at work most of the day, his mother took care of the boy, and they had a close relationship. From her, he learned about tradition and puranas, to sing religious songs, to attend pujas at the temple, and to maintain particular eating habits—all part of Brahmin culture. At Kangayan Primary School, Ramanujan performed well. Just before turning 10, in November 1897, he passed his primary examinations in English, Tamil, geography, and arithmetic with the best scores in the district. That year, Ramanujan entered Town Higher Secondary School, where he encountered formal mathematics for the first time.
A child prodigy by age 11, he had exhausted the mathematical knowledge of two college students who were lodgers at his home. He was later lent a book written by S. L. Loney on advanced trigonometry. He mastered this by the age of 13 while discovering sophisticated theorems on his own. By 14, he received merit certificates and academic awards that continued throughout his school career, and he assisted the school in the logistics of assigning its 1,200 students (each with differing needs) to its approximately 35 teachers. He completed mathematical exams in half the allotted time, and showed a familiarity with geometry and infinite series. Ramanujan was shown how to solve cubic equations in 1902. He would later develop his own method to solve the quartic. In 1903, he tried to solve the quintic, not knowing that it was impossible to solve with radicals.
In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr's collection of 5,000 theorems. Ramanujan reportedly studied the contents of the book in detail. The next year, Ramanujan independently developed and investigated the Bernoulli numbers and calculated the Euler–Mascheroni constant up to 15 decimal places. His peers at the time said they "rarely understood him" and "stood in respectful awe" of him.
When he graduated from Town Higher Secondary School in 1904, Ramanujan was awarded the K. Ranganatha Rao prize for mathematics by the school's headmaster, Krishnaswami Iyer. Iyer introduced Ramanujan as an outstanding student who deserved scores higher than the maximum. He received a scholarship to study at Government Arts College, Kumbakonam, but was so intent on mathematics that he could not focus on any other subjects and failed most of them, losing his scholarship in the process. In August 1905, Ramanujan ran away from home, heading towards Visakhapatnam, and stayed in Rajahmundry for about a month. He later enrolled at Pachaiyappa's College in Madras. There, he passed in mathematics, choosing only to attempt questions that appealed to him and leaving the rest unanswered, but performed poorly in other subjects, such as English, physiology, and Sanskrit. Ramanujan failed his Fellow of Arts exam in December 1906 and again a year later. Without an FA degree, he left college and continued to pursue independent research in mathematics, living in extreme poverty and often on the brink of starvation.
In 1910, after a meeting between the 23-year-old Ramanujan and the founder of the Indian Mathematical Society, V. Ramaswamy Aiyer, Ramanujan began to get recognition in Madras's mathematical circles, leading to his inclusion as a researcher at the University of Madras.
Adulthood in India
On 14 July 1909, Ramanujan married Janaki (Janakiammal; 21 March 1899 – 13 April 1994), a girl his mother had selected for him a year earlier and who was ten years old when they married. It was not unusual then for marriages to be arranged with girls at a young age. Janaki was from Rajendram, a village close to Marudur (Karur district) Railway Station. Ramanujan's father did not participate in the marriage ceremony. As was common at that time, Janaki continued to stay at her maternal home for three years after marriage, until she reached puberty. In 1912, she and Ramanujan's mother joined Ramanujan in Madras.
After the marriage, Ramanujan developed a hydrocele testis. The condition could be treated with a routine surgical operation that would release the blocked fluid in the scrotal sac, but his family could not afford the operation. In January 1910, a doctor volunteered to do the surgery at no cost.
After his successful surgery, Ramanujan searched for a job. He stayed at a friend's house while he went from door to door around Madras looking for a clerical position. To make money, he tutored students at Presidency College who were preparing for their Fellow of Arts exam.
In late 1910, Ramanujan was sick again. He feared for his health, and told his friend R. Radakrishna Iyer to "hand [his notebooks] over to Professor Singaravelu Mudaliar [the mathematics professor at Pachaiyappa's College] or to the British professor Edward B. Ross, of the Madras Christian College." After Ramanujan recovered and retrieved his notebooks from Iyer, he took a train from Kumbakonam to Villupuram, a city under French control. In 1912, Ramanujan moved with his wife and mother to a house in Saiva Muthaiah Mudali street, George Town, Madras, where they lived for a few months. In May 1913, upon securing a research position at Madras University, Ramanujan moved with his family to Triplicane.
Pursuit of career in mathematics
In 1910, Ramanujan met deputy collector V. Ramaswamy Aiyer, who founded the Indian Mathematical Society. Wishing for a job at the revenue department where Aiyer worked, Ramanujan showed him his mathematics notebooks. As Aiyer later recalled:
I was struck by the extraordinary mathematical results contained in [the notebooks]. I had no mind to smother his genius by an appointment in the lowest rungs of the revenue department.
Aiyer sent Ramanujan, with letters of introduction, to his mathematician friends in Madras. Some of them looked at his work and gave him letters of introduction to R. Ramachandra Rao, the district collector for Nellore and the secretary of the Indian Mathematical Society. Rao was impressed by Ramanujan's research but doubted that it was his own work. Ramanujan mentioned a correspondence he had with Professor Saldhana, a notable Bombay mathematician, in which Saldhana expressed a lack of understanding of his work but concluded that he was not a fraud. Ramanujan's friend C. V. Rajagopalachari tried to quell Rao's doubts about Ramanujan's academic integrity. Rao agreed to give him another chance, and listened as Ramanujan discussed elliptic integrals, hypergeometric series, and his theory of divergent series, which Rao said ultimately convinced him of Ramanujan's brilliance. When Rao asked him what he wanted, Ramanujan replied that he needed work and financial support. Rao consented and sent him to Madras. He continued his research with Rao's financial aid. With Aiyer's help, Ramanujan had his work published in the Journal of the Indian Mathematical Society.
One of the first problems he posed in the journal was to find the value of the infinitely nested radical:

$\sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + 4\sqrt{1 + \cdots}}}}$
He waited for a solution to be offered in three issues, over six months, but failed to receive any. At the end, Ramanujan supplied an incomplete solution to the problem himself. On page 105 of his first notebook, he formulated an equation that could be used to solve the infinitely nested radicals problem:

$x + n + a = \sqrt{ax + (n + a)^2 + x\sqrt{a(x + n) + (n + a)^2 + (x + n)\sqrt{\cdots}}}$
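The value of the radical can also be checked numerically; the short sketch below (an illustration, not Ramanujan's method) evaluates finite truncations from the inside out and shows them converging to the answer, 3.

```python
import math

def truncated_radical(depth):
    """sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ... + depth*sqrt(1))))"""
    value = 1.0
    for k in range(depth, 1, -1):  # work outward from the innermost term
        value = math.sqrt(1 + k * value)
    return value

for d in (5, 10, 20):
    print(d, truncated_radical(d))  # approaches 3 as the depth grows
```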
Using this equation, the answer to the question posed in the Journal was simply 3, obtained by setting x = 2, n = 1, and a = 0. Ramanujan wrote his first formal paper for the Journal on the properties of Bernoulli numbers. One property he discovered was that the denominators of the fractions of Bernoulli numbers are always divisible by six (spot-checked in the sketch following the list below). He also devised a method of calculating B_n based on previous Bernoulli numbers. One of these methods follows:
It will be observed that if n is even but not equal to zero,
B_n is a fraction and the numerator of B_n/n in its lowest terms is a prime number,
the denominator of B_n contains each of the factors 2 and 3 once and only once,
2^n(2^n − 1)B_n/n is an integer and 2(2^n − 1)B_n consequently is an odd integer.
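Both the divisibility-by-six observation and the second of these properties can be spot-checked with SymPy (an illustrative check; it assumes SymPy is installed and tests only small even n):

```python
from sympy import bernoulli, factorint

for n in range(2, 21, 2):   # the nonzero Bernoulli numbers B_2 ... B_20
    b = bernoulli(n)        # a SymPy Rational
    assert b.q % 6 == 0     # denominator divisible by six
    exponents = factorint(b.q)
    assert exponents[2] == 1 and exponents[3] == 1  # 2 and 3 appear exactly once
print("checked B_2 through B_20")
```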
In his 17-page paper "Some Properties of Bernoulli's Numbers" (1911), Ramanujan gave three proofs, two corollaries and three conjectures. His writing initially had many flaws. As Journal editor M. T. Narayana Iyengar noted:
Mr. Ramanujan's methods were so terse and novel and his presentation so lacking in clearness and precision, that the ordinary [mathematical reader], unaccustomed to such intellectual gymnastics, could hardly follow him.
Ramanujan later wrote another paper and also continued to provide problems in the Journal. In early 1912, he got a temporary job in the Madras Accountant General's office, with a monthly salary of 20 rupees. He lasted only a few weeks. Toward the end of that assignment, he applied for a position under the Chief Accountant of the Madras Port Trust.
In a letter dated 9 February 1912, Ramanujan wrote:
Sir,
I understand there is a clerkship vacant in your office, and I beg to apply for the same. I have passed the Matriculation Examination and studied up to the F.A. but was prevented from pursuing my studies further owing to several untoward circumstances. I have, however, been devoting all my time to Mathematics and developing the subject. I can say I am quite confident I can do justice to my work if I am appointed to the post. I therefore beg to request that you will be good enough to confer the appointment on me.
Attached to his application was a recommendation from E. W. Middlemast, a mathematics professor at the Presidency College, who wrote that Ramanujan was "a young man of quite exceptional capacity in Mathematics". Three weeks after he applied, on 1 March, Ramanujan learned that he had been accepted as a Class III, Grade IV accounting clerk, making 30 rupees per month. At his office, Ramanujan easily and quickly completed the work he was given and spent his spare time doing mathematical research. Ramanujan's boss, Sir Francis Spring, and S. Narayana Iyer, a colleague who was also treasurer of the Indian Mathematical Society, encouraged Ramanujan in his mathematical pursuits.
Contacting British mathematicians
In the spring of 1913, Narayana Iyer, Ramachandra Rao and E. W. Middlemast tried to present Ramanujan's work to British mathematicians. M. J. M. Hill of University College London commented that Ramanujan's papers were riddled with holes. He said that although Ramanujan had "a taste for mathematics, and some ability", he lacked the necessary educational background and foundation to be accepted by mathematicians. Although Hill did not offer to take Ramanujan on as a student, he gave thorough and serious professional advice on his work. With the help of friends, Ramanujan drafted letters to leading mathematicians at Cambridge University.
The first two professors, H. F. Baker and E. W. Hobson, returned Ramanujan's papers without comment. On 16 January 1913, Ramanujan wrote to G. H. Hardy, whom he knew from studying Orders of Infinity (1910). Coming from an unknown mathematician, the nine pages of mathematics made Hardy initially view Ramanujan's manuscripts as a possible fraud. Hardy recognised some of Ramanujan's formulae but others "seemed scarcely possible to believe". One of the theorems Hardy found amazing was on the bottom of page three (valid for 0 < a < b + 1/2):

$\int_0^\infty \frac{1 + x^2/(b+1)^2}{1 + x^2/a^2} \cdot \frac{1 + x^2/(b+2)^2}{1 + x^2/(a+1)^2} \cdots \, dx = \frac{\sqrt{\pi}}{2} \cdot \frac{\Gamma\left(a + \frac{1}{2}\right)\,\Gamma(b + 1)\,\Gamma\left(b - a + \frac{1}{2}\right)}{\Gamma(a)\,\Gamma\left(b + \frac{1}{2}\right)\,\Gamma(b - a + 1)}$
Hardy was also impressed by some of Ramanujan's other work relating to infinite series:

$1 - 5\left(\frac{1}{2}\right)^3 + 9\left(\frac{1 \cdot 3}{2 \cdot 4}\right)^3 - 13\left(\frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}\right)^3 + \cdots = \frac{2}{\pi}$

$1 + 9\left(\frac{1}{4}\right)^4 + 17\left(\frac{1 \cdot 5}{4 \cdot 8}\right)^4 + 25\left(\frac{1 \cdot 5 \cdot 9}{4 \cdot 8 \cdot 12}\right)^4 + \cdots = \frac{2^{3/2}}{\pi^{1/2}\,\Gamma^2\left(\frac{3}{4}\right)}$
The first result had already been determined by G. Bauer in 1859. The second was new to Hardy, and was derived from a class of functions called hypergeometric series, which had first been researched by Euler and Gauss. Hardy found these results "much more intriguing" than Gauss's work on integrals. After seeing Ramanujan's theorems on continued fractions on the last page of the manuscripts, Hardy said the theorems "defeated me completely; I had never seen anything in the least like them before", and that they "must be true, because, if they were not true, no one would have the imagination to invent them". Hardy asked a colleague, J. E. Littlewood, to take a look at the papers. Littlewood was amazed by Ramanujan's genius. After discussing the papers with Littlewood, Hardy concluded that the letters were "certainly the most remarkable I have received" and that Ramanujan was "a mathematician of the highest quality, a man of altogether exceptional originality and power". One colleague, E. H. Neville, later remarked that "No one who was in the mathematical circles in Cambridge at that time can forget the sensation caused by this letter... not one [theorem] could have been set in the most advanced mathematical examination in the world".
On 8 February 1913, Hardy wrote Ramanujan a letter expressing interest in his work, adding that it was "essential that I should see proofs of some of your assertions". Before his letter arrived in Madras during the third week of February, Hardy contacted the Indian Office to plan for Ramanujan's trip to Cambridge. Secretary Arthur Davies of the Advisory Committee for Indian Students met with Ramanujan to discuss the overseas trip. In accordance with his Brahmin upbringing, Ramanujan refused to leave his country to "go to a foreign land", and his parents were also opposed for the same reason. Meanwhile, he sent Hardy a letter packed with theorems, writing, "I have found a friend in you who views my labour sympathetically."
To supplement Hardy's endorsement, Gilbert Walker, a former mathematical lecturer at Trinity College, Cambridge, looked at Ramanujan's work and expressed amazement, urging the young man to spend time at Cambridge. As a result of Walker's endorsement, B. Hanumantha Rao, a mathematics professor at an engineering college, invited Ramanujan's colleague Narayana Iyer to a meeting of the Board of Studies in Mathematics to discuss "what we can do for S. Ramanujan". The board agreed to grant Ramanujan a monthly research scholarship of 75 rupees for the next two years at the University of Madras.
While he was engaged as a research student, Ramanujan continued to submit papers to the Journal of the Indian Mathematical Society. In one instance, Iyer submitted some of Ramanujan's theorems on summation of series to the journal, adding, "The following theorem is due to S. Ramanujan, the mathematics student of Madras University." Later in November, British Professor Edward B. Ross of Madras Christian College, whom Ramanujan had met a few years before, stormed into his class one day with his eyes glowing, asking his students, "Does Ramanujan know Polish?" The reason was that in one paper, Ramanujan had anticipated the work of a Polish mathematician whose paper had just arrived in the day's mail. In his quarterly papers, Ramanujan drew up theorems to make definite integrals more easily solvable. Working off Giuliano Frullani's 1821 integral theorem, Ramanujan formulated generalisations that could be made to evaluate formerly unyielding integrals.
Hardy's correspondence with Ramanujan soured after Ramanujan refused to come to England. Hardy enlisted a colleague lecturing in Madras, E. H. Neville, to mentor and bring Ramanujan to England. Neville asked Ramanujan why he would not go to Cambridge. Ramanujan apparently had now accepted the proposal; Neville said, "Ramanujan needed no converting" and "his parents' opposition had been withdrawn". Apparently, Ramanujan's mother had a vivid dream in which Ramanujan was surrounded by Europeans, and the family goddess, the deity of Namagiri, commanded her "to stand no longer between her son and the fulfilment of his life's purpose". On 17 March 1914, Ramanujan travelled to England by ship, leaving his wife to stay with his parents in India.
Life in England
Ramanujan departed from Madras aboard the S.S. Nevasa on 17 March 1914. When he disembarked in London on 14 April, Neville was waiting for him with a car. Four days later, Neville took him to his house on Chesterton Road in Cambridge. Ramanujan immediately began his work with Littlewood and Hardy. After six weeks, Ramanujan moved out of Neville's house and took up residence on Whewell's Court, a five-minute walk from Hardy's room.
Hardy and Littlewood began to look at Ramanujan's notebooks. Hardy had already received 120 theorems from Ramanujan in the first two letters, but there were many more results and theorems in the notebooks. Hardy saw that some were wrong, others had already been discovered, and the rest were new breakthroughs. Ramanujan left a deep impression on Hardy and Littlewood. Littlewood commented, "I can believe that he's at least a Jacobi", while Hardy said he "can compare him only with Euler or Jacobi."
Ramanujan spent nearly five years in Cambridge collaborating with Hardy and Littlewood, and published part of his findings there. Hardy and Ramanujan had highly contrasting personalities. Their collaboration was a clash of different cultures, beliefs, and working styles. In the previous few decades, the foundations of mathematics had come into question and the need for mathematically rigorous proofs was recognised. Hardy was an atheist and an apostle of proof and mathematical rigour, whereas Ramanujan was a deeply religious man who relied very strongly on his intuition and insights. Hardy tried his best to fill the gaps in Ramanujan's education and to mentor him in the need for formal proofs to support his results, without hindering his inspiration—a conflict that neither found easy.
Ramanujan was awarded a Bachelor of Arts by Research degree (the predecessor of the PhD degree) in March 1916 for his work on highly composite numbers, sections of the first part of which had been published the preceding year in the Proceedings of the London Mathematical Society. The paper was more than 50 pages long and proved various properties of such numbers. Hardy disliked this topic area but remarked that though it engaged with what he called the 'backwater of mathematics', in it Ramanujan displayed 'extraordinary mastery over the algebra of inequalities'.
On 6 December 1917, Ramanujan was elected to the London Mathematical Society. On 2 May 1918, he was elected a Fellow of the Royal Society, the second Indian admitted, after Ardaseer Cursetjee in 1841. At age 31, Ramanujan was one of the youngest Fellows in the Royal Society's history. He was elected "for his investigation in elliptic functions and the Theory of Numbers." On 13 October 1918, he was the first Indian to be elected a Fellow of Trinity College, Cambridge.
Illness and death
Ramanujan had numerous health problems throughout his life. His health worsened in England; possibly he was also less resilient due to the difficulty of keeping to the strict dietary requirements of his religion there and because of wartime rationing in 1914–18. He was diagnosed with tuberculosis and a severe vitamin deficiency, and confined to a sanatorium. He attempted suicide in late 1917 or early 1918 by jumping on the tracks of a London underground station. Scotland Yard arrested him for attempting suicide (which was a crime), but released him after Hardy intervened. In 1919, Ramanujan returned to Kumbakonam, Madras Presidency, where he died in 1920 aged 32. After his death, his brother Tirunarayanan compiled Ramanujan's remaining handwritten notes, consisting of formulae on singular moduli, hypergeometric series and continued fractions. In his last days, though in severe pain, "he continued doing his mathematics filling sheet after sheet with numbers", Janaki Ammal recounts.
Ramanujan's widow, Smt. Janaki Ammal, moved to Bombay. In 1931, she returned to Madras and settled in Triplicane, where she supported herself on a pension from Madras University and income from tailoring. In 1950, she adopted a son, W. Narayanan, who eventually became an officer of the State Bank of India and raised a family. In her later years, she was granted a lifetime pension from Ramanujan's former employer, the Madras Port Trust, and pensions from, among others, the Indian National Science Academy and the state governments of Tamil Nadu, Andhra Pradesh and West Bengal. She continued to cherish Ramanujan's memory, and was active in efforts to increase his public recognition; prominent mathematicians, including George Andrews, Bruce C. Berndt and Béla Bollobás made it a point to visit her while in India. She died at her Triplicane residence in 1994.
A 1994 analysis of Ramanujan's medical records and symptoms by D. A. B. Young concluded that his medical symptoms—including his past relapses, fevers, and hepatic conditions—were much closer to those resulting from hepatic amoebiasis, an illness then widespread in Madras, than tuberculosis. He had two episodes of dysentery before he left India. When not properly treated, amoebic dysentery can lie dormant for years and lead to hepatic amoebiasis, whose diagnosis was not then well established. At the time, if properly diagnosed, amoebiasis was a treatable and often curable disease; British soldiers who contracted it during the First World War were being successfully cured of amoebiasis around the time Ramanujan left England.
Personality and spiritual life
Ramanujan has been described as a person of a somewhat shy and quiet disposition, a dignified man with pleasant manners. He lived a simple life at Cambridge. Ramanujan's first Indian biographers describe him as a rigorously orthodox Hindu. He credited his acumen to his family goddess, Namagiri Thayar (Goddess Mahalakshmi) of Namakkal. He looked to her for inspiration in his work and said he dreamed of blood drops that symbolised her consort, Narasimha. Later he had visions of scrolls of complex mathematical content unfolding before his eyes. He often said, "An equation for me has no meaning unless it expresses a thought of God."
Hardy cites Ramanujan as remarking that all religions seemed equally true to him. Hardy further argued that Ramanujan's religious belief had been romanticised by Westerners and overstated—in reference to his belief, not practice—by Indian biographers. At the same time, he remarked on Ramanujan's strict vegetarianism.
Similarly, in an interview with Frontline, Berndt said, "Many people falsely promulgate mystical powers to Ramanujan's mathematical thinking. It is not true. He has meticulously recorded every result in his three notebooks," further speculating that Ramanujan worked out intermediate results on slate that he could not afford the paper to record more permanently.
Berndt reported that Janaki said in 1984 that Ramanujan spent so much of his time on mathematics that he did not go to the temple, that she and her mother often fed him because he had no time to eat, and that most of the religious stories attributed to him originated with others. However, his orthopraxy was not in doubt.
Mathematical achievements
In mathematics, there is a distinction between insight and formulating or working through a proof. Ramanujan proposed an abundance of formulae that could be investigated later in depth. G. H. Hardy said that Ramanujan's discoveries are unusually rich and that there is often more to them than initially meets the eye. As a byproduct of his work, new directions of research were opened up. Examples of the most intriguing of these formulae include infinite series for π, one of which is given below:

$\frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum_{k=0}^{\infty} \frac{(4k)!\,(1103 + 26390k)}{(k!)^4\, 396^{4k}}$
This result is based on the negative fundamental discriminant d = −4·58 = −232 with class number h(d) = 2. Further, 26390 = 5·7·13·58 and 16·9801 = 396², which is related to the fact that

$e^{\pi\sqrt{58}} = 396^4 - 104.000000177\ldots$
This might be compared to Heegner numbers, which have class number 1 and yield similar formulae.
Ramanujan's series for π converges extraordinarily rapidly and forms the basis of some of the fastest algorithms used to calculate π. Truncating the sum to the first term also gives the approximation 9801√2/4412 for π, which is correct to six decimal places; truncating it to the first two terms gives a value correct to 14 decimal places.
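That convergence rate is easy to reproduce with Python's arbitrary-precision decimal module (an illustrative computation using only the standard library):

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50

def ramanujan_pi(terms):
    # 1/pi = (2*sqrt(2)/9801) * sum_k (4k)!(1103 + 26390k) / ((k!)^4 * 396^(4k))
    s = sum(Decimal(factorial(4 * k) * (1103 + 26390 * k)) /
            Decimal(factorial(k) ** 4 * 396 ** (4 * k)) for k in range(terms))
    return 1 / (2 * Decimal(2).sqrt() / 9801 * s)

print(ramanujan_pi(1))  # -> 3.14159273...        (six correct decimal places)
print(ramanujan_pi(2))  # -> 3.14159265358979...  (about 14 correct places)
```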
One of Ramanujan's remarkable capabilities was the rapid solution of problems, illustrated by an incident in which P. C. Mahalanobis posed a puzzle about house numbers on a street; Ramanujan immediately dictated the answer as a continued fraction covering the whole class of such problems.
His intuition also led him to derive some previously unknown identities, such as

$\left(1 + 2\sum_{n=1}^{\infty} \frac{\cos n\theta}{\cosh n\pi}\right)^{-2} + \left(1 + 2\sum_{n=1}^{\infty} \frac{\cosh n\theta}{\cosh n\pi}\right)^{-2} = \frac{2\Gamma^4\left(\frac{3}{4}\right)}{\pi}$

for all θ such that |Re(θ)| < π and |Im(θ)| < π, where Γ(z) is the gamma function; the right-hand side is related to a special value of the Dedekind eta function. Expanding into series of powers and equating coefficients of θ⁰, θ⁴, and θ⁸ gives some deep identities for the hyperbolic secant.
In 1918, Hardy and Ramanujan studied the partition function extensively. They gave a non-convergent asymptotic series that permits exact computation of the number of partitions of an integer. In 1937, Hans Rademacher refined their formula to find an exact convergent series solution to this problem. Ramanujan and Hardy's work in this area gave rise to a powerful new method for finding asymptotic formulae called the circle method.
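For scale, the sketch below (an illustration using only the standard library) computes the exact partition count via Euler's pentagonal-number recurrence and compares it with the leading Hardy–Ramanujan asymptotic $p(n) \approx \frac{1}{4n\sqrt{3}}\,e^{\pi\sqrt{2n/3}}$:

```python
import math

def partitions(n):
    # Euler's pentagonal-number recurrence:
    # p(m) = sum over k >= 1 of (-1)^(k+1) * [p(m - k(3k-1)/2) + p(m - k(3k+1)/2)]
    p = [1] + [0] * n
    for m in range(1, n + 1):
        k, sign = 1, 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > m:
                break
            p[m] += sign * p[m - g1]
            if g2 <= m:
                p[m] += sign * p[m - g2]
            k, sign = k + 1, -sign
    return p[n]

n = 100
exact = partitions(n)  # -> 190569292
approx = math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))
print(exact, f"{approx:.4e}")  # the asymptotic is within a few percent at n = 100
```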
In the last year of his life, Ramanujan discovered mock theta functions. For many years, these functions were a mystery, but they are now known to be the holomorphic parts of harmonic weak Maass forms.
The Ramanujan conjecture
Although there are numerous statements that could have borne the name Ramanujan conjecture, one was highly influential in later work. In particular, the connection of this conjecture with conjectures of André Weil in algebraic geometry opened up new areas of research. That Ramanujan conjecture is an assertion on the size of the tau-function, which has a generating function as the discriminant modular form Δ(q), a typical cusp form in the theory of modular forms. It was finally proven in 1973, as a consequence of Pierre Deligne's proof of the Weil conjectures. The reduction step involved is complicated. Deligne won a Fields Medal in 1978 for that work.
In his paper "On certain arithmetical functions", Ramanujan defined the so-called delta-function, whose coefficients are called (the Ramanujan tau function). He proved many congruences for these numbers, such as for primes . This congruence (and others like it that Ramanujan proved) inspired Jean-Pierre Serre (1954 Fields Medalist) to conjecture that there is a theory of Galois representations that "explains" these congruences and more generally all modular forms. is the first example of a modular form to be studied in this way. Deligne (in his Fields Medal-winning work) proved Serre's conjecture. The proof of Fermat's Last Theorem proceeds by first reinterpreting elliptic curves and modular forms in terms of these Galois representations. Without this theory, there would be no proof of Fermat's Last Theorem.
Ramanujan's notebooks
While still in Madras, Ramanujan recorded the bulk of his results in four notebooks of looseleaf paper. They were mostly written up without any derivations. This is probably the origin of the misapprehension that Ramanujan was unable to prove his results and simply thought up the final result directly. Mathematician Bruce C. Berndt, in his review of these notebooks and Ramanujan's work, says that Ramanujan most certainly was able to prove most of his results, but chose not to record the proofs in his notes.
This may have been for any number of reasons. Since paper was very expensive, Ramanujan did most of his work and perhaps his proofs on slate, after which he transferred the final results to paper. At the time, slates were commonly used by mathematics students in the Madras Presidency. He was also quite likely to have been influenced by the style of G. S. Carr's book, which stated results without proofs. It is also possible that Ramanujan considered his work to be for his personal interest alone and therefore recorded only the results.
The first notebook has 351 pages with 16 somewhat organised chapters and some unorganised material. The second has 256 pages in 21 chapters and 100 unorganised pages, and the third 33 unorganised pages. The results in his notebooks inspired numerous papers by later mathematicians trying to prove what he had found. Hardy himself wrote papers exploring material from Ramanujan's work, as did G. N. Watson, B. M. Wilson, and Bruce Berndt.
In 1976, George Andrews rediscovered a fourth notebook with 87 unorganised pages, the so-called "lost notebook".
Hardy–Ramanujan number 1729
The number 1729 is known as the Hardy–Ramanujan number after a famous visit by Hardy to see Ramanujan at a hospital. In Hardy's words:
I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No", he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."
Immediately before this anecdote, Hardy quoted Littlewood as saying, "Every positive integer was one of [Ramanujan's] personal friends."
The two different ways are:

1729 = 1³ + 12³ = 9³ + 10³
Generalisations of this idea have created the notion of "taxicab numbers".
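The minimality claim is straightforward to confirm by brute force (an illustrative search, not a historical method):

```python
from collections import defaultdict

sums = defaultdict(list)
for a in range(1, 30):          # 30 is ample: 1729 needs cubes only up to 12^3
    for b in range(a, 30):
        sums[a ** 3 + b ** 3].append((a, b))

two_way = {n: ways for n, ways in sums.items() if len(ways) >= 2}
smallest = min(two_way)
print(smallest, two_way[smallest])  # -> 1729 [(1, 12), (9, 10)]
```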
Mathematicians' views of Ramanujan
In his obituary of Ramanujan, written for Nature in 1920, Hardy observed that Ramanujan's work primarily involved fields less known even among other pure mathematicians, concluding:
Hardy further said:
As an example, Hardy commented on 15 theorems in the first letter. Of those, the first 13 are correct and insightful, the 14th is incorrect but insightful, and the 15th is correct but misleading.
(14): The coefficient of $x^n$ in $(1 - 2x + 2x^4 - 2x^9 + \cdots)^{-1}$ is the integer nearest to $\frac{1}{4n}\left(\cosh(\pi\sqrt{n}) - \frac{\sinh(\pi\sqrt{n})}{\pi\sqrt{n}}\right)$. This "was one of the most fruitful he ever made, since it ended by leading us to all our joint work on partitions".
When asked about the methods Ramanujan used to arrive at his solutions, Hardy said they were "arrived at by a process of mingled argument, intuition, and induction, of which he was entirely unable to give any coherent account." He also said that he had "never met his equal, and can compare him only with Euler or Jacobi". Hardy thought Ramanujan worked in a 19th-century style, where arriving at correct formulas was more important than systematic formal theories. Hardy thought his achievements were greatest in algebra, especially hypergeometric series and continued fractions.
He discovered fewer new things in analysis, possibly because he lacked the formal education and did not find books to learn it from, but rediscovered many results, including the prime number theorem. In analysis, he worked on the elliptic functions and the analytic theory of numbers. In analytic number theory, he was as imaginative as usual, but much of what he imagined was wrong. Hardy blamed this on the inherent difficulty of analytic number theory, where imagination had led many great mathematicians astray. In analytic number theory, rigorous proof is more important than imagination, the opposite of Ramanujan's style. His "one great failure" is that he knew "nothing at all about the theory of analytic functions".
Littlewood reportedly said that helping Ramanujan catch up with European mathematics beyond what was available in India was very difficult because each new point mentioned to Ramanujan caused him to produce original ideas that prevented Littlewood from continuing the lesson.
K. Srinivasa Rao has said, "As for his place in the world of Mathematics, we quote Bruce C. Berndt: 'Paul Erdős has passed on to us Hardy's personal ratings of mathematicians. Suppose that we rate mathematicians on the basis of pure talent on a scale from 0 to 100. Hardy gave himself a score of 25, J. E. Littlewood 30, David Hilbert 80 and Ramanujan 100.'" During a May 2011 lecture at IIT Madras, Berndt said that over the last 40 years, as nearly all of Ramanujan's conjectures had been proven, there had been greater appreciation of Ramanujan's work and brilliance, and that Ramanujan's work was now pervading many areas of modern mathematics and physics.
Posthumous recognition
The year after his death, Nature listed Ramanujan among other distinguished scientists and mathematicians on a "Calendar of Scientific Pioneers" who had achieved eminence. Ramanujan's home state of Tamil Nadu celebrates 22 December (Ramanujan's birthday) as 'State IT Day'. Stamps picturing Ramanujan were issued by the government of India in 1962, 2011, 2012 and 2016.
Since Ramanujan's centennial year, his birthday, 22 December, has been annually celebrated as Ramanujan Day by the Government Arts College, Kumbakonam, where he studied, and at the IIT Madras in Chennai. The International Centre for Theoretical Physics (ICTP) has created a prize in Ramanujan's name for young mathematicians from developing countries in cooperation with the International Mathematical Union, which nominates members of the prize committee. SASTRA University, a private university based in Tamil Nadu, has instituted the SASTRA Ramanujan Prize of US$10,000 to be given annually to a mathematician not exceeding age 32 for outstanding contributions in an area of mathematics influenced by Ramanujan.
Based on the recommendations of a committee appointed by the University Grants Commission (UGC), Government of India, the Srinivasa Ramanujan Centre, established by SASTRA, has been declared an off-campus centre under the ambit of SASTRA University. House of Ramanujan Mathematics, a museum of Ramanujan's life and work, is also on this campus. SASTRA purchased and renovated the house where Ramanujan lived at Kumabakonam.
In 2011, on the 125th anniversary of his birth, the Indian government declared that 22 December will be celebrated every year as National Mathematics Day. Then Indian Prime Minister Manmohan Singh also declared that 2012 would be celebrated as National Mathematics Year and 22 December as National Mathematics Day of India.
Ramanujan IT City is an information technology (IT) special economic zone (SEZ) in Chennai that was built in 2011. Situated next to the Tidel Park, it comprises two zones of office space.
Commemorative postal stamps
Commemorative stamps released by India Post (by year):
In popular culture
The Man Who Loved Numbers is a 1988 PBS NOVA documentary about Ramanujan (S15, E9).
The Man Who Knew Infinity is a 2015 film based on Kanigel's book of the same name. British actor Dev Patel portrays Ramanujan.
Ramanujan, an Indo-British collaboration film chronicling Ramanujan's life, was released in 2014 by the independent film company Camphor Cinema. The cast and crew include director Gnana Rajasekaran, cinematographer Sunny Joseph and editor B. Lenin. Indian and English stars Abhinay Vaddi, Suhasini Maniratnam, Bhama, Kevin McGowan and Michael Lieber star in pivotal roles.
Nandan Kudhyadi directed the Indian documentary films The Genius of Srinivasa Ramanujan (2013) and Srinivasa Ramanujan: The Mathematician and His Legacy (2016) about the mathematician.
Ramanujan (The Man Who Reshaped 20th Century Mathematics), an Indian docudrama film directed by Akashdeep released in 2018.
M. N. Krish's thriller novel The Steradian Trail weaves Ramanujan and his accidental discovery into its plot connecting religion, mathematics, finance and economics.
Partition, a play by Ira Hauptman about Hardy and Ramanujan, was first performed in 2013.
The play First Class Man by Alter Ego Productions was based on David Freeman's First Class Man. The play centres around Ramanujan and his complex and dysfunctional relationship with Hardy. On 16 October 2011 it was announced that Roger Spottiswoode, best known for his James Bond film Tomorrow Never Dies, is working on the film version, starring Siddharth.
A Disappearing Number is a British stage production by the company Complicite that explores the relationship between Hardy and Ramanujan.
David Leavitt's novel The Indian Clerk explores the events following Ramanujan's letter to Hardy.
Google honoured Ramanujan on his 125th birth anniversary by replacing its logo with a doodle on its home page.
Ramanujan was mentioned in the 1997 film Good Will Hunting, in a scene where professor Gerald Lambeau (Stellan Skarsgård) explains to Sean Maguire (Robin Williams) the genius of Will Hunting (Matt Damon) by comparing him to Ramanujan.
Selected papers
Posthumously published extract of a longer, unpublished manuscript.
Further works of Ramanujan's mathematics
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part I (Springer, 2005, )
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part II, (Springer, 2008, )
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part III, (Springer, 2012, )
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part IV, (Springer, 2013, )
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part V, (Springer, 2018, )
M. P. Chaudhary, A simple solution of some integrals given by Srinivasa Ramanujan, (Resonance: J. Sci. Education – publication of Indian Academy of Science, 2008)
M.P. Chaudhary, Mock theta functions to mock theta conjectures, SCIENTIA, Series A: Math. Sci., (22)(2012) 33–46.
M.P. Chaudhary, On modular relations for the Roger-Ramanujan type identities, Pacific J. Appl. Math., 7(3)(2016) 177–184.
Selected publications on Ramanujan and his work
Selected publications on works of Ramanujan
This book was originally published in 1927 after Ramanujan's death. It contains the 37 papers published in professional journals by Ramanujan during his lifetime. The third reprint contains additional commentary by Bruce C. Berndt.
These books contain photocopies of the original notebooks as written by Ramanujan.
This book contains photocopies of the pages of the "Lost Notebook".
Problems posed by Ramanujan, Journal of the Indian Mathematical Society.
This was produced from scanned and microfilmed images of the original manuscripts by expert archivists of Roja Muthiah Research Library, Chennai.
See also
1729 (number)
Brown numbers
List of amateur mathematicians
List of Indian mathematicians
Ramanujan graph
Ramanujan summation
Ramanujan's constant
Ramanujan's ternary quadratic form
Rank of a partition
Footnotes
References
External links
Media links
Feature Film on Mathematics Genius Ramanujan by Dev Benegal and Stephen Fry
BBC radio programme about Ramanujan – episode 5
A biographical song about Ramanujan's life
Biographical links
A short biography of Ramanujan
"Our Devoted Site for Great Mathematical Genius"
Other links
A Study Group For Mathematics: Srinivasa Ramanujan Iyengar
The Ramanujan Journal – An international journal devoted to Ramanujan
International Math Union Prizes, including a Ramanujan Prize
Hindu.com: Norwegian and Indian mathematical geniuses, Ramanujan – Essays and Surveys , Ramanujan's growing influence, Ramanujan's mentor
Hindu.com: The sponsor of Ramanujan
"Ramanujan's mock theta function puzzle solved"
Ramanujan's papers and notebooks
Sample page from the second notebook
Ramanujan on Fried Eye''
1887 births
1920 deaths
Scientists from Tamil Nadu
20th-century Indian mathematicians
Indian Hindus
Mental calculators
Indian combinatorialists
Indian number theorists
Fellows of Trinity College, Cambridge
Fellows of the Royal Society
Pi-related people
People from Erode district
University of Madras alumni
People from Thanjavur district
19th-century Indian mathematicians
Number theorists
19th-century Hindus
Infectious disease deaths in India
Mathematicians from British India
People from the Kingdom of Mysore
Fellows of the Royal Society of Arts | Srinivasa Ramanujan | [
"Mathematics"
] | 9,515 | [
"Pi-related people",
"Number theorists",
"Pi",
"Number theory"
] |
47,719 | https://en.wikipedia.org/wiki/Coulomb | The coulomb (symbol: C) is the unit of electric charge in the International System of Units (SI). It is defined to be equal to the electric charge delivered by a 1 ampere current in 1 second. It is used to define the elementary charge e.
Definition
The SI defines the coulomb as "the quantity of electricity carried in 1 second by a current of 1 ampere". The value of the elementary charge e is then defined to be exactly 1.602176634×10⁻¹⁹ C. Since one coulomb expressed in elementary charges is the reciprocal of that value,
it is approximately 6.241509×10¹⁸ elementary charges and is thus not an integer multiple of the elementary charge.
The coulomb was previously defined via the ampere, which was itself defined in terms of the force between two current-carrying wires. Using that definition of the ampere, the coulomb was originally defined as the charge delivered by a current of 1 A flowing for 1 s (1 A⋅s).
The 2019 redefinition of the ampere and other SI base units fixed the numerical value of the elementary charge when expressed in coulombs and therefore fixed the value of the coulomb when expressed as a multiple of the fundamental charge.
SI prefixes
Like other SI units, the coulomb can be modified by adding a prefix that multiplies it by a power of 10.
Conversions
The magnitude of the electrical charge of one mole of elementary charges (approximately 6.02214076×10²³, the Avogadro number) is known as a faraday unit of charge (closely related to the Faraday constant). One faraday equals approximately 96485.33 C. In terms of the Avogadro constant (N_A), one coulomb is equal to approximately 1.036×10⁻⁵ mol × N_A elementary charges.
Every farad of capacitance can hold one coulomb per volt across the capacitor.
One ampere hour equals 3600 C, hence 1 mA⋅h = 3.6 C.
One statcoulomb (statC), the obsolete CGS electrostatic unit of charge (esu), is approximately 3.3356×10⁻¹⁰ C, or about one-third of a nanocoulomb.
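The conversions above can be reproduced in a few lines of Python (an illustrative sketch using the exact SI values):

```python
E = 1.602176634e-19   # elementary charge in coulombs (exact in the SI)
N_A = 6.02214076e23   # Avogadro constant per mole (exact in the SI)

print(f"{1 / E:.6e}")    # elementary charges per coulomb: ~6.241509e+18
print(f"{E * N_A:.2f}")  # one faraday in coulombs: ~96485.33
print(1 * 3600)          # one ampere-hour in coulombs: 3600
print(0.001 * 3600)      # one milliampere-hour in coulombs: 3.6
```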
In everyday terms
The charges in static electricity from rubbing materials together are typically a few microcoulombs.
The amount of charge that travels through a lightning bolt is typically around 15 C, although for large bolts this can be up to 350 C.
The amount of charge that travels through a typical alkaline AA battery from being fully charged to discharged is about 5 kC = 5000 C ≈ 1400 mA⋅h.
A typical smartphone battery can hold about 10 kC ≈ 2800 mA⋅h.
Name and history
By 1878, the British Association for the Advancement of Science had defined the volt, ohm, and farad, but not the coulomb. In 1881, the International Electrical Congress, now the International Electrotechnical Commission (IEC), approved the volt as the unit for electromotive force, the ampere as the unit for electric current, and the coulomb as the unit of electric charge.
At that time, the volt was defined as the potential difference [i.e., what is nowadays called the "voltage (difference)"] across a conductor when a current of one ampere dissipates one watt of power.
The coulomb (later "absolute coulomb" or "abcoulomb" for disambiguation) was part of the EMU system of units. The "international coulomb" based on laboratory specifications for its measurement was introduced by the IEC in 1908. The entire set of "reproducible units" was abandoned in 1948 and the "international coulomb" became the modern coulomb.
See also
Abcoulomb, a cgs unit of charge
Ampère's circuital law
Coulomb's law
Electrostatics
Elementary charge
Faraday constant, the number of coulombs per mole of elementary charges
Notes and references
SI derived units
Units of electrical charge | Coulomb | [
"Physics",
"Mathematics"
] | 743 | [
"Physical quantities",
"Electric charge",
"Quantity",
"Units of electrical charge",
"Units of measurement"
] |
47,732 | https://en.wikipedia.org/wiki/Fourier-transform%20spectroscopy | Fourier-transform spectroscopy (FTS) is a measurement technique whereby spectra are collected based on measurements of the coherence of a radiative source, using time-domain or space-domain measurements of the radiation, electromagnetic or not. It can be applied to a variety of types of spectroscopy including optical spectroscopy, infrared spectroscopy (FTIR, FT-NIRS), nuclear magnetic resonance (NMR) and magnetic resonance spectroscopic imaging (MRSI), mass spectrometry and electron spin resonance spectroscopy.
There are several methods for measuring the temporal coherence of the light (see: field-autocorrelation), including the continuous-wave and the pulsed Fourier-transform spectrometer or Fourier-transform spectrograph.
The term "Fourier-transform spectroscopy" reflects the fact that in all these techniques, a Fourier transform is required to turn the raw data into the actual spectrum, and in many of the cases in optics involving interferometers, is based on the Wiener–Khinchin theorem.
Conceptual introduction
Measuring an emission spectrum
One of the most basic tasks in spectroscopy is to characterize the spectrum of a light source: how much light is emitted at each different wavelength. The most straightforward way to measure a spectrum is to pass the light through a monochromator, an instrument that blocks all of the light except the light at a certain wavelength (the un-blocked wavelength is set by a knob on the monochromator). Then the intensity of this remaining (single-wavelength) light is measured. The measured intensity directly indicates how much light is emitted at that wavelength. By varying the monochromator's wavelength setting, the full spectrum can be measured. This simple scheme in fact describes how some spectrometers work.
Fourier-transform spectroscopy is a less intuitive way to get the same information. Rather than allowing only one wavelength at a time to pass through to the detector, this technique lets through a beam containing many different wavelengths of light at once, and measures the total beam intensity. Next, the beam is modified to contain a different combination of wavelengths, giving a second data point. This process is repeated many times. Afterwards, a computer takes all this data and works backwards to infer how much light there is at each wavelength.
To be more specific, between the light source and the detector, there is a certain configuration of mirrors that allows some wavelengths to pass through but blocks others (due to wave interference). The beam is modified for each new data point by moving one of the mirrors; this changes the set of wavelengths that can pass through.
As mentioned, computer processing is required to turn the raw data (light intensity for each mirror position) into the desired result (light intensity for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform (hence the name, "Fourier-transform spectroscopy"). The raw data is sometimes called an "interferogram". Because computer processing is required in any case, and because these methods can analyze very small amounts of substance, it is often beneficial to automate many aspects of the sample preparation. The sample can be better preserved and the results are much easier to replicate. Both of these benefits are important, for instance, in testing situations that may later involve legal action, such as those involving drug specimens.
Measuring an absorption spectrum
The method of Fourier-transform spectroscopy can also be used for absorption spectroscopy. The primary example is "FTIR Spectroscopy", a common technique in chemistry.
In general, the goal of absorption spectroscopy is to measure how well a sample absorbs or transmits light at each different wavelength. Although absorption spectroscopy and emission spectroscopy are different in principle, they are closely related in practice; any technique for emission spectroscopy can also be used for absorption spectroscopy. First, the emission spectrum of a broadband lamp is measured (this is called the "background spectrum"). Second, the emission spectrum of the same lamp shining through the sample is measured (this is called the "sample spectrum"). The sample will absorb some of the light, causing the spectra to be different. The ratio of the "sample spectrum" to the "background spectrum" is directly related to the sample's absorption spectrum.
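As a sketch of this final arithmetic only (the array values and names are illustrative; production FTIR software also performs apodization, phase correction, and baseline handling):

```python
import numpy as np

# Hypothetical intensities on a shared wavenumber grid.
background = np.array([100.0, 120.0, 110.0, 90.0])  # lamp alone
sample = np.array([80.0, 30.0, 100.0, 85.0])        # lamp shining through the sample

transmittance = sample / background    # fraction of light the sample passes
absorbance = -np.log10(transmittance)  # Beer-Lambert absorbance at each wavenumber
print(absorbance)
```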
Accordingly, the technique of "Fourier-transform spectroscopy" can be used both for measuring emission spectra (for example, the emission spectrum of a star), and absorption spectra (for example, the absorption spectrum of a liquid).
Continuous-wave Michelson or Fourier-transform spectrograph
The Michelson spectrograph is similar to the instrument used in the Michelson–Morley experiment. Light from the source is split into two beams by a half-silvered mirror, one is reflected off a fixed mirror and one off a movable mirror, which introduces a time delay—the Fourier-transform spectrometer is just a Michelson interferometer with a movable mirror. The beams interfere, allowing the temporal coherence of the light to be measured at each different time delay setting, effectively converting the time domain into a spatial coordinate. By making measurements of the signal at many discrete positions of the movable mirror, the spectrum can be reconstructed using a Fourier transform of the temporal coherence of the light. Michelson spectrographs are capable of very high spectral resolution observations of very bright sources.
The Michelson or Fourier-transform spectrograph was popular for infra-red applications at a time when infra-red astronomy only had single-pixel detectors. Imaging Michelson spectrometers are a possibility, but in general have been supplanted by imaging Fabry–Pérot instruments, which are easier to construct.
Extracting the spectrum
The intensity as a function of the path-length difference $p$ (also denoted as retardation) in the interferometer and the wavenumber $\tilde{\nu}$ is

$$I(p, \tilde{\nu}) = I(\tilde{\nu})\left[1 + \cos\left(2\pi\tilde{\nu}p\right)\right],$$

where $I(\tilde{\nu})$ is the spectrum to be determined. Note that it is not necessary for $I(\tilde{\nu})$ to be modulated by the sample before the interferometer. In fact, most FTIR spectrometers place the sample after the interferometer in the optical path. The total intensity at the detector is

$$I(p) = \int_0^\infty I(p, \tilde{\nu})\, d\tilde{\nu} = \int_0^\infty I(\tilde{\nu})\left[1 + \cos\left(2\pi\tilde{\nu}p\right)\right] d\tilde{\nu}.$$

This is just a Fourier cosine transform. The inverse gives us our desired result $I(\tilde{\nu})$ in terms of the measured quantity $I(p)$:

$$I(\tilde{\nu}) = 4\int_0^\infty \left[I(p) - \tfrac{1}{2}I(p=0)\right]\cos\left(2\pi\tilde{\nu}p\right) dp.$$
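A minimal numerical sketch of this inversion, assuming an ideal, noiseless interferogram sampled on a uniform retardation grid (grid sizes, line positions, and variable names are all illustrative):

```python
import numpy as np

# Retardation samples p (cm) and a test spectrum of two lines (wavenumbers in 1/cm).
p = np.linspace(0.0, 1.0, 20000)
dp = p[1] - p[0]
lines = {800.0: 1.0, 1200.0: 0.5}  # wavenumber: intensity

# Forward model: I(p) is the sum of I(nu) * (1 + cos(2*pi*nu*p)) over the lines.
interferogram = sum(a * (1.0 + np.cos(2.0 * np.pi * nu * p))
                    for nu, a in lines.items())

# Inversion: cosine transform after removing the constant (p-independent) offset.
ac_part = interferogram - interferogram.mean()
nu_grid = np.arange(0.0, 2000.0, 1.0)
spectrum = np.array([2.0 * dp * np.sum(ac_part * np.cos(2.0 * np.pi * nu * p))
                     for nu in nu_grid])

print(nu_grid[spectrum.argmax()])  # ~800, the strongest line in the test spectrum
```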
Pulsed Fourier-transform spectrometer
A pulsed Fourier-transform spectrometer does not employ transmittance techniques. In the most general description of pulsed FT spectrometry, a sample is exposed to an energizing event which causes a periodic response. The frequency of the periodic response, as governed by the field conditions in the spectrometer, is indicative of the measured properties of the analyte.
Examples of pulsed Fourier-transform spectrometry
In magnetic spectroscopy (EPR, NMR), a microwave pulse (EPR) or a radio frequency pulse (NMR) in a strong ambient magnetic field is used as the energizing event. This tips the magnetic particles to an angle with the ambient field, causing them to gyrate (precess). The gyrating spins then induce a periodic current in a detector coil. Each spin exhibits a characteristic frequency of gyration (relative to the field strength) which reveals information about the analyte.
In Fourier-transform mass spectrometry, the energizing event is the injection of the charged sample into the strong electromagnetic field of a cyclotron. These particles travel in circles, inducing a current in a fixed coil on one point in their circle. Each traveling particle exhibits a characteristic cyclotron frequency-field ratio revealing the masses in the sample.
Free induction decay
Pulsed FT spectrometry gives the advantage of requiring a single, time-dependent measurement which can easily deconvolute a set of similar but distinct signals. The resulting composite signal is called a free induction decay, because typically the signal will decay due to inhomogeneities in sample frequency, or simply unrecoverable loss of signal due to entropic loss of the property being measured.
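A toy illustration of this deconvolution, assuming a synthetic two-component decay (the frequencies, decay rates, and amplitudes are made up for the example):

```python
import numpy as np

# Synthetic free induction decay: two decaying tones sampled at 10 kHz for 1 s.
t = np.arange(0.0, 1.0, 1.0e-4)
fid = (1.0 * np.exp(-5.0 * t) * np.cos(2.0 * np.pi * 440.0 * t)
       + 0.6 * np.exp(-8.0 * t) * np.cos(2.0 * np.pi * 1250.0 * t))

# A single Fourier transform separates the overlapping decays into distinct peaks.
spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(t.size, d=1.0e-4)
print(freqs[spectrum.argmax()])  # ~440 Hz, the stronger component
```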
Nanoscale spectroscopy with pulsed sources
Pulsed sources allow for the utilization of Fourier-transform spectroscopy principles in scanning near-field optical microscopy techniques. Particularly in nano-FTIR, where the scattering from a sharp probe-tip is used to perform spectroscopy of samples with nanoscale spatial resolution, a high-power illumination from pulsed infrared lasers makes up for a relatively small scattering efficiency (often < 1%) of the probe.
Stationary forms of Fourier-transform spectrometers
In addition to the scanning forms of Fourier-transform spectrometers, there are a number of stationary or self-scanned forms. While the analysis of the interferometric output is similar to that of the typical scanning interferometer, significant differences apply, as shown in the published analyses. Some stationary forms retain the Fellgett multiplex advantage, and their use in the spectral region where detector noise limits apply is similar to the scanning forms of the FTS. In the photon-noise limited region, the application of stationary interferometers is dictated by specific consideration for the spectral region and the application.
Fellgett advantage
One of the most important advantages of Fourier-transform spectroscopy was shown by P. B. Fellgett, an early advocate of the method. The Fellgett advantage, also known as the multiplex principle, states that when obtaining a spectrum whose measurement noise is dominated by detector noise (which is independent of the power of radiation incident on the detector), a multiplex spectrometer such as a Fourier-transform spectrometer will produce a relative improvement in signal-to-noise ratio, compared to an equivalent scanning monochromator, of the order of the square root of m, where m is the number of sample points comprising the spectrum. However, if the detector is shot-noise dominated, the noise is proportional to the square root of the power; thus, for a broad boxcar spectrum (continuous broadband source), the noise is also proportional to the square root of m, precisely offsetting the Fellgett advantage. For line emission sources the situation is even worse and there is a distinct "multiplex disadvantage", as the shot noise from a strong emission component will overwhelm the fainter components of the spectrum. Shot noise is the main reason Fourier-transform spectrometry was never popular for ultraviolet (UV) and visible spectra.
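In symbols, and under the standard assumption of equal total measurement time, the trade-off can be sketched as follows (a summary of the argument above, not an instrument-specific result; $m$ is the number of spectral elements):

$$\frac{\mathrm{SNR}_{\mathrm{FTS}}}{\mathrm{SNR}_{\mathrm{scan}}} \sim \sqrt{m} \qquad \text{(detector-noise limited)}$$

$$\frac{\mathrm{SNR}_{\mathrm{FTS}}}{\mathrm{SNR}_{\mathrm{scan}}} \sim \frac{\sqrt{m}}{\sqrt{m}} = 1 \qquad \text{(shot-noise limited, broadband source)}$$

The advantage cancels in the shot-noise case because the multiplexed detector collects the photon noise of all $m$ spectral elements simultaneously.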
See also
Applied spectroscopy
Forensic chemistry
Forensic polymer engineering
Nuclear magnetic resonance
Time stretch dispersive Fourier transform
Infrared spectroscopy
Infrared spectroscopy of metal carbonyls
nano-FTIR
Fellgett's advantage
References
External links
Description of how a Fourier transform spectrometer works
The Michelson or Fourier transform spectrograph
Internet Journal of Vibrational Spectroscopy – How FTIR works
Fourier Transform Spectroscopy Topical Meeting and Tabletop Exhibit
Spectroscopy
Fourier analysis
Scientific techniques | Fourier-transform spectroscopy | [
"Physics",
"Chemistry"
] | 2,184 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
47,742 | https://en.wikipedia.org/wiki/Brooklyn%20Bridge | The Brooklyn Bridge is a hybrid cable-stayed/suspension bridge in New York City, spanning the East River between the boroughs of Manhattan and Brooklyn. Opened on May 24, 1883, the Brooklyn Bridge was the first fixed crossing of the East River. It was also the longest suspension bridge in the world at the time of its opening, with a main span of 1,595.5 feet (486.3 m) and a deck 127 feet (38.7 m) above Mean High Water. The span was originally called the New York and Brooklyn Bridge or the East River Bridge but was officially renamed the Brooklyn Bridge in 1915.
Proposals for a bridge connecting Manhattan and Brooklyn were first made in the early 19th century, which eventually led to the construction of the current span, designed by John A. Roebling. The project's chief engineer, his son Washington Roebling, contributed further design work, assisted by the latter's wife, Emily Warren Roebling. Construction started in 1870 and was overseen by the New York Bridge Company, which in turn was controlled by the Tammany Hall political machine. Numerous controversies and the novelty of the design prolonged the project over thirteen years. After opening, the Brooklyn Bridge underwent several reconfigurations, having carried horse-drawn vehicles and elevated railway lines until 1950. To alleviate increasing traffic flows, additional bridges and tunnels were built across the East River. Following gradual deterioration, the Brooklyn Bridge was renovated several times, including in the 1950s, 1980s, and 2010s.
The Brooklyn Bridge is the southernmost of four vehicular bridges directly connecting Manhattan Island and Long Island, with the Manhattan Bridge, the Williamsburg Bridge, and the Queensboro Bridge to the north. Only passenger vehicles and pedestrian and bicycle traffic are permitted. A major tourist attraction since its opening, the Brooklyn Bridge has become an icon of New York City. Over the years, the bridge has been used as the location of various stunts and performances, as well as several crimes, attacks and vandalism. The Brooklyn Bridge is designated a National Historic Landmark, a New York City landmark, and a National Historic Civil Engineering Landmark.
Description
The Brooklyn Bridge, an early example of a steel-wire suspension bridge, uses a hybrid cable-stayed/suspension bridge design, with both vertical and diagonal suspender cables. Its stone towers are neo-Gothic, with characteristic pointed arches. The New York City Department of Transportation (NYCDOT), which maintains the bridge, says that its original paint scheme was "Brooklyn Bridge Tan" and "Silver", but other accounts state that it was originally entirely "Rawlins Red".
Deck
To provide sufficient clearance for shipping in the East River, the Brooklyn Bridge incorporates long approach viaducts on either end to raise it from low ground on both shores. Including approaches, the Brooklyn Bridge is a total of long when measured between the curbs at Park Row in Manhattan and Sands Street in Brooklyn. A separate measurement of is sometimes given; this is the distance from the curb at Centre Street in Manhattan.
Suspension span
The main span between the two suspension towers is 1,595.5 feet (486.3 m) long and 85 feet (26 m) wide. The bridge "elongates and contracts between the extremes of temperature from 14 to 16 inches". Navigational clearance is 127 feet (38.7 m) above Mean High Water (MHW). A 1909 Engineering Magazine article said that, at the center of the span, the height above MHW could fluctuate due to temperature and traffic loads, while more rigid spans had a lower maximum deflection.
The side spans, between each suspension tower and each side's suspension anchorages, are long. At the time of construction, engineers had not yet discovered the aerodynamics of bridge construction, and bridge designs were not tested in wind tunnels. John Roebling designed the Brooklyn Bridge's truss system to be six to eight times as strong as he thought it needed to be. As such, the open truss structure supporting the deck is, by its nature, subject to fewer aerodynamic problems. However, due to a supplier's fraudulent substitution of inferior-quality wire in the initial construction, the bridge was reappraised at the time as being only four times as strong as necessary.
The main span and side spans are supported by a structure containing trusses that run parallel to the roadway, each of which is deep. Originally there were six trusses, but two were removed during a late-1940s renovation. The trusses allow the Brooklyn Bridge to hold a total load of , a design consideration from when it originally carried heavier elevated trains. These trusses are held up by suspender ropes, which hang downward from each of the four main cables. Crossbeams run between the trusses at the top, and diagonal and vertical stiffening beams run on the outside and inside of each roadway.
An elevated pedestrian-only promenade runs in between the two roadways and above them. It typically runs below the level of the crossbeams, except at the areas surrounding each tower. Here, the promenade rises to just above the level of the crossbeams, connecting to a balcony that slightly overhangs the two roadways. The path is generally wide. The iron railings were produced by Janes & Kirtland, a Bronx iron foundry that also made the United States Capitol dome and the Bow Bridge in Central Park.
Approaches
Each of the side spans is reached by an approach ramp. The approach ramp from the Brooklyn side is shorter than the approach ramp from the Manhattan side. The approaches are supported by Renaissance-style arches made of masonry; the arch openings themselves were filled with brick walls, with small windows within. The approach ramp contains nine arch or iron-girder bridges across side streets in Manhattan and Brooklyn.
Underneath the Manhattan approach, a series of brick slopes or "banks" was developed into a skate park, the Brooklyn Banks, in the late 1980s. The park uses the approach's support pillars as obstacles. In the mid-2010s, the Brooklyn Banks were closed to the public because the area was being used as a storage site during the bridge's renovation. The skateboarding community has attempted to save the banks on multiple occasions; after the city destroyed the smaller banks in the 2000s, the city government agreed to keep the larger banks for skateboarding. When the NYCDOT removed the bricks from the banks in 2020, skateboarders started an online petition. In the 2020s, local resident Rosa Chang advocated for the space under the Manhattan approach to be converted into a recreational area known as Gotham Park. Some of the space under the Manhattan approach reopened in May 2023 as a park called the Arches; this was followed in November 2024 by another section of parkland.
Cables
The Brooklyn Bridge contains four main cables, which descend from the tops of the suspension towers and help support the deck. Two are located to the outside of the bridge's roadways, while two are in the median of the roadways. Each main cable measures 15.75 inches (40 cm) in diameter and contains 5,282 parallel, galvanized steel wires wrapped closely together in a cylindrical shape. These wires are bundled in 19 individual strands, with 278 wires to a strand. This was the first use of bundling in a suspension bridge and took several months for workers to tie together. Since the 2000s, the main cables have also supported a series of 24-watt LED lighting fixtures, referred to as "necklace lights" due to their shape.
In addition, either 1,088, 1,096, or 1,520 galvanized steel wire suspender cables hang downward from the main cables. Another 400 cable stays extend diagonally from the towers. The vertical suspender cables and diagonal cable stays hold up the truss structure around the bridge deck. The bridge's suspenders originally used wire rope, which was replaced in the 1980s with galvanized steel made by Bethlehem Steel. The vertical suspender cables measure long, and the diagonal stays measure long.
Anchorages
Each side of the bridge contains an anchorage for the main cables. The anchorages are trapezoidal limestone structures located slightly inland of the shore, measuring at the base and at the top. Each anchorage weighs . The Manhattan anchorage rests on a foundation of bedrock while the Brooklyn anchorage rests on clay.
The anchorages both have four anchor plates, one for each of the main cables, which are located near ground level and parallel to the ground. The anchor plates measure , with a thickness of and weigh each. Each anchor plate is connected to the respective main cable by two sets of nine eyebars, each of which is about long and up to thick. The chains of eyebars curve downward from the cables toward the anchor plates, and the eyebars vary in size depending on their position.
The anchorages also contain numerous passageways and compartments. Starting in 1876, in order to fund the bridge's maintenance, the New York City government made the large vaults under the bridge's Manhattan anchorage available for rent, and they were in constant use during the early 20th century. The vaults were used to store wine, as they were kept at a consistent temperature due to a lack of air circulation. The Manhattan vault was called the "Blue Grotto" because of a shrine to the Virgin Mary next to an opening at the entrance. The vaults were closed for public use in the late 1910s and 1920s during World War I and Prohibition but were reopened thereafter. When New York magazine visited one of the cellars in 1978, it discovered a "fading inscription" on a wall reading: "Who loveth not wine, women and song, he remaineth a fool his whole life long." Leaks found within the vault's spaces necessitated repairs during the late 1980s and early 1990s. By the late 1990s, the chambers were being used to store maintenance equipment.
Towers
The bridge's two suspension towers are 276.5 feet (84.3 m) tall at the high water line. They are built of limestone, granite, and Rosendale cement. The limestone was quarried at the Clark Quarry in Essex County, New York. The granite blocks were quarried and shaped on Vinalhaven Island, Maine, under a contract with the Bodwell Granite Company, and delivered from Maine to New York by schooner. The Manhattan tower contains of masonry, while the Brooklyn tower has of masonry. There are 56 LED lamps mounted onto the towers.
Each tower contains a pair of Gothic Revival pointed arches, through which the roadways run. The arch openings are tall and wide. The tops of the towers are located above the floor of each arch opening, while the floors of the openings are above mean water level, giving the towers a total height of above mean high water.
Caissons
The towers rest on underwater caissons made of southern yellow pine and filled with cement. Inside both caissons were spaces for construction workers. The Manhattan side's caisson is slightly larger, measuring and located below high water, while the Brooklyn side's caisson measures and is located below high water. The caissons were designed to hold at least the weight of the towers which would exert a pressure of when fully built, but the caissons were over-engineered for safety. During an accident on the Brooklyn side, when air pressure was lost and the partially-built towers dropped full-force down, the caisson sustained an estimated pressure of with only minor damage. Most of the timber used in the bridge's construction, including in the caissons, came from mills at Gascoigne Bluff on St. Simons Island, Georgia.
The Brooklyn side's caisson, which was built first, originally had a height of and a ceiling composed of five layers of timber, each layer tall. Ten more layers of timber were later added atop the ceiling, and the entire caisson was wrapped in tin and wood for further protection against flooding. The thickness of the caisson's sides was at both the bottom and the top. The caisson had six chambers: two each for dredging, supply shafts, and airlocks.
The caisson on the Manhattan side was slightly different because it had to be installed at a greater depth. To protect against the increased air pressure at that depth, the Manhattan caisson had 22 layers of timber on its roof, seven more than its Brooklyn counterpart had. The Manhattan caisson also had fifty pipes for sand removal, a fireproof iron-boilerplate interior, and different airlocks and communication systems.
History
Planning
Proposals for a bridge between the then-separate cities of Brooklyn and New York had been suggested as early as 1800. At the time, the only travel between the two cities was by a number of ferry lines. Engineers presented various designs, such as chain or link bridges, though these were never built because of the difficulties of constructing a high enough fixed-span bridge across the extremely busy East River. There were also proposals for tunnels under the East River, but these were considered prohibitively expensive. German immigrant engineer John Augustus Roebling proposed building a suspension bridge over the East River in 1857. He had previously designed and constructed shorter suspension bridges, such as Roebling's Delaware Aqueduct in Lackawaxen, Pennsylvania, and the Niagara Suspension Bridge. In 1867, Roebling erected what became the John A. Roebling Suspension Bridge over the Ohio River between Cincinnati, Ohio, and Covington, Kentucky.
In February 1867, the New York State Senate passed a bill that allowed the construction of a suspension bridge from Brooklyn to Manhattan. Two months later, the New York and Brooklyn Bridge Company was incorporated with a board of directors (later converted to a board of trustees). There were twenty trustees in total: eight each appointed by the mayors of New York and Brooklyn, as well as the mayors of each city and the auditor and comptroller of Brooklyn. The company was tasked with constructing what was then known as the New York and Brooklyn Bridge. Alternatively, the span was just referred to as the "Brooklyn Bridge", a name originating in a January 25, 1867, letter to the editor sent to the Brooklyn Daily Eagle. The act of incorporation, which became law on April 16, 1867, authorized the cities of New York (now Manhattan) and Brooklyn to subscribe to $5 million in capital stock, which would fund the bridge's construction.
Roebling was subsequently named the chief engineer of the work and, by September 1867, had presented a master plan. According to the plan, the bridge would be longer and taller than any suspension bridge previously built. It would incorporate roadways and elevated rail tracks, whose tolls and fares would provide the means to pay for the bridge's construction. It would also include a raised promenade that served as a leisurely pathway. The proposal received much acclaim in both cities, and residents predicted that the New York and Brooklyn Bridge's opening would have as much of an impact as the Suez Canal, the first transatlantic telegraph cable or the first transcontinental railroad. By early 1869, however, some individuals started to criticize the project, saying either that the bridge was too expensive, or that the construction process was too difficult.
To allay concerns about the design of the New York and Brooklyn Bridge, Roebling set up a "Bridge Party" in March 1869, where he invited engineers and members of U.S. Congress to see his other spans. Following the bridge party in April, Roebling and several engineers conducted final surveys. During the process, it was determined that the main span would have to be raised from above MHW, requiring several changes to the overall design. In June 1869, while conducting these surveys, Roebling sustained a crush injury to his foot when a ferry pinned it against a piling. After amputation of his crushed toes, he developed a tetanus infection that left him incapacitated and resulted in his death the following month. Washington Roebling, John Roebling's 32-year-old son, was then hired to fill his father's role. Tammany Hall leader William M. Tweed also became involved in the bridge's construction because, as a major landowner in New York City, he had an interest in the project's completion. The New York and Brooklyn Bridge Company—later known simply as the New York Bridge Company—was actually overseen by Tammany Hall, and it approved Roebling's plans and designated him as chief engineer of the project.
Construction
Caissons
Construction of the Brooklyn Bridge began on January 2, 1870. The first work entailed the construction of two caissons, upon which the suspension towers would be built. The Brooklyn side's caisson was built at the Webb & Bell shipyard in Greenpoint, Brooklyn, and was launched into the river on March 19, 1870. Compressed air was pumped into the caisson, and workers entered the space to dig the sediment until it sank to the bedrock. As one sixteen-year-old from Ireland, Frank Harris, described the fearful experience:The six of us were working naked to the waist in the small iron chamber with the temperature of about 80 degrees Fahrenheit: In five minutes the sweat was pouring from us, and all the while we were standing in icy water that was only kept from rising by the terrific pressure. No wonder the headaches were blinding. Once the caisson had reached the desired depth, it was to be filled in with vertical brick piers and concrete. However, due to the unexpectedly high concentration of large boulders atop the riverbed, the Brooklyn caisson took several months to sink to the desired depth. Furthermore, in December 1870, its timber roof caught fire, delaying construction further. The "Great Blowout", as the fire was called, delayed construction for several months, since the holes in the caisson had to be repaired. On March 6, 1871, the repairs were finished, and the caisson had reached its final depth of ; it was filled with concrete five days later. Overall, about 264 individuals were estimated to have worked in the caisson every day, but because of high worker turnover, the final total was thought to be about 2,500 men in total. In spite of this, only a few workers were paralyzed. At its final depth, the caisson's air pressure was .
The Manhattan side's caisson was the next structure to be built. To ensure that it would not catch fire like its counterpart had, the Manhattan caisson was lined with fireproof plate iron. It was launched from Webb & Bell's shipyard on May 11, 1871, and maneuvered into place that September. Due to the extreme underwater air pressure inside the much deeper Manhattan caisson, many workers became sick with "the bends"—decompression sickness—during this work, despite the incorporation of airlocks (which were believed to help with decompression sickness at the time). This condition was unknown at the time and was first called "caisson disease" by the project physician, Andrew Smith. Between January 25 and May 31, 1872, Smith treated 110 cases of decompression sickness, while three workers died from the disease. When iron probes underneath the Manhattan caisson found the bedrock to be even deeper than expected, Washington Roebling halted construction due to the increased risk of decompression sickness. After the Manhattan caisson reached a depth of with an air pressure of , Washington deemed the sandy subsoil overlying the bedrock beneath to be sufficiently firm, and subsequently infilled the caisson with concrete in July 1872.
Washington Roebling himself suffered a paralyzing injury as a result of caisson disease shortly after ground was broken for the Brooklyn tower foundation. His debilitating condition left him unable to supervise the construction in person, so he designed the caissons and other equipment from his apartment, directing "the completion of the bridge through a telescope from his bedroom." His wife, Emily Warren Roebling, not only provided written communications between her husband and the engineers on site, but also understood mathematics, calculations of catenary curves, strengths of materials, bridge specifications, and the intricacies of cable construction. She spent the next 11 years helping supervise the bridge's construction, taking over much of the chief engineer's duties, including day-to-day supervision and project management.
Towers
After the caissons were completed, piers were constructed on top of each of them upon which masonry towers would be built. The towers' construction was a complex process that took four years. Since the masonry blocks were heavy, the builders transported them to the base of the towers using a pulley system with a continuous -diameter steel wire rope, operated by steam engines at ground level. The blocks were then carried up on a timber track alongside each tower and maneuvered into the proper position using a derrick atop the towers. The blocks sometimes vibrated the ropes because of their weight, but only once did a block fall.
Construction on the suspension towers started in mid-1872, and by the time work was halted for the winter in late 1872, parts of each tower had already been built. By mid-1873, there was substantial progress on the towers' construction. The Brooklyn side's tower had reached a height of above mean high water (MHW), while the tower on the Manhattan side had reached above MHW. The arches of the Brooklyn tower were completed by August 1874. The tower was substantially finished by December 1874 with the erection of saddle plates for the main cables at the top of the tower. However, the ornamentation on the Brooklyn tower could not be completed until the Manhattan tower was finished. The last stone on the Brooklyn tower was raised in June 1875 and the Manhattan tower was completed in July 1876. The saddle plates atop both towers were also raised in July 1876. The work was dangerous: by 1876, three workers had died having fallen from the towers, while nine other workers were killed in other accidents.
In 1875, while the towers were being constructed, the project had depleted its original $5 million budget. Two bridge commissioners, one each from Brooklyn and Manhattan, petitioned New York state lawmakers to allot another $8 million for construction. Ultimately, the legislators passed a law authorizing the allotment with the condition that the cities would buy the stock of Brooklyn Bridge's private stockholders.
Work proceeded concurrently on the anchorages on each side. The Brooklyn anchorage broke ground in January 1873 and was subsequently substantially completed in August 1875. The Manhattan anchorage was built in less time; having started in May 1875, it was mostly completed in July 1876. The anchorages could not be fully completed until the main cables were spun, at which point additional masonry would be added to the height of each anchorage.
Cables
The first temporary wire was stretched between the towers on August 15, 1876, using chrome steel provided by the Chrome Steel Company of Brooklyn. The wire was then stretched back across the river, and the two ends were spliced to form a traveler, a lengthy loop of wire connecting the towers, which was driven by a steam hoisting engine at ground level. The wire was one of two that were used to create a temporary footbridge for workers while cable spinning was ongoing. The next step was to send an engineer across the completed traveler wire in a boatswain's chair slung from the wire, to ensure it was safe enough. The bridge's master mechanic, E.F. Farrington, was selected for this task, and an estimated crowd of 10,000 people on both shores watched him cross. A second traveler wire was then stretched across the span, a task that was completed by August 30. The temporary footbridge, located some above the elevation of the future deck, was completed in February 1877.
By December 1876, a steel contract for the permanent cables still had not been awarded. There was disagreement over whether the bridge's cables should use the as-yet-untested Bessemer steel or the well-proven crucible steel. Until a permanent contract was awarded, the builders ordered of wire in the interim, 10 tons each from three companies, including Washington Roebling's own steel mill in Brooklyn. In the end, it was decided to use number 8 Birmingham gauge (approximately 4 mm or 0.165 inches in diameter) crucible steel, and a request for bids was distributed, to which eight companies responded. In January 1877, a contract for crucible steel was awarded to J. Lloyd Haigh, who was associated with bridge trustee Abram Hewitt, whom Roebling distrusted.
The spinning of the wires required the manufacture of large coils of it which were galvanized but not oiled when they left the factory. The coils were delivered to a yard near the Brooklyn anchorage. There they were dipped in linseed oil, hoisted to the top of the anchorage, dried out and spliced into a single wire, and finally coated with red zinc for further galvanizing. There were thirty-two drums at the anchorage yard, eight for each of the four main cables. Each drum had a capacity of of wire. The first experimental wire for the main cables was stretched between the towers on May 29, 1877, and spinning began two weeks later. All four main cables were being strung by that July. During that time, the temporary footbridge was unofficially opened to members of the public, who could receive a visitor's pass; by August 1877 several thousand visitors from around the world had used the footbridge. The visitor passes ceased that September after a visitor had an epileptic seizure and nearly fell off.
As the wires were being spun, work also commenced on the demolition of buildings on either side of the river for the Brooklyn Bridge's approaches; this work was mostly complete by September 1877. The following month, initial contracts were awarded for the suspender wires, which would hang down from the main cables and support the deck. By May 1878, the main cables were more than two-thirds complete. However, the following month, one of the wires slipped, killing two people and injuring three others. In 1877, Hewitt wrote a letter urging against the use of Bessemer steel in the bridge's construction. Bids had been submitted for both crucible steel and Bessemer steel; John A. Roebling's Sons submitted the lowest bid for Bessemer steel, but at Hewitt's direction, the contract was awarded to Haigh.
A subsequent investigation discovered that Haigh had substituted inferior quality wire in the cables. Of eighty rings of wire that were tested, only five met standards, and it was estimated that Haigh had earned $300,000 from the deception. At this point, it was too late to replace the cables that had already been constructed. Roebling determined that the poorer wire would leave the bridge only four times as strong as necessary, rather than six to eight times as strong. The inferior-quality wire was allowed to remain and 150 extra wires were added to each cable. To avoid public controversy, Haigh was not fired, but instead was required to personally pay for higher-quality wire. The contract for the remaining wire was awarded to the John A. Roebling's Sons, and by October 5, 1878, the last of the main cables' wires went over the river.
Nearing completion
After the suspender wires had been placed, workers began erecting steel crossbeams to support the roadway as part of the bridge's overall superstructure. Construction on the bridge's superstructure started in March 1879, but, as with the cables, the trustees initially disagreed on whether the steel superstructure should be made of Bessemer or crucible steel. That July, the trustees decided to award a contract for of Bessemer steel to the Edgemoor (or Edge Moor) Iron Works, based in Philadelphia, to be delivered by 1880. The trustees later passed another resolution for another of Bessemer steel. However, by February 1880 the steel deliveries had not started. That October, the bridge trustees questioned Edgemoor's president about the delay in steel deliveries. Despite Edgemoor's assurances that the contract would be fulfilled, the deliveries still had not been completed by November 1881. Brooklyn mayor Seth Low, who became part of the board of trustees in 1882, became the chairman of a committee tasked to investigate Edgemoor's failure to fulfill the contract. When questioned, Edgemoor's president stated that the delays were the fault of another contractor, the Cambria Iron Company, who was manufacturing the eyebars for the bridge trusses; at that point, the contract was supposed to be complete by October 1882.
Further complicating the situation, Washington Roebling had failed to appear at the trustees' meeting in June 1882, since he had gone to Newport, Rhode Island. After the news media discovered this, most of the newspapers called for Roebling to be fired as chief engineer, except for the Daily State Gazette of Trenton, New Jersey, and the Brooklyn Daily Eagle. Some of the longstanding trustees, including Henry C. Murphy, James S. T. Stranahan, and William C. Kingsley, were willing to vouch for Roebling, since construction progress on the Brooklyn Bridge was still ongoing. However, Roebling's behavior was considered suspect among the younger trustees who had joined the board more recently.
Construction on the bridge itself was noted in formal reports that Murphy presented each month to the mayors of New York and Brooklyn. For example, Murphy's report in August 1882 noted that the month's progress included 114 intermediate cords erected within a week, as well as 72 diagonal stays, 60 posts, and numerous floor beams, bridging trusses, and stay bars. By early 1883, the Brooklyn Bridge was considered mostly completed and was projected to open that June. Contracts for bridge lighting were awarded by February 1883, and a toll scheme was approved that March.
Opposition
There was substantial opposition to the bridge's construction from shipbuilders and merchants located to the north, who argued that the bridge would not provide sufficient clearance underneath for ships. In May 1876, these groups, led by Abraham Miller, filed a lawsuit in the United States District Court for the Southern District of New York against the cities of New York and Brooklyn.
In 1879, an Assembly Sub-Committee on Commerce and Navigation began an investigation into the Brooklyn Bridge. A seaman who had been hired to determine the height of the span testified to the committee about the difficulties that ship masters would experience in bringing their ships under the bridge when it was completed. Another witness, Edward Wellman Serrell, a civil engineer, said that the calculations of the bridge's assumed strength were incorrect. The Supreme Court decided in 1883 that the Brooklyn Bridge was a lawful structure.
Opening
The New York and Brooklyn Bridge was opened for use on May 24, 1883. Thousands of people attended the opening ceremony, and many ships were present in the East River for the occasion. Officially, Emily Warren Roebling was the first to cross the bridge. The bridge opening was also attended by U.S. president Chester A. Arthur and New York mayor Franklin Edson, who crossed the bridge and shook hands with Brooklyn mayor Seth Low at the Brooklyn end. Abram Hewitt gave the principal address.
Though Washington Roebling was unable to attend the ceremony (and rarely visited the site again), he held a celebratory banquet at his house on the day of the bridge opening. Further festivity included the performance by a band, gunfire from ships, and a fireworks display. On that first day, a total of 1,800 vehicles and 150,300 people crossed the span. Less than a week after the Brooklyn Bridge opened, ferry crews reported a sharp drop in patronage, while the bridge's toll operators were processing over a hundred people a minute. However, cross-river ferries continued to operate until 1942.
The bridge had cost about $15.5 million in 1883 dollars to build, of which Brooklyn paid two-thirds. The bonds to fund the construction would not be paid off until 1956. An estimated 27 men died during its construction. Since the New York and Brooklyn Bridge was the only bridge across the East River at that time, it was also called the East River Bridge. Until the construction of the nearby Williamsburg Bridge in 1903, the New York and Brooklyn Bridge was the longest suspension bridge in the world, 20% longer than any built previously.
At the time of opening, the Brooklyn Bridge was not complete; the proposed public transit across the bridge was still being tested, while the Brooklyn approach was being completed. On May 30, 1883, six days after the opening, a woman falling down a stairway at the Brooklyn approach caused a stampede which resulted in at least twelve people being crushed and killed. In subsequent lawsuits, the Brooklyn Bridge Company was acquitted of negligence. However, the company did install emergency phone boxes and additional railings, and the trustees approved a fireproofing plan for the bridge. Public transit service began with the opening of the New York and Brooklyn Bridge Railway, a cable car service, on September 25, 1883. On May 17, 1884, one of the circus master P. T. Barnum's most famous attractions, Jumbo the elephant, led a parade of 21 elephants over the Brooklyn Bridge. This helped to lessen doubts about the bridge's stability while also promoting Barnum's circus.
1880s to 1910s
Patronage across the Brooklyn Bridge increased in the years after it opened; a million people paid to cross in the first six months. The bridge carried 8.5 million people in 1884, its first full year of operation; this number doubled to 17 million in 1885 and again to 34 million in 1889. Many of these people were cable car passengers. Additionally, about 4.5 million pedestrians a year were crossing the bridge for free by 1892.
The first proposal to make changes to the bridge was sent in only two and a half years after it opened, when Linda Gilbert suggested glass steam-powered elevators and an observatory be added to the bridge and a fee charged for use, which would in part fund the bridge's upkeep and in part fund her prison reform charity. This proposal was considered but not acted upon. Numerous other proposals were made during the first fifty years of the bridge's life. Trolley tracks were added in the center lanes of both roadways in 1898, allowing trolleys to use the bridge as well. That year, the formerly separate City of Brooklyn was unified with New York City, and the Brooklyn Bridge fell under city control.
Concerns about the Brooklyn Bridge's safety were raised during the turn of the century. In 1898, traffic backups due to a dead horse caused one of the truss cords to buckle. There were more significant worries after twelve suspender cables snapped in 1901, though a thorough investigation found no other defects. After the 1901 incident, five inspectors were hired to examine the bridge each day, a service that cost $250,000 a year. The Brooklyn Rapid Transit Company, which operated routes across the Brooklyn Bridge, issued a notice in 1905 saying that the bridge had reached its transit capacity.
By 1890, due to the popularity of the Brooklyn Bridge, there were proposals to construct other bridges across the East River between Manhattan and Long Island. Although a second deck for the Brooklyn Bridge was proposed, it was thought to be infeasible because doing so would overload the bridge's structural capacity. The first new bridge across the East River, the Williamsburg Bridge, opened upstream in 1903 and connected Williamsburg, Brooklyn, with the Lower East Side of Manhattan. This was followed by the Queensboro Bridge between Queens and Manhattan in March 1909, and the Manhattan Bridge between Brooklyn and Manhattan in December 1909. Several subway, railroad, and road tunnels were also constructed, which helped to accelerate the development of Manhattan, Brooklyn, and Queens.
1910s to 1940s
Though carriages and cable-car customers had paid tolls ever since the bridge's opening, pedestrians were spared from the tolls originally. By the first decade of the 20th century, pedestrians were also paying tolls. Tolls on all four bridges across the East River—the Brooklyn Bridge, as well as the Manhattan, Williamsburg, and Queensboro bridges to the north—were abolished in July 1911 as part of a populist policy initiative headed by New York City mayor William Jay Gaynor. The city government passed a bill to officially name the structure the "Brooklyn Bridge" in January 1915.
Ostensibly in an attempt to reduce traffic on nearby city streets, Grover Whalen, the commissioner of Plant and Structures, banned motor vehicles from the Brooklyn Bridge on July 6, 1922. The real reason for the ban was an incident the same year where two cables slipped due to high traffic loads. Both Whalen and Roebling called for the renovation of the Brooklyn Bridge and the construction of a parallel bridge, though the parallel bridge was never built. Whalen's successor William Wirt Mills announced in 1924 that a new wood-block pavement would be installed, permitting motor vehicles to use the bridge again; motor traffic was again allowed on the bridge starting on May 12, 1925.
As part of an experiment, starting in November 1946, the Manhattan-bound roadway carried Brooklyn-bound traffic during the evening rush hours. The experiment ended after two months due to complaints about congestion.
Mid- to late 20th century
Upgrades
The first major upgrade to the Brooklyn Bridge commenced in 1948, when a contract to entirely reconstruct the approach ramps was awarded to David B. Steinman. The renovation was expected to double the capacity of the bridge's roadways to nearly 6,000 cars per hour, at a projected cost of $7 million. The renovation included the demolition of both the elevated and the trolley tracks on the roadways, the removal of trusses separating the inner elevated tracks from the existing vehicle lanes and the widening of each roadway from two to three lanes, as well as the construction of a new steel-and-concrete floor. In addition, new ramps were added to Adams Street, Cadman Plaza, and the Brooklyn Queens Expressway (BQE) on the Brooklyn side, and to Park Row on the Manhattan side. The bridge was briefly closed to all traffic for the first time ever in January 1950, and the trolley tracks closed that March to allow the widening work to occur. During the construction project, one roadway at a time was closed, allowing reduced traffic flows to cross the bridge in one direction only.
The widened south roadway was completed in May 1951, followed by the north roadway in October 1953. The restoration was finished in May 1954 with the completion of the reconstructed elevated promenade. While the rebuilding of the span was ongoing, a fallout shelter was constructed beneath the Manhattan approach in anticipation of the Cold War. The abandoned space in one of the masonry arches was stocked with emergency survival supplies for a potential nuclear attack by the Soviet Union; these supplies remained in place half a century later. In addition, defensive barriers were added to the bridge as a safeguard against sabotage.
Simultaneous with the rebuilding of the Brooklyn Bridge, a double-decked viaduct for the BQE was being built through an existing steel overpass of the bridge's Brooklyn approach ramp. The segment of the BQE from Brooklyn Bridge south to Atlantic Avenue opened in June 1954, but the direct ramp from the northbound BQE to the Manhattan-bound Brooklyn Bridge did not open until 1959. The city also widened the Adams Street approach in Brooklyn, between the bridge and Fulton Street, between 1954 and 1955. Subsequently, Boerum Place from Fulton Street south to Atlantic Avenue was also widened. This required the demolition of the old Kings County courthouse.
On the Manhattan side, the city approved a controversial rebuilding of the Manhattan entrance plaza in 1953. The project, which would add a grade-separated junction over Park Row, was hotly contested because it would require the demolition of 21 structures, including the old New York World Building. The reconstruction also necessitated the relocation of 410 families on Park Row. In December 1956, the city started a two-year renovation of the plaza. This required the closure of one roadway at a time, as was done during the rebuilding of the bridge itself. Work on redeveloping the area around the Manhattan approach started in the mid-1960s. At the same time, plans were announced for direct ramps to the elevated FDR Drive to alleviate congestion at the approach. The ramp from FDR Drive to the Brooklyn Bridge was opened in 1968, followed by the ramp from the bridge to FDR Drive the next year. A single ramp from the Manhattan-bound Brooklyn Bridge to northbound Park Row was constructed in 1970. A repainting of the bridge was announced two years later in advance of its 90th anniversary.
Deterioration and late-20th century repair
The Brooklyn Bridge gradually deteriorated due to age and neglect. While it had 200 full-time dedicated maintenance workers before World War II, that number dropped to five by the late 20th century, and the city as a whole only had 160 bridge maintenance workers. In 1974, heavy vehicles such as vans and buses were banned from the bridge to prevent further erosion of the concrete roadway. A report in The New York Times four years later noted that the cables were visibly fraying and the pedestrian promenade had holes in it. The city began planning to replace all the Brooklyn Bridge's cables at a cost of $115 million, as part of a larger project to renovate all four toll-free East River spans. By 1980, the Brooklyn Bridge was in such dire condition that it faced imminent closure. In some places, half of the strands in the cables were broken.
In June 1981, two of the diagonal stay cables snapped, killing a pedestrian. Subsequently, the anchorages were found to have developed rust, and an emergency cable repair was necessitated less than a month later after another cable developed slack. Following the incident, the city accelerated the timetable of its proposed cable replacement, and it commenced a $153 million rehabilitation of the Brooklyn Bridge in advance of the 100th anniversary. As part of the project, the bridge's original suspender cables installed by J. Lloyd Haigh were replaced by Bethlehem Steel in 1986, marking the cables' first replacement since construction. In addition, the staircase at Washington Street in Brooklyn was renovated, the stairs from Tillary and Adams Streets were replaced with a ramp, and the short flights of steps from the promenade to each tower's balcony were removed. In a smaller project, the bridge was floodlit at night starting in 1982 to highlight its architectural features.
Additional problems persisted, and in 1993, high levels of lead were discovered near the bridge's towers. Further emergency repairs were undertaken in mid-1999 after small concrete shards began falling from the bridge into the East River. The concrete deck had been installed during the 1950s renovations and had a lifespan of about 60 years. The Park Row exit from the bridge's westbound lanes was closed as a safety measure after the September 11, 2001, attacks on the nearby World Trade Center. That section of Park Row had been closed off since it ran right underneath 1 Police Plaza, the headquarters of the New York City Police Department (NYPD). In early 2003, to save money on electricity, the NYCDOT turned off the bridge's "necklace lights" at night. They were turned back on later that year after several private entities made donations to fund the lights.
21st century
After the 2007 collapse of the I-35W bridge in Minneapolis, public attention focused on the condition of bridges across the U.S. The New York Times reported that the Brooklyn Bridge approach ramps had received a "poor" rating during an inspection in 2007. However, a NYCDOT spokesman said that the poor rating did not indicate a dangerous state but rather implied it required renovation. In 2010, the NYCDOT began renovating the approaches and deck, as well as repainting the suspension span. Work included widening two approach ramps from one to two lanes by re-striping a new prefabricated ramp; raising clearance over the eastbound BQE at York Street; seismic retrofitting; replacement of rusted railings and safety barriers; and road deck resurfacing. The work necessitated detours for four years. At the time, the project was scheduled to be completed in 2014; but completion was later delayed to 2015, then again to 2017. The project's cost also increased from $508 million in 2010 to $811 million in 2016.
In August 2016, the NYCDOT announced that it would conduct a seven-month, $370,000 study to verify if the bridge could support a heavier upper deck that consisted of an expanded bicycle and pedestrian path. By then, about 10,000 pedestrians and 3,500 cyclists used the pathway on an average weekday. Work on the pedestrian entrance on the Brooklyn side was underway by 2017. The NYCDOT also indicated in 2016 that it planned to reinforce the Brooklyn Bridge's foundations to prevent it from sinking, as well as repair the masonry arches on the approach ramps, which had been damaged by Hurricane Sandy four years earlier. In July 2018, the New York City Landmarks Preservation Commission approved a further renovation of the Brooklyn Bridge's suspension towers and approach ramps. That December, the federal government gave the city $25 million in funding, which would pay for a $337 million rehabilitation of the bridge approaches and the suspension towers. Work started in late 2019 and was scheduled to be completed in four years. This restoration included removing bricks from the arches and putting fresh concrete behind them, using mortar from the same upstate quarries as the original mortar. The granite arches were also cleaned, revealing the original gray color of the stone, which had long been hidden by grime. Additionally, 56 LED lamps were installed on the bridge at a cost of $2.4 million.
In early 2020, City Council speaker Corey Johnson and the nonprofit Van Alen Institute hosted an international contest to solicit plans for the redesign of the bridge's walkway. Ultimately, in January 2021, the city decided to install a two-way protected bike path on the Manhattan-bound roadway, replacing the leftmost vehicular lane. The bike lane would allow the existing promenade to be used exclusively by pedestrians. Work on the bike lane started in June 2021, and the new path was completed on September 14, 2021. Despite the addition of the bike path, the bridge's walkway was still frequently overcrowded, prompting the city to propose in mid-2023 that street vendors be banned from the bridge and other bridges citywide. All vendors were banned from the bridge at the beginning of January 2024. The same month, the bridge's new LED lights were illuminated for the first time.
A plan for congestion pricing in New York City was approved in mid-2023, allowing the Metropolitan Transportation Authority to toll drivers who enter Manhattan south of 60th Street. Congestion pricing was implemented in January 2025. Most traffic to and from FDR Drive is exempt from the toll, but all other Manhattan-bound drivers pay a toll, which varies based on the time of day.
Usage
Vehicular traffic
Horse-drawn carriages have been allowed to use the Brooklyn Bridge's roadways since its opening. Originally, each of the two roadways carried two relatively narrow lanes of a different direction of traffic. In July 1922, motor vehicles were banned from the bridge; the ban lasted until May 1925.
After 1950, the main roadway carried six lanes of automobile traffic, three in each direction. It was then reduced to five lanes with the addition of a two-way bike lane on the Manhattan-bound side in 2021. Because of the roadway's posted height and weight restrictions, commercial vehicles and buses are prohibited from using the Brooklyn Bridge. The weight restrictions also prohibit heavy passenger vehicles such as pickup trucks and SUVs from using the bridge, though this is not often enforced in practice.
On the Brooklyn side, vehicles can enter the bridge from Tillary/Adams Streets to the south, Sands/Pearl Streets to the west, and exit 28B of the eastbound Brooklyn-Queens Expressway. In Manhattan, cars can enter from both the northbound and southbound FDR Drive, as well as Park Row to the west, Chambers/Centre Streets to the north, and Pearl Street to the south. However, the exit from the bridge to northbound Park Row was closed after the September 11 attacks because of increased security concerns: that section of Park Row ran under One Police Plaza, the NYPD headquarters.
Exit list
Vehicular access to the bridge is provided by a complex series of ramps on both sides of the bridge. There are two entrances to the bridge's pedestrian promenade on either side. The current configuration was constructed from the mid-1950s up until the early 1970s. After the September 11 attacks, the ramp onto Park Row was closed to public traffic; there are no plans to reopen it.
Rail traffic
Formerly, rail traffic operated on the Brooklyn Bridge as well. Cable cars and elevated railroads used the bridge until 1944, while trolleys ran until 1950.
Cable cars and elevated railroads
The New York and Brooklyn Bridge Railway, a cable car service, began operating on September 25, 1883; it ran on the inner lanes of the bridge, between terminals at the Manhattan and Brooklyn ends. Since Washington Roebling believed that steam locomotives would put excessive loads upon the structure of the Brooklyn Bridge, the cable car line was designed as a steam/cable-hauled hybrid. The cars were powered from a generating station under the Brooklyn approach. The cable cars could not only regulate their speed on the sloped upward and downward approaches, but also maintain a constant interval between each other. There were 24 cable cars in total.
Initially, the service ran with single-car trains, but patronage soon grew so much that by October 1883, two-car trains were in use. The line carried three million people in the first six months, nine million in 1884, and nearly 20 million in 1885 following the opening of the Brooklyn Union Elevated Railroad. Accordingly, the track layout was rearranged and more trains were ordered. At the same time, there were highly controversial plans to extend the elevated railroads onto the Brooklyn Bridge, under the pretext of extending the bridge itself. After disputes, the trustees agreed to build two elevated routes to the bridge on the Brooklyn side. Patronage continued to increase, and in 1888, the tracks were lengthened and even more cars were constructed to allow for four-car cable car trains. Electric wires for the trolleys were added by 1895, allowing for the potential future decommissioning of the steam/cable system. The terminals were rebuilt once more in July 1895, and, following the implementation of new electric cars in late 1896, the steam engines were dismantled and sold.
Following the unification of the cities of New York and Brooklyn in 1898, the New York and Brooklyn Bridge Railway ceased to be a separate entity that June and the Brooklyn Rapid Transit Company (BRT) assumed control of the line. The BRT started running through-services of elevated trains, which ran from Park Row Terminal in Manhattan to points in Brooklyn via the Sands Street station on the Brooklyn side. Before reaching Sands Street (at Tillary Street for Fulton Street Line trains, and at Bridge Street for Fifth Avenue Line and Myrtle Avenue Line trains), elevated trains bound for Manhattan were uncoupled from their steam locomotives. The elevated trains were then coupled to the cable cars, which would pull the passenger carriages across the bridge.
The BRT did not run any elevated train through services from 1899 to 1901. Due to increased patronage after the opening of the Interborough Rapid Transit Company (IRT)'s first subway line, the Park Row station was rebuilt in 1906. In the early 20th century, there were plans for Brooklyn Bridge elevated trains to run underground to the BRT's proposed Chambers Street station in Manhattan, though the connection was never opened. The overpass across William Street was closed in 1913 to make way for the proposed connection. In 1929, the overpass was reopened after it became clear that the connection would not be built.
After the IRT's Joralemon Street Tunnel and the Williamsburg Bridge tracks opened in 1908, the Brooklyn Bridge no longer held a monopoly on rail service between Manhattan and Brooklyn, and cable service ceased. New subway lines from the IRT and from the BRT's successor Brooklyn–Manhattan Transit Corporation (BMT), built in the 1910s and 1920s, posed significant competition to the Brooklyn Bridge rail services. With the opening of the Independent Subway System in 1932 and the subsequent unification of all three companies into a single entity in 1940, the elevated services started to decline, and the Park Row and Sands Street stations were greatly reduced in size. The Fifth Avenue and Fulton Street services across the Brooklyn Bridge were discontinued in 1940 and 1941 respectively, and the elevated tracks were abandoned permanently with the withdrawal of Myrtle Avenue services in 1944.
Trolleys
A plan for trolley service across the Brooklyn Bridge was presented in 1895. Two years later, the Brooklyn Bridge trustees agreed to a plan where trolleys could run across the bridge under ten-year contracts. Trolley service, which began in 1898, ran on what are now the two middle lanes of each roadway (shared with other traffic). When cable service was withdrawn in 1908, the trolley tracks on the Brooklyn side were rebuilt to alleviate congestion. Trolley service on the middle lanes continued until the elevated lines stopped using the bridge in 1944, when they moved to the protected center tracks. On March 5, 1950, the streetcars also stopped running, and the bridge was redesigned exclusively for automobile traffic.
Walkway
The Brooklyn Bridge has an elevated promenade open to pedestrians in the center of the bridge, located above the automobile lanes. The promenade is usually located below the height of the girders, except at the approach ramps leading to each tower's balcony. The path's width is constrained by obstacles such as protruding cables, benches, and stairways, which create "pinch points" at certain locations. The path narrows further at the locations where the main cables descend to the level of the promenade. Further exacerbating the situation, these "pinch points" are some of the most popular places to take pictures. As a result, in 2016, the NYCDOT announced that it planned to double the promenade's width.
A center line was painted to separate cyclists from pedestrians in 1971, creating one of the city's first dedicated bike lanes. Initially, the northern side of the promenade was used by pedestrians and the southern side by cyclists. In 2000, these were swapped, with cyclists taking the northern side and pedestrians taking the southern side. On September 14, 2021, the DOT closed off the inner-most car lane on the Manhattan-bound side with protective barriers and fencing to create a new bike path. Cyclists are now prohibited from the upper pedestrian lane.
Pedestrian access to the bridge from the Brooklyn side is from either the median of Adams Street at its intersection with Tillary Street or a staircase near Prospect Street between Cadman Plaza East and West. In Manhattan, the pedestrian walkway is accessible from crosswalks at the intersection of the bridge and Centre Street, or through a staircase leading to Park Row.
Emergency use
While the bridge has always permitted the passage of pedestrians, the promenade facilitates movement when other means of crossing the East River have become unavailable. During transit strikes by the Transport Workers Union in 1980 and 2005, people commuting to work used the bridge; they were joined by Mayors Ed Koch and Michael Bloomberg, who crossed as a gesture to the affected public. Pedestrians also walked across the bridge as an alternative to suspended subway services following the 1965, 1977, and 2003 blackouts, and after the September 11 attacks.
During the 2003 blackouts, many crossing the bridge reported a swaying motion. The higher-than-usual pedestrian load caused this swaying, which was amplified by the tendency of pedestrians to synchronize their footfalls with a sway. Several engineers expressed concern about how this would affect the bridge, although others noted that the bridge did withstand the event and that the redundancies in its design—the inclusion of the three support systems (suspension system, diagonal stay system, and stiffening truss)—make it "probably the best secured bridge against such movements going out of control". In designing the bridge, John Roebling had stated that the bridge would sag but not fall, even if one of these structural systems were to fail altogether.
Notable events
Stunts
There have been several notable jumpers from the Brooklyn Bridge. The first person to jump was Robert Emmet Odlum, brother of women's rights activist Charlotte Odlum Smith, on May 19, 1885. He struck the water at an angle and died shortly afterwards from internal injuries. Steve Brodie supposedly dropped from underneath the bridge in July 1886 and was briefly arrested for it, though there is some doubt about whether he actually jumped. Larry Donovan made a slightly higher jump from the railing a month afterward. The first known person to jump from the bridge with the intention of suicide was Francis McCarey in 1892. A lesser-known early jumper was James Duffy of County Cavan, Ireland, who on April 15, 1895, asked several men to watch him jump from the bridge. Duffy jumped and was not seen again. Additionally, the cartoonist Otto Eppers jumped and survived in 1910, and was then tried and acquitted for attempted suicide. The Brooklyn Bridge has since developed a reputation as a suicide bridge due to the number of jumpers who do so intending to kill themselves, though exact statistics are difficult to find.
Other notable feats have taken place on or near the bridge. In 1919, Giorgio Pessi piloted what was then one of the world's largest airplanes, the Caproni Ca.5, under the bridge. In 1993, bridge jumper Thierry Devaux illegally performed eight acrobatic bungee jumps above the East River close to the Brooklyn tower.
Crimes and terrorism
On March 1, 1994, Lebanese-born Rashid Baz opened fire on a van carrying members of the Chabad-Lubavitch Orthodox Jewish Movement, striking 16-year-old student Ari Halberstam and three others traveling on the bridge. Halberstam died five days later from his wounds, and Baz was later convicted of murder. He was apparently acting out of revenge for the Hebron massacre of Palestinian Muslims a few days prior to the incident. After initially classifying the killing as one committed out of road rage, the Justice Department reclassified the case in 2000 as a terrorist attack. The entrance ramp to the bridge on the Manhattan side was dedicated as the Ari Halberstam Memorial Ramp in 1995.
Several potential attacks or disasters have also been averted. In 1979, police disarmed a stick of dynamite placed under the Brooklyn approach, and an artist in Manhattan was arrested that year after another bombing attempt. In 2003, truck driver Iyman Faris was sentenced to about 20 years in prison for providing material support to Al-Qaeda, after an earlier plot to destroy the bridge by cutting through its support wires with blowtorches was thwarted.
Arrests
At 9:00 a.m. on May 19, 1977, artist Jack Bashkow climbed one of the towers for Bridging, a "media sculpture" by the performance group Art Corporation of America Inc. Seven artists climbed the largest bridges connected to Manhattan "to replace violence and fear in mass media for one day". When each of the artists had reached the tops of the bridges, they ignited bright-yellow flares at the same moment, resulting in rush hour traffic disruption, media attention, and the arrest of the climbers, though the charges were later dropped. Called "the first social-sculpture to use mass-media as art" by conceptual artist Joseph Beuys, the event was on the cover of the New York Post, received international attention, and received ABC Eyewitness News' 1977 Best News of the Year award. John Halpern documented the incident in the film Bridging, 1977. Halpern attempted another "bridging" "social sculpture" in 1979, when he planted a radio receiver, gunpowder and fireworks in a bucket atop one of the towers. The piece was later discovered by police, leading to his arrest for possessing a bomb.
On October 1, 2011, more than 700 protesters with the Occupy Wall Street movement were arrested while attempting to march across the bridge on the roadway. Protesters disputed the police account of the events and claimed that the arrests were the result of being trapped on the bridge by the NYPD. The majority of the arrests were subsequently dismissed.
On July 22, 2014, the two American flags on the flagpoles atop each tower were found to have been replaced by bleached-white American flags. Initially, cannabis activism was suspected as a motive, but on August 12, 2014, two Berlin artists claimed responsibility for hoisting the two white flags, having switched out the original flags with their replicas. The artists said that the flags were meant to celebrate "the beauty of public space" and the anniversary of the death of German-born John Roebling, and they denied that it was an "anti-American statement".
Anniversary celebrations
The 50th-anniversary celebrations on May 24, 1933, included a ceremony featuring an airplane show, ships, and fireworks, as well as a banquet. During the centennial celebrations on May 24, 1983, a flotilla of ships visited the harbor, officials held parades, and Grucci Fireworks held a fireworks display that evening. For the centennial, the Brooklyn Museum exhibited a selection of the original drawings made for the bridge's construction, including those by Washington Roebling. Media coverage of the centennial was declared "the public relations triumph of 1983" by Inc.
The 125th anniversary of the bridge's opening was celebrated by a five-day event on May 22–26, 2008, which included a live performance by the Brooklyn Philharmonic, a special lighting of the bridge's towers, and a fireworks display. Other events included a film series, historical walking tours, information tents, a series of lectures and readings, a bicycle tour of Brooklyn, a miniature golf course featuring Brooklyn icons, and other musical and dance performances. Just before the anniversary celebrations, artist Paul St George installed the Telectroscope, a video link on the Brooklyn side of the bridge that connected to a matching device on London's Tower Bridge. A renovated pedestrian connection to Dumbo, Brooklyn, was also reopened before the anniversary celebrations.
Impact
At the time of construction, contemporaries marveled at what technology was capable of, and the bridge became a symbol of the era's optimism. John Perry Barlow wrote in the late 20th century of the "literal and genuinely religious leap of faith" embodied in the bridge's construction, saying that the "Brooklyn Bridge required of its builders faith in their ability to control technology".
Historical designations and plaques
The Brooklyn Bridge has been listed as a National Historic Landmark since January 29, 1964, and was subsequently added to the National Register of Historic Places on October 15, 1966. The bridge has also been a New York City designated landmark since August 24, 1967, and was designated a National Historic Civil Engineering Landmark in 1972. In addition, it was placed on UNESCO's list of tentative World Heritage Sites in 2017.
A bronze plaque is attached to the Manhattan anchorage, which was constructed on the site of the Samuel Osgood House at 1 Cherry Street in Manhattan. Named after Samuel Osgood, a Massachusetts politician and lawyer, it was built in 1770 and served as the first U.S. presidential mansion. The Osgood House was demolished in 1856.
Another plaque on the Manhattan side of the pedestrian promenade, installed by the city in 1975, indicates the bridge's status as a city landmark.
Culture
The Brooklyn Bridge has had an impact on idiomatic American English. For example, references to "selling the Brooklyn Bridge" are frequent in American culture, sometimes presented as a historical reality but more often as an expression meaning an idea that strains credulity. George C. Parker and William McCloundy were two early 20th-century con men who may have perpetrated this scam successfully, particularly on new immigrants, although the author of The Brooklyn Bridge: A Cultural History wrote, "No evidence exists that the bridge has ever been sold to a 'gullible outlander'".
As a tourist attraction, the Brooklyn Bridge is a popular site for clusters of love locks, wherein a couple inscribes a date and their initials onto a lock, attaches it to the bridge, and throws the key into the water as a sign of their love. The practice is illegal in New York City, and the NYPD can give violators a $100 fine. NYCDOT workers periodically remove the love locks from the bridge at a cost of $100,000 per year.
To highlight the Brooklyn Bridge's cultural status, the city proposed building a Brooklyn Bridge museum near the bridge's Brooklyn end in the 1970s. Though the museum was ultimately not constructed, as many as 10,000 drawings and documents relating to it were found in a carpenter shop in Williamsburg in 1976. These documents were given to the New York City Municipal Archives, where they are normally located, though a selection of them were displayed at the Whitney Museum of American Art when they were discovered.
Media
The bridge is often featured in wide shots of the New York City skyline in television and film and has been depicted in numerous works of art. Fictional works have used the Brooklyn Bridge as a setting; for instance, the dedication of a portion of the bridge, and the bridge itself, were key components in the 2001 film Kate & Leopold. Furthermore, the Brooklyn Bridge has also served as an icon of America, with mentions in numerous songs, books, and poems. Among the most notable of these works is that of American Modernist poet Hart Crane, who used the Brooklyn Bridge as a central metaphor and organizing structure for his second book of poetry, The Bridge (1930).
The Brooklyn Bridge has also been lauded for its architecture. One of the first positive reviews was "The Bridge As A Monument", a Harper's Weekly piece written by architecture critic Montgomery Schuyler and published a week after the bridge's opening. In the piece, Schuyler wrote: "It so happens that the work which is likely to be our most durable monument, and to convey some knowledge of us to the most remote posterity, is a work of bare utility; not a shrine, not a fortress, not a palace, but a bridge." Architecture critic Lewis Mumford cited the piece as the impetus for serious architectural criticism in the U.S. He wrote that in the 1920s the bridge was a source of "joy and inspiration" in his childhood, and that it was a profound influence in his adolescence. Later critics would regard the Brooklyn Bridge as a work of art, as opposed to an engineering feat or a means of transport. Not all critics appreciated the bridge, however. Henry James, writing in the early 20th century, cited the bridge as an ominous symbol of the city's transformation into a "steel-souled machine room".
The construction of the Brooklyn Bridge is detailed in numerous media sources, including David McCullough's 1972 book The Great Bridge and Ken Burns's 1981 documentary Brooklyn Bridge. It is also described in Seven Wonders of the Industrial World, a BBC docudrama series with an accompanying book, as well as Chief Engineer: Washington Roebling, The Man Who Built the Brooklyn Bridge, a biography published in 2017.
See also
Brooklyn Bridge Park
Brooklyn Bridge trolleys
List of bridges and tunnels in New York City
List of bridges and tunnels on the National Register of Historic Places in New York
List of bridges documented by the Historic American Engineering Record in New York
List of National Historic Landmarks in New York City
List of New York City Designated Landmarks in Manhattan below 14th Street
List of New York City Designated Landmarks in Brooklyn
List of tallest structures built before the 20th century
National Register of Historic Places listings in Manhattan below 14th Street
National Register of Historic Places listings in Brooklyn
References
Notes
Citations
Bibliography
External links
Brooklyn Bridge – New York City Department of Transportation
Brooklyn Bridge at Historical Marker Database
1883 establishments in New York (state)
Bike paths in New York City
Bridges completed in 1883
Bridges in Brooklyn
Bridges in Manhattan
Bridges on the National Register of Historic Places in New York City
Bridges over the East River
Brooklyn Heights
Brooklyn–Manhattan Transit Corporation
Buildings and structures on the National Register of Historic Places in Manhattan
Civic Center, Manhattan
Dumbo, Brooklyn
Former railway bridges in the United States
Historic American Engineering Record in New York City
Historic Civil Engineering Landmarks
National Historic Landmarks in New York City
National Register of Historic Places in Brooklyn
New York City Designated Landmarks in Brooklyn
New York City Designated Landmarks in Manhattan
New York State Register of Historic Places in Kings County
New York State Register of Historic Places in New York County
Pedestrian bridges in New York City
Railroad bridges in New York City
Railroad bridges on the National Register of Historic Places in New York City
Railroad-related National Historic Landmarks
Road bridges in New York City
Road bridges on the National Register of Historic Places in New York City
Road-rail bridges in the United States
Steel bridges in the United States
Suspension bridges in New York City
Symbols of New York City
Tourist attractions in Brooklyn
Tourist attractions in Manhattan | Brooklyn Bridge | [
"Engineering"
] | 14,006 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
47,756 | https://en.wikipedia.org/wiki/List%20of%20small%20groups | The following list in mathematics contains the finite groups of small order up to group isomorphism.
Counts
For n = 1, 2, … the number of nonisomorphic groups of order n is
1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, ...
For counts of labeled groups, see the corresponding OEIS sequence.
Glossary
Each group is named by the Small Groups library as Goi, where o is the order of the group, and i is the index used to label the group within that order.
Common group names:
Zn: the cyclic group of order n (the notation Cn is also used; it is isomorphic to the additive group of Z/nZ)
Dihn: the dihedral group of order 2n (often the notation Dn or D2n is used)
K4: the Klein four-group of order 4, same as Z2 × Z2 and Dih2
D2n: the dihedral group of order 2n, the same as Dihn (notation used in section List of small non-abelian groups)
Sn: the symmetric group of degree n, containing the n! permutations of n elements
An: the alternating group of degree n, containing the even permutations of n elements, of order 1 for n ≤ 1, and order n!/2 otherwise
Dicn or Q4n: the dicyclic group of order 4n
Q8: the quaternion group of order 8, also Dic2
The notations Zn and Dihn have the advantage that point groups in three dimensions Cn and Dn do not have the same notation. There are more isometry groups than these two, of the same abstract group type.
The notation G × H denotes the direct product of the two groups; Gⁿ denotes the direct product of a group with itself n times. G ⋊ H denotes a semidirect product where H acts on G; this may also depend on the choice of action of H on G.
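For concreteness, the multiplication rule of the semidirect product (the standard definition, sketched here in generic notation rather than drawn from this article's sources):

```latex
% Semidirect product G \rtimes_\varphi H, for a homomorphism
% \varphi\colon H \to \operatorname{Aut}(G):
(g_1, h_1) \cdot (g_2, h_2) = \bigl( g_1 \,\varphi(h_1)(g_2),\; h_1 h_2 \bigr),
\qquad g_i \in G,\ h_i \in H .
% When \varphi is trivial, this reduces to the direct product G \times H.
```

Different choices of φ can give non-isomorphic groups, which is why G ⋊ H alone does not always determine the group; this is the dependence on the action noted above.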
Abelian and simple groups are noted. (For groups of order n < 60, the simple groups are precisely the cyclic groups Zn, for prime n.) The equality sign ("=") denotes isomorphism.
The identity element in the cycle graphs is represented by the black circle. The lowest order for which the cycle graph does not uniquely represent a group is order 16.
In the lists of subgroups, the trivial group and the group itself are not listed. Where there are several isomorphic subgroups, the number of such subgroups is indicated in parentheses.
Angle brackets <relations> show the presentation of a group.
List of small abelian groups
The finite abelian groups are either cyclic groups, or direct products thereof; see Abelian group. The numbers of nonisomorphic abelian groups of orders n = 1, 2, ... are
1, 1, 1, 2, 1, 1, 1, 3, 2, 1, 1, 2, 1, 1, 1, 5, 1, 2, 1, 2, ...
For counts of labeled abelian groups, see the corresponding OEIS sequence.
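By the fundamental theorem of finite abelian groups, the number of abelian groups of order n = ∏ pᵢ^eᵢ equals ∏ p(eᵢ), where p(·) is the integer partition function. A minimal, runnable Python sketch reproducing the sequence above (the function name is ours, not from any source):

```python
from math import prod

from sympy import factorint, npartitions  # npartitions(k) = partition number p(k)

def abelian_group_count(n: int) -> int:
    # Write n = prod(p_i ** e_i).  Each Sylow subgroup decomposes independently
    # into cyclic factors, one decomposition per partition of e_i, so the
    # number of abelian groups of order n is prod(p(e_i)).
    return prod(npartitions(e) for e in factorint(n).values())

print([abelian_group_count(n) for n in range(1, 21)])
# [1, 1, 1, 2, 1, 1, 1, 3, 2, 1, 1, 2, 1, 1, 1, 5, 1, 2, 1, 2]
```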
List of small non-abelian groups
The numbers of non-abelian groups, by order, are tabulated in the OEIS. However, many orders have no non-abelian groups. The orders for which a non-abelian group exists are
6, 8, 10, 12, 14, 16, 18, 20, 21, 22, 24, 26, 27, 28, 30, 32, 34, 36, 38, 39, 40, 42, 44, 46, 48, 50, ...
Classifying groups of small order
Small groups of prime power order pⁿ are given as follows:
Order p: The only group is cyclic.
Order p²: There are just two groups, both abelian; for p = 2, the brute-force check after this list verifies that the two are not isomorphic.
Order p³: There are three abelian groups, and two non-abelian groups. One of the non-abelian groups is the semidirect product of a normal cyclic subgroup of order p² by a cyclic group of order p. The other is the quaternion group for p = 2 and a group of exponent p for p > 2.
Order p⁴: The classification is complicated, and gets much harder as the exponent of p increases.
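As a concrete check of the order-p² case at p = 2, here is a minimal, self-contained Python sketch (the helper names are ours) that verifies by brute force over all bijections that Z4 and the Klein four-group are not isomorphic:

```python
from itertools import permutations

# Cayley table of Z4: addition modulo 4 on {0, 1, 2, 3}.
Z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]

# Cayley table of Z2 x Z2 (the Klein four-group): encode the pair (x, y)
# as the integer 2*x + y, so the group operation is bitwise XOR.
KLEIN = [[a ^ b for b in range(4)] for a in range(4)]

def isomorphic(t1, t2):
    """Brute force: does some bijection f satisfy f(a*b) = f(a)*f(b)?"""
    n = len(t1)
    return any(
        all(f[t1[a][b]] == t2[f[a]][f[b]] for a in range(n) for b in range(n))
        for f in permutations(range(n))
    )

print(isomorphic(Z4, KLEIN))  # False: the two groups of order 4 are not isomorphic
```

The same brute-force test is feasible for the smallest orders in this list, after which the factorial search space makes the dedicated algorithms behind the Small Groups library necessary.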
Most groups of small order have a Sylow p subgroup P with a normal p-complement N for some prime p dividing the order, so can be classified in terms of the possible primes p, p-groups P, groups N, and actions of P on N. In some sense this reduces the classification of these groups to the classification of p-groups. Some of the small groups that do not have a normal p-complement include:
Order 24: The symmetric group S4
Order 48: The binary octahedral group and the product
Order 60: The alternating group A5.
The smallest order for which it is not known how many nonisomorphic groups there are is 2048 = 2¹¹.
Small Groups Library
The GAP computer algebra system contains a package called the "Small Groups library," which provides access to descriptions of small order groups. The groups are listed up to isomorphism. At present, the library contains the following groups:
those of order at most 2000 except for order 1024 (423 164 062 groups in the library; the ones of order 1024 had to be skipped, as there are 49 487 365 422 additional nonisomorphic 2-groups of order 1024);
those of cubefree order at most 50000 (395 703 groups);
those of squarefree order;
those of order pⁿ for n at most 6 and p prime;
those of order p⁷ for p = 3, 5, 7, 11 (907 489 groups);
those of order p·qⁿ where qⁿ divides 2⁸, 3⁶, 5⁵ or 7⁴ and p is an arbitrary prime which differs from q;
those whose orders factorise into at most 3 primes (not necessarily distinct).
It contains explicit descriptions of the available groups in computer readable format.
The smallest order for which the Small Groups library does not have information is 1024.
See also
Classification of finite simple groups
Composition series
List of finite simple groups
Number of groups of a given order
Small Latin squares and quasigroups
Sylow theorems
Notes
References
External links
Particular groups in the Group Properties Wiki
GroupNames database
Hall, Jr., Marshall; Senior, James Kuhn (1964). The Groups of Order 2ⁿ (n ≤ 6). New York: Macmillan / London: Collier-Macmillan Ltd. LCCN 64016861
Groups that are small
Groups that are small
Finite groups | List of small groups | [
"Mathematics"
] | 1,392 | [
"Mathematical tables",
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
47,761 | https://en.wikipedia.org/wiki/Hitchhiking | Hitchhiking (also known as thumbing, autostop or hitching) is a means of transportation that is gained by asking individuals, usually strangers, for a ride in their car or other vehicle. The ride is usually, but not always, free.
Signaling methods
Hitchhikers use a variety of signals to indicate they need a ride. Indicators can be physical gestures or displays including written signs. The physical gestures, e.g., hand signals, hitchhikers use differ around the world:
In some African countries, the hitchhiker's hand is held with the palm facing upwards.
In most of Europe, North America, South America and Australia, most hitchhikers stand with their back facing the direction of travel. The hitchhiker typically extends their arm towards the road with the thumb of the closed hand pointing upward or in the direction of vehicle travel.
Legal status
Hitchhiking has historically been a common practice worldwide, and hence there are very few places in the world where laws restrict it. However, a minority of countries have laws that restrict hitchhiking at certain locations. In the United States, for example, some local governments have laws outlawing hitchhiking, on the basis of drivers' and hitchhikers' safety. In Canada, several highways have restrictions on hitchhiking, particularly in British Columbia and on the 400-series highways in Ontario. Hitchhiking is legal, and in some places even encouraged, in all European countries. Worldwide, however, even where hitchhiking is permitted, laws forbid it where pedestrians are banned, such as the Autobahn (Germany), Autostrade (Italy), motorways (United Kingdom and continental Europe, with the exception of, at least, Lithuania) or interstate highways (United States), although hitchhikers often obtain rides at entrances and truck stops, where doing so is legal throughout Europe except in Italy.
Community
In recent years, hitchhikers have started efforts to strengthen their community. Examples include the annual Hitchgathering, an event organized by hitchhikers, for hitchhikers, and websites such as hitchwiki, which are platforms for hitchhikers to share tips and provide a way of looking up good hitchhiking spots around the world.
Decline
In 2011, Freakonomics Radio reviewed the sparse data about hitchhiking and identified a steady decline in hitchhiking in the US since the 1970s, which it attributed to a number of factors, including less trust of strangers, lower air travel costs due to deregulation, the presence of more money in the economy to pay for travel, and more numerous and more reliable cars. A marked increase in fear of hitchhiking is thought to have been spurred by movies such as The Texas Chain Saw Massacre (1974) and The Hitcher (1986), and by a few real incidents involving imperiled hitchhikers, including the kidnapping of Colleen Stan in California.
Some British researchers discuss reasons for hitchhiking's decline in the UK, and possible means of reviving it in safer and more-organized forms.
Public policy support
Since the mid-2010s, local authorities in rural areas in Germany have started to support hitchhiking, and this has spread to Austria and the German-speaking region of Belgium. The objectives are both social and environmental: ride sharing improves mobility for local residents (particularly young and old people without their own cars) in places where public transport is inadequate, strengthening networking among local communities in an environmentally friendly way. This support typically takes the form of providing hitchhiking benches (in German Mitfahrbänke) where people hoping for a ride can wait for cars. These benches are usually brightly coloured and located at the exit from a village, sometimes at an existing bus stop lay-by where vehicles can pull in safely. Some are even provided with large fold-out or slide-out signs with place names allowing hitchers to clearly signal where they want to go. Some Mitfahrbänke have been installed with the help of the EU's LEADER programme for rural local development.
In Austria, Mitfahrbänke are especially common in Lower Austria and Tyrol, and are promoted by the Federal Ministry of Agriculture, Regions and Tourism under its klimaaktiv climate protection initiative. In 2018 the Tyrolean MobilitäterInnen network published a Manual for the Successful Introduction of Hitch-hiking Benches.
Safety
Limited data is available regarding the safety of hitchhiking. Compiling good safety data requires counting hitchhikers, counting rides, and counting problems, all difficult tasks.
Two studies on the topic are a 1974 California Highway Patrol study and a 1989 German federal police (Bundeskriminalamt Wiesbaden) study. The California study found that hitchhikers were not disproportionately likely to be victims of crime. The German study concluded that the actual risk is much lower than the publicly perceived risk; the authors did not advise against hitchhiking in general. They found that in some cases there were verbal disputes or inappropriate comments, but physical attacks were very rare.
Recommended safety practices include:
Asking for rides at gas stations instead of signaling at the roadside
Refusing rides from alcohol impaired drivers
Hitchhiking during daylight hours
Trusting one's instincts
Traveling with another hitchhiker; this measure decreases the likelihood of harm by a factor of six.
In the UK, The Scout Association specifically lists hitchhiking as an activity not permitted at any scouting event.
Around the world
Cuba
In Cuba, picking up hitchhikers is mandatory for government vehicles, if passenger space is available. Hitchhiking is encouraged, as Cuba has few cars, and hitchhikers use designated spots. Drivers pick up waiting riders on a first come, first served basis.
Israel
In Israel, hitchhiking is commonplace at designated locations called trempiyadas (a Hebrew coinage derived from the German trampen, "to hitchhike"). Travelers soliciting rides, called trempists, wait at trempiyadas, typically junctions of highways or main roads outside of a city.
Poland
Hitchhiking in Poland has a long history and is still popular. It was legalised and formalised in 1957: hitchhikers could buy booklets of coupons from travel agencies and hand the coupons to drivers who took them. At the end of each season, the drivers who had collected the most coupons could exchange them for prizes, and the others took part in a lottery. This so-called "Akcja Autostop" was popular until the end of the 1970s, but the sale of the booklet was discontinued in 1995.
United States
Hitchhiking became a common method of traveling during the Great Depression and during the counterculture of the 1960s.
Warnings of the potential dangers of picking up hitchhikers were publicized to drivers, who were advised that some hitchhikers would rob drivers and, in some cases, sexually assault or murder them. Other warnings were publicized to the hitchhikers themselves, alerting them to the same types of crimes being carried out by drivers. Still, hitchhiking was part of the American psyche and many people continued to stick out their thumbs, even in states where the practice had been outlawed.
Today, hitchhiking is legal in 44 of the 50 states, provided that the hitchhiker is not standing in the roadway or otherwise hindering the normal flow of traffic. Even in states where hitchhiking is illegal, hitchhikers are rarely ticketed. For example, the Wyoming Highway Patrol approached 524 hitchhikers in 2010, but only eight of them were cited (hitchhiking was subsequently legalized in Wyoming in 2013).
See also
Murders of Jacqueline Ansell-Lamb and Barbara Mayo – two unsolved murders of hitchhikers in England in 1970
Carpool
Flexible carpooling – hitchhiking formalized via designated meeting points
Freighthopping
Hitchwiki
Ridesharing company
Slugging – hitchhiking motivated by high-occupancy vehicle lanes in several urban areas
References
Bibliography
External links
Itinerant living
Hand gestures
Sustainable transport
Fingers | Hitchhiking | [
"Physics"
] | 1,974 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
47,763 | https://en.wikipedia.org/wiki/Environmental%20economics | Environmental economics is a sub-field of economics concerned with environmental issues. It has become a widely studied subject due to growing environmental concerns in the twenty-first century. Environmental economics "undertakes theoretical or empirical studies of the economic effects of national or local environmental policies around the world. ... Particular issues include the costs and benefits of alternative environmental policies to deal with air pollution, water quality, toxic substances, solid waste, and global warming."
Environmental economics is distinguished from ecological economics in that ecological economics emphasizes the economy as a subsystem of the ecosystem with its focus upon preserving natural capital. One survey of German economists found that ecological and environmental economics are different schools of economic thought, with ecological economists emphasizing "strong" sustainability and rejecting the proposition that human-made ("physical") capital can substitute for natural capital.
History
The modern field of environmental economics has been traced to the 1960s, with significant contributions from Post-Keynesian economist Paul Davidson, who had just completed a management position with the Continental Oil Company.
Topics and concepts
Market failure
Central to environmental economics is the concept of market failure. Market failure means that markets fail to allocate resources efficiently. As stated by Hanley, Shogren, and White (2007): "A market failure occurs when the market does not allocate scarce resources to generate the greatest social welfare. A wedge exists between what a private person does given market prices and what society might want him or her to do to protect the environment. Such a wedge implies wastefulness or economic inefficiency; resources can be reallocated to make at least one person better off without making anyone else worse off." This results in an inefficient market that needs to be corrected through avenues such as government intervention. Common forms of market failure include externalities, non-excludability and non-rivalry.
Externality
An externality exists when a person makes a choice that affects other people in a way that is not accounted for in the market price. An externality can be positive or negative, but the term is usually associated with negative externalities in environmental economics. For instance, water seepage from the upper floors of a residential building affects the lower floors. Another example concerns how the sale of Amazon timber disregards the amount of carbon dioxide released in the cutting. A firm emitting pollution will typically not take into account the costs that its pollution imposes on others. As a result, pollution may occur in excess of the 'socially efficient' level, which is the level that would exist if the market were required to account for the pollution. A classic definition influenced by Kenneth Arrow and James Meade is provided by Heller and Starrett (1976), who define an externality as "a situation in which the private economy lacks sufficient incentives to create a potential market in some good and the nonexistence of this market results in losses of Pareto efficiency". In economic terminology, externalities are examples of market failures, in which the unfettered market does not lead to an efficient outcome.
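In generic textbook notation (a sketch; the symbols are standard shorthand, not drawn from a specific cited model), the wedge between private and social incentives can be written as:

```latex
MSC(q) = MPC(q) + MEC(q)
% marginal social cost = marginal private cost + marginal external cost.
% A competitive market produces q_p where MPC(q_p) = P, while the efficient
% quantity q^* satisfies MSC(q^*) = P; whenever MEC(q) > 0 it follows that
% q^* < q_p, i.e. the polluting good is overproduced relative to the optimum.
```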
Common goods and public goods
When it is too costly to exclude some people from access to an environmental resource, the resource is either called a common property resource (when there is rivalry for the resource, such that one person's use of the resource reduces others' opportunity to use the resource) or a public good (when use of the resource is non-rivalrous). In either case of non-exclusion, market allocation is likely to be inefficient.
These challenges have long been recognized. Hardin's (1968) concept of the tragedy of the commons popularized the challenges involved in non-exclusion and common property. "Commons" refers to the environmental asset itself; "common property resource" or "common pool resource" refers to a property right regime that allows some collective body to devise schemes to exclude others, thereby allowing the capture of future benefit streams; and "open-access" implies no ownership, in the sense that property everyone owns, no one owns.
The basic problem is that if people ignore the scarcity value of the commons, they can end up expending too much effort and overharvesting a resource (e.g., a fishery). Hardin theorizes that in the absence of restrictions, users of an open-access resource will use it more than if they had to pay for it and had exclusive rights, leading to environmental degradation. See, however, Ostrom's (1990) work on how people using real common property resources have worked to establish self-governing rules to reduce the risk of the tragedy of the commons.
The mitigation of climate change effects is an example of a public good, where the social benefits are not reflected completely in the market price. Because the personal marginal benefits are less than the social benefits, the market under-provides climate change mitigation. It is a public good since the risks of climate change are both non-rival and non-excludable. Such efforts are non-rival since climate mitigation provided to one does not reduce the level of mitigation that anyone else enjoys. They are non-excludable actions as they will have global consequences from which no one can be excluded. A country's incentive to invest in carbon abatement is reduced because it can "free ride" off the efforts of other countries, as the sketch below illustrates. Over a century ago, Swedish economist Knut Wicksell (1896) first discussed how public goods can be under-provided by the market because people might conceal their preferences for the good, but still enjoy the benefits without paying for them.
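A minimal sketch of the free-rider problem as a two-country abatement game (the payoff numbers are invented purely for illustration):

```python
# Each of two countries chooses to Abate (1) or Not (0).  Abating costs the
# abater 4, while each unit of abatement benefits BOTH countries by 3
# (mitigation is non-rival and non-excludable).  Numbers are illustrative only.

def payoff(me: int, other: int) -> int:
    benefit = 3 * (me + other)   # everyone enjoys the total mitigation
    cost = 4 * me                # only the abater bears its own cost
    return benefit - cost

for other in (0, 1):
    best = max((0, 1), key=lambda me: payoff(me, other))
    print(f"if the other country plays {other}, the best response is {best}")

# Not abating is dominant (best response is 0 either way), yet mutual
# abatement pays each country 2 versus 0 under mutual inaction:
print(payoff(1, 1), payoff(0, 0))  # prints: 2 0
```

The structure is a prisoner's dilemma: individually rational free riding under-provides the public good, which is why international agreements try to change the payoffs.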
Valuation
Assessing the economic value of the environment is a major topic within the field. The values of natural resources often are not reflected in prices that markets set and, in fact, many of them are available at no monetary charge. This mismatch frequently causes distortions in the pricing of natural assets: both overuse of them and underinvestment in them. The economic value, or tangible benefits, of ecosystem services and, more generally, of natural resources includes both use values and indirect use values (see the nature section of ecological economics). Non-use values include existence, option, and bequest values. For example, some people may value the existence of a diverse set of species, regardless of the effect of the loss of a species on ecosystem services. The existence of these species may also have an option value, as there may be the possibility of using them for some human purpose. For example, certain plants may be researched for drugs. Individuals may value the ability to leave a pristine environment for their children.
Use and indirect use values can often be inferred from revealed behavior, such as the cost of taking recreational trips, or by using hedonic methods in which values are estimated based on observed prices. Non-use values are usually estimated using stated preference methods such as contingent valuation or choice modelling. Contingent valuation typically takes the form of surveys in which people are asked how much they would pay to observe and recreate in the environment (willingness to pay) or their willingness to accept (WTA) compensation for the destruction of the environmental good. Hedonic pricing examines the effect the environment has on economic decisions through housing prices, traveling expenses, and payments to visit parks; a sketch of the method follows.
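As an illustration of hedonic pricing, here is a minimal, runnable sketch (all data are synthetic and all coefficients invented for illustration) that recovers the implicit price of air quality from house prices by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
size = rng.uniform(50, 200, n)        # floor area in square metres
air_quality = rng.uniform(0, 10, n)   # higher index = cleaner local air
noise = rng.normal(0, 5000, n)

# Synthetic "true" hedonic price function used to generate the data:
price = 20_000 + 900 * size + 2_500 * air_quality + noise

# Ordinary least squares of price on a constant, size, and air quality.
X = np.column_stack([np.ones(n), size, air_quality])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"implicit price of one air-quality point: ~{beta[2]:.0f}")
# Recovers roughly 2500: the marginal willingness to pay revealed in prices.
```

Real hedonic studies must of course contend with omitted variables, household sorting, and measurement problems that this toy regression ignores.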
State subsidy
Almost all governments and states magnify environmental harm by providing various types of subsidies that have the effect of paying companies and other economic actors more to exploit natural resources than to protect them. The damage to nature from such public subsidies has been conservatively estimated at $4–6 trillion per year.
Solutions
Solutions advocated to correct such externalities include:
Environmental regulations. Under this approach, the economic impact has to be estimated by the regulator, usually by means of cost–benefit analysis. There is a growing realization that regulations (also known as "command and control" instruments) are not so distinct from economic instruments as is commonly asserted by proponents of environmental economics. For example, regulations are enforced by fines, which operate as a form of tax if pollution rises above the prescribed threshold; likewise, pollution must be monitored and laws enforced whether under a pollution tax regime or a regulatory regime. The main difference an environmental economist would argue exists between the two methods, however, is the total cost of the regulation. "Command and control" regulation often applies uniform emissions limits on polluters, even though each firm has different costs for emissions reductions; that is, some firms can abate pollution inexpensively, while others can only abate it at high cost. Because of this, the total abatement in the system comprises some expensive and some inexpensive efforts. Consequently, modern "command and control" regulations are often designed in a way that addresses these issues by incorporating utility parameters. For instance, CO2 emission standards for specific manufacturers in the automotive industry are either linked to the average vehicle footprint (US system) or average vehicle weight (EU system) of their entire vehicle fleet. Economic instruments, by contrast, find the cheapest emission abatement efforts first and then move on to the more expensive methods: under a quota system, a firm only abates pollution if doing so would cost less than paying someone else to make the same reduction, which lowers the cost of the total abatement effort as a whole.
Quotas on pollution. It is often advocated that pollution reductions should be achieved by way of tradeable emissions permits, which, if freely traded, may ensure that reductions in pollution are achieved at least cost. In theory, if such tradeable quotas are allowed, then a firm would reduce its own pollution load only if doing so would cost less than paying someone else to make the same reduction, i.e., only if buying tradeable permits from another firm (or firms) is costlier. In practice, tradeable permit approaches have had some success, such as the U.S. sulphur dioxide trading program and the EU Emissions Trading Scheme, and interest in their application is spreading to other environmental problems; the sketch after this list works through the least-cost logic.
Taxes and tariffs on pollution. Increasing the costs of polluting will discourage polluting, and will provide a "dynamic incentive": the disincentive continues to operate even as pollution levels fall. A pollution tax that reduces pollution to the socially "optimal" level would be set at such a level that pollution occurs only if the benefits to society (for example, in the form of greater production) exceed the costs. This concept was introduced by Arthur Pigou, a British economist active in the first half of the twentieth century. He showed that these externalities occur when markets fail, meaning they do not naturally produce the socially optimal amount of a good or service. He argued that "a tax on the production of paint would encourage the [polluting] factory to reduce production to the amount best for society as a whole." Such taxes are known among economists as Pigouvian taxes, and they are regularly implemented where negative externalities are present. Some advocate a major shift of taxation from income and sales taxes to a tax on pollution – the so-called "green tax shift".
Better defined property rights. The Coase Theorem states that assigning property rights will lead to an optimal solution, regardless of who receives them, if transaction costs are trivial and the number of parties negotiating is limited. For example, if people living near a factory had a right to clean air and water, or the factory had the right to pollute, then either the factory could pay those affected by the pollution or the people could pay the factory not to pollute. Or, citizens could take action themselves as they would if other property rights were violated. The US River Keepers Law of the 1880s was an early example, giving citizens downstream the right to end pollution upstream themselves if the government itself did not act (an early example of bioregional democracy). Many markets for "pollution rights" have been created in the late twentieth century; see emissions trading. According to the Coase Theorem, the involved parties will bargain with each other, which results in an efficient solution. However, modern economic theory has shown that the presence of asymmetric information may lead to inefficient bargaining outcomes. Specifically, Rob (1989) has shown that pollution claim settlements will not lead to the socially optimal outcome when the individuals who will be affected by pollution have learned private information about their disutility before the negotiations take place. Goldlücke and Schmitz (2018) have shown that inefficiencies may also result if the parties learn their private information only after the negotiations, provided that the feasible transfer payments are bounded. Using cooperative game theory, Gonzalez, Marciano and Solal (2019) have shown that in social cost problems involving more than three agents, the Coase theorem suffers from many counterexamples and that only two types of property rights lead to an optimal solution.
Accounting for environmental externalities in the final price. The world's largest industries consume about $7.3 trillion of free natural capital per year, and would hardly be profitable if they had to pay for this destruction of natural capital. Trucost has assessed over 100 direct environmental impacts and condensed them into six environmental key performance indicators (EKPIs), drawing on a range of sources (academic journals, governments, studies, etc.) because market prices for these impacts are lacking; its assessment identified, for each EKPI, the five regional sectors with the highest impact.
If companies are allowed to include some of these externalities in their final prices, this could undermine the Jevons paradox and provide enough revenue to help companies innovate.
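To make the least-cost logic behind permits and taxes concrete, here is a minimal, runnable sketch (the cost parameters are invented for illustration). With linear marginal abatement costs MACᵢ(a) = cᵢ·a, each firm abates until its marginal cost equals the permit price (or tax) t, so the market spreads a fixed abatement target across firms at the lowest total cost:

```python
# Firms with linear marginal abatement costs MAC_i(a) = c_i * a.  Facing a
# permit price (or tax) t, firm i abates until MAC_i = t, i.e. a_i = t / c_i.
# Requiring sum(a_i) == target pins down the market-clearing t.
c = [2.0, 5.0, 10.0]   # illustrative cost slopes; firm 0 abates most cheaply
target = 30.0          # total abatement required

t = target / sum(1 / ci for ci in c)                 # market-clearing price
abatement = [t / ci for ci in c]
least_cost = sum(0.5 * ci * a ** 2 for ci, a in zip(c, abatement))  # area under MAC

# Compare with a uniform command-and-control standard (equal shares):
uniform = [target / len(c)] * len(c)
uniform_cost = sum(0.5 * ci * a ** 2 for ci, a in zip(c, uniform))

print(f"price t = {t:.2f}; abatement by firm = {[round(a, 2) for a in abatement]}")
print(f"least-cost total = {least_cost:.1f} vs uniform standard = {uniform_cost:.1f}")
# price t = 37.50; abatement by firm = [18.75, 7.5, 3.75]
# least-cost total = 562.5 vs uniform standard = 850.0
```

The equalized marginal cost across firms (the "equimarginal principle") is what both a uniform pollution tax and a permit market achieve automatically, and it is the sense in which they abate at least cost.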
Relationship to other fields
Environmental economics is related to ecological economics but there are differences. Most environmental economists have been trained as economists. They apply the tools of economics to address environmental problems, many of which are related to so-called market failures—circumstances wherein the "invisible hand" of economics is unreliable. Most ecological economists have been trained as ecologists, but have expanded the scope of their work to consider the impacts of humans and their economic activity on ecological systems and services, and vice versa. This field takes as its premise that economics is a strict subfield of ecology. Ecological economics is sometimes described as taking a more pluralistic approach to environmental problems and focuses more explicitly on long-term environmental sustainability and issues of scale.
Environmental economics is viewed as more idealistic in its reliance on the price system; ecological economics as more realistic in its attempts to integrate elements outside the price system as primary arbiters of decisions. The two specialisms sometimes have conflicting views, which may be traced to their different philosophical underpinnings.
Another context in which externalities apply is when globalization permits one player in a market who is unconcerned with biodiversity to undercut prices of another who is – creating a race to the bottom in regulations and conservation. This, in turn, may cause loss of natural capital with consequent erosion, water purity problems, diseases, desertification, and other outcomes that are not efficient in an economic sense. This concern is related to the subfield of sustainable development and its political relation, the anti-globalization movement.
Environmental economics was once distinct from resource economics. Natural resource economics as a subfield began when the main concern of researchers was the optimal commercial exploitation of natural resource stocks. But resource managers and policy-makers eventually began to pay attention to the broader importance of natural resources (e.g. values of fish and trees beyond just their commercial exploitation). It is now difficult to distinguish "environmental" and "natural resource" economics as separate fields as the two became associated with sustainability. Many of the more radical green economists split off to work on an alternate political economy.
Environmental economics was a major influence on the theories of natural capitalism and environmental finance, which could be said to be two sub-branches of environmental economics concerned with resource conservation in production, and the value of biodiversity to humans, respectively. The theory of natural capitalism (Hawken, Lovins, Lovins) goes further than traditional environmental economics by envisioning a world where natural services are considered on par with physical capital.
The more radical green economists reject neoclassical economics in favour of a new political economy beyond capitalism or communism that gives a greater emphasis to the interaction of the human economy and the natural environment, acknowledging that "economy is three-fifths of ecology". This political group is a proponent of a transition to renewable energy.
These more radical approaches would imply changes to money supply and likely also a bioregional democracy so that political, economic, and ecological "environmental limits" were all aligned, and not subject to the arbitrage normally possible under capitalism.
An emerging sub-field of environmental economics studies its intersection with development economics. Dubbed "envirodevonomics" by Michael Greenstone and B. Kelsey Jack in their paper "Envirodevonomics: A Research Agenda for a Young Field", the sub-field is primarily interested in studying "why environmental quality [is] so poor in developing countries." A strategy for better understanding this correlation between a country's GDP and its environmental quality involves analyzing how many of the central concepts of environmental economics, including market failures, externalities, and willingness to pay, may be complicated by the particular problems facing developing countries, such as political issues, lack of infrastructure, or inadequate financing tools, among many others.
In the field of law and economics, environmental law is studied from an economic perspective. The economic analysis of environmental law studies instruments such as zoning, expropriation, licensing, third party liability, safety regulation, mandatory insurance, and criminal sanctions. A book by Michael Faure (2003) surveys this literature.
Professional bodies
The main academic and professional organizations for the discipline of Environmental Economics are the Association of Environmental and Resource Economists (AERE) and the European Association for Environmental and Resource Economics (EAERE). The main academic and professional organization for the discipline of Ecological Economics is the International Society for Ecological Economics (ISEE). The main organization for Green Economics is the Green Economics Institute.
By country
India
The Indian government promotes the Bharatiya model of development, considered distinct from western models. The Economic Survey for 2024 noted that solutions to address climate change are often “fuelled by a market society, which seeks to substitute the means to achieve overconsumption rather than addressing overconsumption itself”. The report argued that India needs a different approach, and that a “Bharatiya Model of Development”, linked to the principles of sustainability and to Indian philosophy, can help.
See also
Agroecology
Carbon fee and dividend
Carbon finance
Carbon negative fuel
Circular economy
Earth Economics (policy think tank)
Eco-capitalism
Eco commerce
Economics of global warming
Ecometrics
Eco-Money
Eco-socialism
Ecosystem Marketplace
Ecotax
Energy balance
Environmental accounting
Environmental economists (category)
Environmental credit crunch
Environmental enterprise
Environmental Investment Organisation
Environmental pricing reform
Environmental tariff
Fair trade
Fiscal environmentalism
Free-market environmentalism
Green banking
Green libertarianism
Green syndicalism
Green trading
ISO 14000 (environmental standards)
List of scholarly journals in environmental economics
Natural capital
Natural resource
Natural resource economics
Principles of ecopreneurship
Property rights (economics)
Renewable resource
Risk assessment
Strategic Sustainable Investing (SSI)
Systems ecology
World Ecological Forum
Hypotheses and theorems
Coase theorem
Porter hypothesis
Notes
References
Further reading
Environmental social science
Industrial ecology
Market failure | Environmental economics | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,952 | [
"Industrial engineering",
"Environmental economics",
"Environmental engineering",
"Industrial ecology",
"Environmental social science"
] |
47,768 | https://en.wikipedia.org/wiki/Texas%20Instruments | Texas Instruments Incorporated (TI) is an American multinational semiconductor company headquartered in Dallas, Texas. It is one of the top 10 semiconductor companies worldwide based on sales volume. The company's focus is on developing analog chips and embedded processors, which account for more than 80% of its revenue. TI also produces digital light processing (DLP) technology and education technology products including calculators, microcontrollers, and multi-core processors.
Texas Instruments emerged in 1951 after a reorganization of Geophysical Service Incorporated, a company founded in 1930 that manufactured equipment for use in the seismic industry, as well as defense electronics. TI produced the world's first commercial silicon transistor in 1954, and the same year designed and manufactured the first transistor radio. Jack Kilby invented the integrated circuit in 1958 while working at TI's Central Research Labs. TI also invented the hand-held calculator in 1967, and introduced the first single-chip microcontroller in 1970, which combined all the elements of computing onto one piece of silicon.
In 1987, TI invented the digital light processing device (also known as the DLP chip), which serves as the foundation for the company's DLP technology and DLP Cinema. TI released the popular TI-81 calculator in 1990, which made it a leader in the graphing calculator industry. Its defense business was sold to Raytheon Company in 1997; this allowed TI to strengthen its focus on digital solutions. After the acquisition of National Semiconductor in 2011, the company had a combined portfolio of 45,000 analog products and customer design tools. In the stock market, Texas Instruments is often regarded as an indicator for the semiconductor and electronics industry as a whole, since the company sells to more than 100,000 customers.
History
Texas Instruments was founded by Cecil H. Green, J. Erik Jonsson, Eugene McDermott, and Patrick E. Haggerty in 1951. McDermott was one of the original founders of Geophysical Service Inc. (GSI) in 1930. McDermott, Green, and Jonsson were GSI employees who purchased the company in 1941. In November 1945, Patrick Haggerty was hired as general manager of the Laboratory and Manufacturing (L&M) division, which focused on electronic equipment. By 1951, the L&M division, with its defense contracts, was growing faster than GSI's geophysical division. The company was reorganized and initially renamed General Instruments Inc. Because a firm named General Instrument already existed, the company was renamed Texas Instruments that same year. From 1956 to 1961, Fred Agnich of Dallas, later a Republican member of the Texas House of Representatives, was the Texas Instruments president. Geophysical Service, Inc. became a subsidiary of Texas Instruments. Early in 1988, most of GSI was sold to the Halliburton Company.
Geophysical Service Incorporated
In 1930, J. Clarence Karcher and Eugene McDermott founded Geophysical Service, an early provider of seismic exploration services to the petroleum industry. In 1939, the company reorganized as Coronado Corp, an oil company, with Geophysical Service Inc (GSI) as a subsidiary. On December 6, 1941, McDermott along with three other GSI employees, J. Erik Jonsson, Cecil H. Green, and H. B. Peacock, purchased GSI. During World War II, GSI expanded its services to include electronics for the U.S. Army, Army Signal Corps, and U.S. Navy. In 1951, the company changed its name to Texas Instruments; GSI, which continued to build seismographs for oil exploration, became a wholly owned subsidiary of the new company.
An early success came for TI-GSI in 1965, when GSI was able (under a Top Secret government contract) to monitor the Soviet Union's underground nuclear weapons testing under the ocean in Vela Uniform, a subset of Project Vela, to verify compliance of the Partial Nuclear Test Ban Treaty.
Texas Instruments also continued to manufacture equipment for use in the seismic industry, and GSI continued to provide seismic services. After selling (and repurchasing) GSI, TI finally sold the company to Halliburton in 1988, after which sale GSI ceased to exist as a separate entity.
Semiconductors
In early 1952, Texas Instruments purchased a patent license to produce germanium transistors from Western Electric, the manufacturing arm of AT&T, for US$25,000, beginning production by the end of the year. Haggerty brought Gordon Teal to the company due to his expertise in growing semiconductor crystals while at Bell Telephone Laboratories. Teal's first assignment was to direct TI's research laboratory. At the end of 1952, Texas Instruments announced that it had expanded to 2,000 employees and $17 million in sales.
Among his new hires was Willis Adcock, who joined TI early in 1953. Adcock, who like Teal was a physical chemist, began leading a small research group focused on the task of fabricating grown-junction, silicon, single-crystal, small-signal transistors. Adcock later became the first TI Principal Fellow.
First silicon transistor and integrated circuits
In January 1954, Morris Tanenbaum at Bell Telephone Laboratories created the first workable silicon transistor. This work was reported in the spring of 1954, at the IRE off-the-record conference on solid-state devices, and was later published in the Journal of Applied Physics. Working independently in April 1954, Gordon Teal at TI created the first commercial silicon transistor and tested it on April 14, 1954. On May 10, 1954, at the Institute of Radio Engineers National Conference on Airborne Electronics in Dayton, Ohio, Teal presented a paper: "Some Recent Developments in Silicon and Germanium Materials and Devices".
In 1954, Texas Instruments designed and manufactured the first transistor radio. The Regency TR-1 used germanium transistors, as silicon transistors were much more expensive at the time. This was an effort by Haggerty to increase market demand for transistors.
Jack Kilby, an employee at TI, invented the integrated circuit in 1958. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, and successfully demonstrated the world's first working integrated circuit on September 12, 1958. Six months later, Robert Noyce of Fairchild Semiconductor (who went on to co-found Intel) independently developed the integrated circuit with integrated interconnect, and is also considered an inventor of the integrated circuit. In 1969, Kilby was awarded the National Medal of Science, and in 1982 he was inducted into the National Inventor's Hall of Fame. Kilby also won the 2000 Nobel Prize in Physics for his part of the invention of the integrated circuit. Noyce's chip, made at Fairchild, was made of silicon, while Kilby's chip was made of germanium. In 2008, TI named its new development laboratory "Kilby Labs" after Jack Kilby.
Standard TTL
The 7400 series of transistor-transistor logic chips, developed by Texas Instruments in the 1960s, popularized the use of integrated circuits in computer logic. The military-grade version of this was the 5400 series.
Microprocessor
Texas Instruments invented the hand-held calculator (a prototype called "Cal Tech") in 1967 and the single-chip microcomputer in 1971, and was assigned the first patent on a single-chip microprocessor (invented by Gary Boone) on September 4, 1973. This was disputed by Gilbert Hyatt, formerly of the Micro Computer Company, in August 1990, when he was awarded a patent superseding TI's. This was overturned on June 19, 1996, in favor of TI (note: Intel is usually given credit with Texas Instruments for the almost-simultaneous invention of the microprocessor).
First speech synthesis chip
In 1978, Texas Instruments introduced the first single-chip linear predictive coding speech synthesizer. In 1976, TI had begun a feasibility study of memory-intensive applications for bubble memory then being developed, and soon focused on speech applications. This resulted in the development of the TMC0280 one-chip linear predictive coding speech synthesizer, which was the first time a single silicon chip had electronically replicated the human voice. This was used in several TI commercial products beginning with Speak & Spell, which was introduced at the Summer Consumer Electronics Show in June 1978. In 2001, TI left the speech synthesis business, selling it to Sensory Inc. of Santa Clara, California.
Consumer electronics and computers
In May 1954, Texas Instruments designed and built a prototype of the world's first transistor radio, and, through a partnership with Industrial Development Engineering Associates of Indianapolis, Indiana, the 100% solid-state radio was sold to the public beginning in October of that year.
In the 1960s, company president Pat Haggerty assigned a team that included Jack Kilby to a handheld calculator project. Kilby and two colleagues created the Cal-Tech, a three-pound battery-powered calculator that could do basic math and fit six-digit numbers on its display. The processor developed for this 4.25 x 6.15 x 1.75 inch calculator would go on to generate the vast majority of Texas Instruments' revenue.
In 1973, the handheld calculator SR-10 (named after the slide rule) and, in 1974, the handheld scientific calculator SR-50 were issued by TI. Both had red LED segmented numeric displays. The optical design of the SR-50 was somewhat similar to that of the HP-35, introduced by Hewlett-Packard in early 1972, but its buttons for the operations "+", "–", etc. sit to the right of the number block, and its decimal point lies between two neighboring digits.
TI continued to be active in the consumer electronics market through the 1970s and 1980s. Early on, this also included two digital clock models – one for desk and the other a bedside alarm. From this sprang what became the Time Products Division, which made LED watches. Though these LED watches enjoyed early commercial success due to excellent quality, it was short-lived due to poor battery life. LEDs were replaced with LCD watches for a short time, but these could not compete because of styling issues, excessive makes and models, and price points. The watches were manufactured in Dallas and then Lubbock, Texas. Several spin-offs of the Speak & Spell, such as the Speak & Read and Speak & Math, were introduced soon thereafter.
In 1979, TI entered the home computer market with the TI-99/4, a competitor to computers such as the Apple II, TRS-80, and the later Atari 400/800 and VIC-20. By late 1982, TI was dominating the U.S. home computer market, shipping 5,000 computers a day from their factory in Lubbock. It discontinued the TI-99/4A (1981), the sequel to the 99/4, in late 1983 amid an intense price war waged primarily against Commodore. At the 1983 Winter CES, TI showed models 99/2 and the Compact Computer 40, the latter aimed at professional users. The TI Professional (1983) ultimately joined the ranks of the many unsuccessful MS-DOS and x86-based—but non-compatible—competitors to the IBM PC (the founders of Compaq, an early leader in PC compatibles, all came from TI). The company for years successfully made and sold PC-compatible laptops before withdrawing from the market and selling its product line to Acer in 1998.
Defense electronics
TI entered the defense electronics market in 1942 with submarine detection equipment, based on the seismic exploration technology previously developed for the oil industry. The division responsible for these products was known at different times as the Laboratory & Manufacturing Division, the Apparatus Division, the Equipment Group, and the Defense Systems & Electronics Group (DSEG).
During the early 1980s, TI instituted a quality program which included Juran training, as well as promoting statistical process control, Taguchi methods, and Design for Six Sigma. In the late 1980s, the company, along with Eastman Kodak and Allied Signal, began involvement with Motorola, institutionalizing Motorola's Six Sigma methodology. Motorola, which originally developed the Six Sigma methodology, began this work in 1982. In 1992, the DSEG division of Texas Instruments' quality-improvement efforts were rewarded by winning the Malcolm Baldrige National Quality Award for manufacturing.
Infrared and radar systems
TI developed the AAA-4 infrared search and track device in the late 1950s and early 1960s for the F-4B Phantom for passive scanning of jet-engine emissions, but it possessed limited capabilities and was eliminated on F-4Ds and later models.
In 1956, TI began research on infrared technology that led to several line scanner contracts and with the addition of a second scan mirror the invention of the first forward looking infrared (FLIR) in 1963 with production beginning in 1966. In 1972, TI invented the common module FLIR concept, greatly reducing cost and allowing reuse of common components.
TI went on to produce side-looking radar systems, the first terrain-following radar and surveillance radar systems for both the military and FAA. TI demonstrated the first solid-state radar called Molecular Electronics for Radar Applications. In 1976, TI developed a microwave landing system prototype. In 1984, TI developed the first inverse synthetic aperture radar. The first single-chip gallium arsenide radar module was developed. In 1991, the military microwave integrated circuit program was initiated—a joint effort with Raytheon.
Missiles and laser-guided bombs
In 1961, TI won the guidance and control system contract for the defense suppression AGM-45 Shrike antiradiation missile. This led later to the prime on the high-speed antiradiation missile (AGM-88 HARM) development contract in 1974 and production in 1981.
In 1964, TI began development of the first laser guidance system for precision-guided munitions, leading to the Paveway series of laser-guided bombs (LGBs). The first LGB was the BOLT-117.
In 1969, TI won the Harpoon (missile) Seeker contract. In 1986, TI won the Army FGM-148 Javelin fire-and-forget man portable antitank guided missile in a joint venture with Martin Marietta. In 1991, TI was awarded the contract for the AGM-154 Joint Standoff Weapon.
In 1988, TI paid the U.S. government $5.2 million "to settle allegations one of its divisions overcharged the government on contracts for guided missiles sold to the Navy".
Military computers
Because of TI's research and development of military temperature-range silicon transistors and integrated circuits (ICs), TI won contracts for the first IC-based computer for the U.S. Air Force in 1961 (molecular electronic computer) and for ICs for the Minuteman Missile the following year. In 1968, TI developed the data systems for Mariner Program. In 1991 TI won the F-22 Radar and Computer development contract.
Divestiture to Raytheon
As the defense industry consolidated, TI sold its defense business to the Raytheon Company in 1997 for $2.95 billion. The Department of Justice required that Raytheon divest the TI Monolithic Microwave Integrated Circuit (MMIC) operations after closing the transaction. The TI MMIC business accounted for less than $40 million in 1996 revenues, or roughly 2% of the $1.8 billion in total TI defense revenues, and was sold to TriQuint Semiconductor, Inc. Raytheon retained its own existing MMIC capabilities and has the right to license TI's MMIC technology for use in future product applications from TriQuint.
Shortly after Raytheon acquired TI DSEG, Raytheon also acquired Hughes Aircraft from General Motors. Raytheon then owned TI's mercury cadmium telluride detector business and infrared (IR) systems group, and, in California, the Hughes infrared detector and IR systems businesses. When the US government again forced Raytheon to divest itself of a duplicate capability, the company kept the TI IR systems business and the Hughes detector business. As a result of these acquisitions, the TI systems group and the Hughes detector group, once arch rivals, now work together.
Immediately after acquisition, DSEG was known as Raytheon TI Systems (RTIS). It is now fully integrated into Raytheon and this designation no longer exists.
Artificial intelligence
TI was active in the area of artificial intelligence in the 1980s. In addition to ongoing developments in speech and signal processing and recognition, it developed and sold the Explorer computer family of Lisp machines. For the Explorer, a special 32-bit Lisp microprocessor was developed, which was used in the Explorer II and the TI MicroExplorer (a Lisp Machine on a NuBus board for the Apple Macintosh). AI application software developed by TI for the Explorer included the gate assignment system for United Airlines, described as "an artificial intelligence program that captures the combined experience and knowledge of a half-dozen United operations experts." In software for the PC, they introduced "Personal Consultant", a rule-based expert system development tool and runtime engine, followed by "Personal Consultant Plus" written in the Lisp-like language from MIT known as Scheme, and the natural language menu system NLMenu.
Sensors and controls
TI was a major original-equipment manufacturer of sensor, control, protection, and RFID products for the automotive, appliance, aircraft, and other industries. The Sensors & Controls division was headquartered in Attleboro, Massachusetts.
By the mid-1980s, industrial computers known as PLCs (programmable logic controllers) had been separated from Sensors & Controls as the Industrial Systems Division, which was sold in the early 1990s to Siemens.
In 2006, Bain Capital LLC, a private equity firm, purchased the Sensors & Controls division for $3.0 billion in cash. The RFID portion of the division remained part of TI, transferring to the Application Specific Products business unit of the Semiconductor division, with the newly formed independent company based in Attleboro taking the name Sensata Technologies.
Software
In 1997, TI sold its software division, along with its main products such as the CA Gen, to Sterling Software, which is now part of Computer Associates. However, TI still owns small pieces of software, such as the software for calculators such as the TI Interactive!. TI also creates a significant amount of target software for its digital signal processors, along with host-based tools for creating DSP applications.
Finances
For the fiscal year 2017, Texas Instruments reported earnings of $3.682 billion, with an annual revenue of $14.961 billion, an increase of 11.9% over the previous fiscal cycle. TI shares traded at over $82 per share, and its market capitalization was valued at over $88.0 billion in October 2018. As of 2018, TI ranked 192nd on the Fortune 500 list of the largest United States corporations by revenue.
Divisions
As of 2016, TI is made up of four divisions: analog products, embedded processors, digital light processing, and educational technology.
As of January 2021, the industrial market accounts for 41 percent of TI's annual revenue, and the automotive market accounts for 21 percent.
Other businesses
TI's remaining businesses consist of DLP products (primarily used in projectors to create high-definition images), calculators, and certain custom semiconductors known as application-specific integrated circuits.
DLP Products
Texas Instruments sells DLP technology for TVs, video projectors, and digital cinema. On February 2, 2000, Philippe Binant, technical manager of the Digital Cinema Project at Gaumont in France, carried out the first digital cinema projection in Europe with the DLP Cinema technology developed by TI. DLP technology enables a diverse range of display and advanced light control applications spanning industrial, enterprise, automotive, and consumer market segments.
Custom application-specific integrated circuits (ASICs)
The ASICs business develops more complex integrated-circuit solutions for clients on a custom basis.
Educational technology
TI has produced educational toys for children, including the Little Professor in 1976 and Dataman in 1977.
TI produces a range of calculators, with the TI-30 being one of the most popular early calculators. TI has also developed a line of graphing calculators, the first being the TI-81, and most popular being the TI-83 Plus (with the TI-84 Plus being an updated equivalent).
Many TI calculators are still sold without graphing capabilities. The TI-30 has been replaced by the TI-30X IIS. Also, some financial calculators are for sale on the TI website.
In 2007, TI released the TI-Nspire family of calculators and computer software that has similar capabilities to the calculators.
Less than 3% of Texas Instruments' overall revenue comes from calculators, part of the $1.43 billion revenue in the "Other" section of the company's 2018 annual report. Nevertheless, the calculators are a lucrative product: for example, estimates put the cost of producing a TI-84 Plus at $15 to $20, implying a profit margin of at least 50%.
Throughout the 1980s, Texas Instruments worked closely with National Council of Teachers of Mathematics (NCTM) to develop a calculator to become the educational standard. In 1986, Connecticut School Board became the first to require a graphing calculator on state-mandated exams. Chicago Public Schools gave a free calculator to every student, beginning in the fourth grade, in 1988. New York required the calculator in 1992 for its Regents exams after first allowing it the previous year. The College Board required calculators on the Advanced Placement tests in 1993 and allowed calculators on the SAT a year later. Texas Instruments provides free services to the College Board, which administers AP tests and the SAT, and also has a group called Teachers Teaching for Technology (T3), which educates teachers on how to use its calculators.
TI calculator community
In the 1990s, with the advent of TI's graphing calculator series, programming became popular among some students. The TI-8x series of calculators (beginning with the TI-81) came with a built-in BASIC interpreter, through which simple programs could be created. The TI-83 was the first in the series to receive native assembly. Around the same time that these programs were first being written, programmers began creating websites to host their work, along with tutorials and other calculator-relevant information. This led to the formation of TI calculator webrings and eventually a few large communities, including ticalc.org.
The TI community reached the height of its popularity in the early 2000s, with many new websites and programming groups being started. In fact, the aforementioned community sites were exploding with activity, with close to 100 programs being uploaded daily by users of the sites. Also, a competition existed between both sites to be the top site in the community, which helped increase interest and activity in the community.
One of the common unifying forces that has united the community over the years has been the rather contentious relationship with TI regarding control over its graphing calculators. TI graphing calculators generally fall into two distinct groups—the older ones powered by the Zilog Z80 and the newer ones running on the Motorola 68000 series. Both lines of calculators are locked by TI with checks in the hardware and through the signing of software to disable use of custom operating systems. However, users discovered the keys and published them in 2009. TI responded by sending invalid DMCA takedown notices, causing the Texas Instruments signing key controversy.
Competitors
TI has the largest market share in the analog semiconductor industry, accounting for over $10 billion of the total $57 billion market in 2020.
Acquisitions
In 1996, TI acquired Tartan, Inc.
In 1997, TI acquired Amati Communications for $395 million.
In 1998, TI acquired GO DSP.
In 1998, TI acquired the standard logic (semiconductor) product lines from Harris Semiconductor, which included the CD4000, HC4xxx, HCT, FCT, and ACT product families.
In 1999, TI acquired Libit Signal Processing Ltd. of Herzlia, Israel for approximately $365 million in cash.
In 1999, TI acquired Butterfly VLSI, Ltd. for approximately $50 million.
In 1999, TI acquired Telogy Networks for $457 million.
In 1999, TI acquired Unitrode Corporation (NYSE:UTR).
In 2000, TI acquired Burr-Brown Corporation for $7.6 billion.
In 2001, TI acquired Graychip.
In 2003, TI acquired Radia Inc, a San Jose-based company with an Israeli home office, which had developed an ASIC WiFi front-end prototype without the baseband processor, for about $320 million.
In 2006, TI acquired Chipcon for about $200 million.
In 2009, TI acquired CICLON and Luminary Micro.
In 2011, TI acquired National Semiconductor for $6.5 billion.
In 2021, TI acquired an operational 300mm fabrication plant located in Lehi, Utah from Micron for $900 million.
In 2022, TI acquired icDirectory France.
National Semiconductor acquisition
On April 4, 2011, Texas Instruments announced that it had agreed to buy National Semiconductor for $6.5 billion in cash. TI paid $25 per share of National Semiconductor stock, which was an 80% premium over the share price of $14.07 as of April 4, 2011, close. The deal made TI the world's largest maker of analog technology components.
The companies formally merged on September 23, 2011.
See also
Anylite Technology
EnOcean
Symbian Foundation
OMAP
Melendy E. Lovett
SolarMagic
Texas Instruments DaVinci
References
Bibliography
Further reading
P. Binant, "Kodak: Au coeur de la projection numérique", Actions, no. 29, pp. 12–13, Paris, 2007.
Nobel Lectures, World Scientific Publishing Co., Singapore, 2000.
External links
1951 establishments in Texas
American companies established in 1930
American companies established in 1951
Companies formerly listed on the New York Stock Exchange
Companies in the Nasdaq-100
Companies listed on the Nasdaq
Computer companies of the United States
Computer hardware companies
Defunct computer systems companies
Electronic calculator companies
Electronics companies established in 1930
Electronics companies of the United States
Home computer hardware companies
HSA Foundation founding members
Manufacturing companies based in Dallas
Manufacturing companies established in 1951
Semiconductor companies of the United States
Technology companies established in 1930 | Texas Instruments | [
"Technology"
] | 5,477 | [
"Computer hardware companies",
"Computers"
] |
47,769 | https://en.wikipedia.org/wiki/Transistor%E2%80%93transistor%20logic | Transistor–transistor logic (TTL) is a logic family built from bipolar junction transistors. Its name signifies that transistors perform both the logic function (the first "transistor") and the amplifying function (the second "transistor"), as opposed to earlier resistor–transistor logic (RTL) and diode–transistor logic (DTL).
TTL integrated circuits (ICs) were widely used in applications such as computers, industrial controls, test equipment and instrumentation, consumer electronics, and synthesizers.
After their introduction in integrated circuit form in 1963 by Sylvania Electric Products, TTL integrated circuits were manufactured by several semiconductor companies. The 7400 series by Texas Instruments became particularly popular. TTL manufacturers offered a wide range of logic gates, flip-flops, counters, and other circuits. Variations of the original TTL circuit design offered higher speed or lower power dissipation to allow design optimization. TTL devices were originally made in ceramic and plastic dual in-line packages and in flat-pack form. Some TTL chips are now also made in surface-mount technology packages.
TTL became the foundation of computers and other digital electronics. Even after Very-Large-Scale Integration (VLSI) CMOS integrated circuit microprocessors made multiple-chip processors obsolete, TTL devices still found extensive use as glue logic interfacing between more densely integrated components.
History
TTL was invented in 1961 by James L. Buie of TRW, which declared it "particularly suited to the newly developing integrated circuit design technology." The original name for TTL was transistor-coupled transistor logic (TCTL). The first commercial integrated-circuit TTL devices were manufactured by Sylvania in 1963, called the Sylvania Universal High-Level Logic family (SUHL). The Sylvania parts were used in the controls of the Phoenix missile. TTL became popular with electronic systems designers after Texas Instruments introduced the 5400 series of ICs, with military temperature range, in 1964 and the later 7400 series, specified over a narrower range and with inexpensive plastic packages, in 1966.
The Texas Instruments 7400 family became an industry standard. Compatible parts were made by Motorola, AMD, Fairchild, Intel, Intersil, Signetics, Mullard, Siemens, SGS-Thomson, Rifa, National Semiconductor, and many other companies, even in the Eastern Bloc (Soviet Union, GDR, Poland, Czechoslovakia, Hungary, Romania — for details see 7400 series). Not only did others make compatible TTL parts, but compatible parts were made using many other circuit technologies as well. At least one manufacturer, IBM, produced non-compatible TTL circuits for its own use; IBM used the technology in the IBM System/38, IBM 4300, and IBM 3081.
The term "TTL" is applied to many successive generations of bipolar logic, with gradual improvements in speed and power consumption over about two decades. The most recently introduced family 74Fxx is still sold today (as of 2019), and was widely used into the late 90s. 74AS/ALS Advanced Schottky was introduced in 1985. As of 2008, Texas Instruments continues to supply the more general-purpose chips in numerous obsolete technology families, albeit at increased prices. Typically, TTL chips integrate no more than a few hundred transistors each. Functions within a single package generally range from a few logic gates to a microprocessor bit-slice. TTL also became important because its low cost made digital techniques economically practical for tasks previously done by analog methods.
The Kenbak-1, ancestor of the first personal computers, used TTL for its CPU instead of a microprocessor chip, which was not available in 1971. The Datapoint 2200 from 1970 used TTL components for its CPU and was the basis for the 8008 and later the x86 instruction set. The 1973 Xerox Alto and 1981 Star workstations, which introduced the graphical user interface, used TTL circuits integrated at the level of arithmetic logic units (ALUs) and bitslices, respectively. Most computers used TTL-compatible "glue logic" between larger chips well into the 1990s. Until the advent of programmable logic, discrete bipolar logic was used to prototype and emulate microarchitectures under development.
Implementation
Fundamental TTL gate
TTL inputs are the emitters of bipolar transistors. In the case of NAND inputs, the inputs are the emitters of multiple-emitter transistors, functionally equivalent to multiple transistors where the bases and collectors are tied together. The output is buffered by a common emitter amplifier.
Both inputs at logical one. When all the inputs are held at high voltage, the base–emitter junctions of the multiple-emitter transistor are reverse-biased. Unlike in DTL, a small "collector" current (approximately 10 μA) is drawn by each of the inputs, because the transistor is in reverse-active mode. An approximately constant current flows from the positive rail, through the resistor, and into the base of the multiple-emitter transistor. This current passes through the base–emitter junction of the output transistor, allowing it to conduct and pulling the output voltage low (logical zero).
An input at logical zero. Note that the base–collector junction of the multiple-emitter transistor and the base–emitter junction of the output transistor are in series between the bottom of the resistor and ground. If one input voltage becomes zero, the corresponding base–emitter junction of the multiple-emitter transistor is in parallel with these two junctions. A phenomenon called current steering means that when two voltage-stable elements with different threshold voltages are connected in parallel, the current flows through the path with the smaller threshold voltage. That is, current flows out of this input and into the zero (low) voltage source. As a result, no current flows through the base of the output transistor, causing it to stop conducting, and the output voltage becomes high (logical one). During the transition the input transistor is briefly in its active region, so it draws a large current away from the base of the output transistor and thus quickly discharges its base. This is a critical advantage of TTL over DTL that speeds up the transition over a diode input structure.
The main disadvantage of TTL with a simple output stage is the relatively high output resistance at output logical "1" that is completely determined by the output collector resistor. It limits the number of inputs that can be connected (the fanout). Some advantage of the simple output stage is the high voltage level (up to VCC) of the output logical "1" when the output is not loaded.
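The behavior described above amounts to a NAND function evaluated over voltage thresholds. The following is a minimal behavioral sketch in Python, not a circuit simulation; the 2.0 V/0.8 V input thresholds are the standard TTL levels quoted later in this article, and the output voltages are representative values:

```python
# Behavioral model of a two-input TTL NAND gate. Inputs at or above
# V_IH read as logical "1", at or below V_IL as logical "0"; anything
# between is undefined for TTL.
V_IH = 2.0  # minimum input voltage recognized as a logical "1" (volts)
V_IL = 0.8  # maximum input voltage recognized as a logical "0" (volts)

def ttl_nand(v_a, v_b):
    """Return a representative output voltage for a TTL NAND gate."""
    def is_high(v):
        if v >= V_IH:
            return True
        if v <= V_IL:
            return False
        raise ValueError(f"{v} V lies in the undefined 0.8-2.0 V region")
    # The output transistor conducts (output pulled low, ~0.2 V) only
    # when *all* inputs are high -- the current-steering behavior above.
    return 0.2 if is_high(v_a) and is_high(v_b) else 3.4

for va, vb in [(0.2, 0.2), (0.2, 3.4), (3.4, 3.4)]:
    print(f"A={va:.1f} V, B={vb:.1f} V -> out={ttl_nand(va, vb):.1f} V")
```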
Open collector wired logic
A common variation omits the collector resistor of the output transistor, making an open-collector output. This allows the designer to fabricate wired logic by connecting the open-collector outputs of several logic gates together and providing a single external pull-up resistor. If any of the logic gates becomes logic low (transistor conducting), the combined output will be low. Examples of this type of gate are the 7401 and 7403 series. Open-collector outputs of some gates have a higher maximum voltage, such as 15 V for the 7426, useful when driving non-TTL loads.
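As a sketch of the wired-logic behavior described above: with open-collector outputs sharing one pull-up resistor, the common node is high only when no gate's output transistor conducts, so the tied node computes the AND of the individual outputs (often called wired-AND). A minimal illustration:

```python
# Wired-AND of open-collector outputs sharing one pull-up resistor:
# the common node is high only if no gate's output transistor conducts.
def wired_and(transistor_conducting):
    """transistor_conducting[i] is True when gate i sinks the node low.

    Returns the logic level seen at the shared node (True = high)."""
    return not any(transistor_conducting)

print(wired_and([False, False, False]))  # True: node pulled high
print(wired_and([False, True, False]))   # False: one gate pulls it low
```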
TTL with a "totem-pole" output stage
To solve the problem with the high output resistance of the simple output stage the second schematic adds to this a "totem-pole" ("push–pull") output. It consists of the two n-p-n transistors V3 and V4, the "lifting" diode V5 and the current-limiting resistor R3 (see the figure on the right). It is driven by applying the same current steering idea as above.
When V2 is "off", V4 is "off" as well and V3 operates in active region as a voltage follower producing high output voltage (logical "1").
When V2 is "on", it activates V4, driving low voltage (logical "0") to the output. Again there is a current-steering effect: the series combination of V2's C-E junction and V4's B-E junction is in parallel with the series of V3 B-E, V5's anode-cathode junction, and V4 C-E. The second series combination has the higher threshold voltage, so no current flows through it, i.e. V3 base current is deprived. Transistor V3 turns "off" and it does not impact on the output.
In the middle of the transition, the resistor R3 limits the current flowing directly through the series connected transistor V3, diode V5 and transistor V4 that are all conducting. It also limits the output current in the case of output logical "1" and short connection to the ground. The strength of the gate may be increased without proportionally affecting the power consumption by removing the pull-up and pull-down resistors from the output stage.
The main advantage of TTL with a "totem-pole" output stage is the low output resistance at output logical "1". It is determined by the upper output transistor V3 operating in active region as an emitter follower. The resistor R3 does not increase the output resistance since it is connected in the V3 collector and its influence is compensated by the negative feedback. A disadvantage of the "totem-pole" output stage is the decreased voltage level (no more than 3.5 V) of the output logical "1" (even if the output is unloaded). The reasons for this reduction are the voltage drops across the V3 base–emitter and V5 anode–cathode junctions.
Interfacing considerations
Like DTL, TTL is a current-sinking logic since a current must be drawn from inputs to bring them to a logic 0 voltage level. The driving stage must absorb up to 1.6 mA from a standard TTL input while not allowing the voltage to rise to more than 0.4 volts. The output stage of the most common TTL gates is specified to function correctly when driving up to 10 standard input stages (a fanout of 10). TTL inputs are sometimes simply left floating to provide a logical "1", though this usage is not recommended.
Standard TTL circuits operate with a 5-volt power supply. A TTL input signal is defined as "low" when between 0 V and 0.8 V with respect to the ground terminal, and "high" when between 2 V and VCC (5 V), and if a voltage signal ranging between 0.8 V and 2.0 V is sent into the input of a TTL gate, there is no certain response from the gate and therefore it is considered "uncertain" (precise logic levels vary slightly between sub-types and by temperature). TTL outputs are typically restricted to narrower limits of between 0.0 V and 0.4 V for a "low" and between 2.4 V and VCC for a "high", providing at least 0.4 V of noise immunity. Standardization of the TTL levels is so ubiquitous that complex circuit boards often contain TTL chips made by many different manufacturers selected for availability and cost, compatibility being assured. Two circuit board units off the same assembly line on different successive days or weeks might have a different mix of brands of chips in the same positions on the board; repair is possible with chips manufactured years later than original components. Within usefully broad limits, logic gates can be treated as ideal Boolean devices without concern for electrical limitations. The 0.4 V noise margins are adequate because of the low output impedance of the driver stage, that is, a large amount of noise power superimposed on the output is needed to drive an input into an undefined region.
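As a worked check of the 0.4 V noise margins, and of the fanout of 10 mentioned above (the 16 mA output sink capability used below is the commonly quoted figure for a standard TTL output, not stated explicitly in this article):

$$NM_L = V_{IL(\max)} - V_{OL(\max)} = 0.8\,\text{V} - 0.4\,\text{V} = 0.4\,\text{V}$$
$$NM_H = V_{OH(\min)} - V_{IH(\min)} = 2.4\,\text{V} - 2.0\,\text{V} = 0.4\,\text{V}$$
$$\text{fanout} = \frac{I_{OL(\max)}}{I_{IL(\max)}} = \frac{16\,\text{mA}}{1.6\,\text{mA}} = 10$$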
In some cases (e.g., when the output of a TTL logic gate needs to be used for driving the input of a CMOS gate), the voltage level of the "totem-pole" output stage at output logical "1" can be increased closer to VCC by connecting an external resistor between the V4 collector and the positive rail. It pulls up the V5 cathode and cuts off the diode. However, this technique actually converts the sophisticated "totem-pole" output into a simple output stage having significant output resistance when driving a high level (determined by the external resistor).
Packaging
Like most integrated circuits of the period 1963–1990, commercial TTL devices are usually packaged in dual in-line packages (DIPs), usually with 14 to 24 pins, for through-hole or socket mounting. Epoxy plastic (PDIP) packages were often used for commercial temperature range components, while ceramic packages (CDIP) were used for military temperature range parts.
Beam-lead chip dies without packages were made for assembly into larger arrays as hybrid integrated circuits. Parts for military and aerospace applications were packaged in flatpacks, a form of surface-mount package, with leads suitable for welding or soldering to printed circuit boards. Today, many TTL-compatible devices are available in surface-mount packages, which are available in a wider array of types than through-hole packages.
TTL is particularly well suited to bipolar integrated circuits because additional inputs to a gate merely required additional emitters on a shared base region of the input transistor. If individually packaged transistors were used, the cost of all the transistors would discourage one from using such an input structure. But in an integrated circuit, the additional emitters for extra gate inputs add only a small area.
At least one computer manufacturer, IBM, built its own flip chip integrated circuits with TTL; these chips were mounted on ceramic multi-chip modules.
Comparison with other logic families
TTL devices consume substantially more power than equivalent CMOS devices at rest, but power consumption does not increase with clock speed as rapidly as for CMOS devices. Compared to contemporary ECL circuits, TTL uses less power and has easier design rules but is substantially slower. Designers can combine ECL and TTL devices in the same system to achieve best overall performance and economy, but level-shifting devices are required between the two logic families. TTL is less sensitive to damage from electrostatic discharge than early CMOS devices.
Due to the output structure of TTL devices, the output impedance is asymmetrical between the high and low state, making them unsuitable for driving transmission lines. This drawback is usually overcome by buffering the outputs with special line-driver devices where signals need to be sent through cables. ECL, by virtue of its symmetric low-impedance output structure, does not have this drawback.
The TTL "totem-pole" output structure often has a momentary overlap when both the upper and lower transistors are conducting, resulting in a substantial pulse of current drawn from the power supply. These pulses can couple in unexpected ways between multiple integrated circuit packages, resulting in reduced noise margin and lower performance. TTL systems usually have a decoupling capacitor for every one or two IC packages, so that a current pulse from one TTL chip does not momentarily reduce the supply voltage to another.
Since the mid 1980s, several manufacturers supply CMOS logic equivalents with TTL-compatible input and output levels, usually bearing part numbers similar to the equivalent TTL component and with the same pinouts. For example, the 74HCT00 series provides many drop-in replacements for bipolar 7400 series parts, but uses CMOS technology.
Sub-types
Successive generations of technology produced compatible parts with improved power consumption or switching speed, or both. Although vendors uniformly marketed these various product lines as TTL with Schottky diodes, some of the underlying circuits, such as used in the LS family, could rather be considered DTL.
Variations of and successors to the basic TTL family, which has a typical gate propagation delay of 10 ns and a power dissipation of 10 mW per gate, for a power–delay product (PDP) or switching energy of about 100 pJ, include the following (a worked PDP calculation appears after the list):
Low-power TTL (L), which traded switching speed (33ns) for a reduction in power consumption (1 mW) (now essentially replaced by CMOS logic)
High-speed TTL (H), with faster switching than standard TTL (6ns) but significantly higher power dissipation (22 mW)
Schottky TTL (S), introduced in 1969, which used Schottky diode clamps at gate inputs to prevent charge storage and improve switching time. These gates operated more quickly (3ns) but had higher power dissipation (19 mW)
Low-power Schottky TTL (LS) – used the higher resistance values of low-power TTL and the Schottky diodes to provide a good combination of speed (9.5 ns) and reduced power consumption (2 mW), and PDP of about 20 pJ. Probably the most common type of TTL, these were used as glue logic in microcomputers, essentially replacing the former H, L, and S sub-families.
Fast (F) and Advanced-Schottky (AS) variants of LS from Fairchild and TI, respectively, circa 1985, with "Miller-killer" circuits to speed up the low-to-high transition. These families achieved PDPs of 10 pJ and 4 pJ, respectively, the lowest of all the TTL families.
Low-voltage TTL (LVTTL) for 3.3-volt power supplies and memory interfacing.
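Each family's power–delay product follows directly from multiplying its per-gate delay and dissipation figures quoted above; a quick check in Python:

```python
# Power-delay product (switching energy) for the TTL sub-types, using
# the per-gate propagation delay and power dissipation figures quoted
# in the list above.
families = {
    "standard TTL": (10e-9, 10e-3),  # (delay in s, power in W)
    "L":            (33e-9, 1e-3),
    "H":            (6e-9, 22e-3),
    "S":            (3e-9, 19e-3),
    "LS":           (9.5e-9, 2e-3),
}
for name, (t_pd, power) in families.items():
    print(f"{name:12s} PDP = {t_pd * power * 1e12:6.1f} pJ")
```

The LS result of 19 pJ matches the "about 20 pJ" figure quoted in the list.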
Most manufacturers offer commercial and extended temperature ranges: for example Texas Instruments 7400 series parts are rated from 0 to 70 °C, and 5400 series devices over the military-specification temperature range of −55 to +125 °C.
Special quality levels and high-reliability parts are available for military and aerospace applications.
Radiation-hardened devices (for example from the SNJ54 series) are offered for space applications.
Applications
Before the advent of VLSI devices, TTL integrated circuits were a standard method of construction for the processors of minicomputer and midrange mainframe computers, such as the DEC VAX and Data General Eclipse; however, some computer families were based on proprietary components (e.g. Fairchild CTL), while supercomputers and high-end mainframes used emitter-coupled logic. TTL circuits were also used for equipment such as machine tool numerical controls, printers, and video display terminals and, as microprocessors became more capable, for "glue logic" applications such as address decoders and bus drivers, which tie together the function blocks realized in VLSI elements. The Gigatron TTL is a more recent (2018) example of a processor built entirely with TTL integrated circuits.
Analog applications
While originally designed to handle logic-level digital signals, a TTL inverter can be biased as an analog amplifier. Connecting a resistor between the output and the input biases the TTL element as a negative feedback amplifier. Such amplifiers may be useful to convert analog signals to the digital domain but would not ordinarily be used where analog amplification is the primary purpose. TTL inverters can also be used in crystal oscillators where their analog amplification ability is significant.
A TTL gate may operate inadvertently as an analog amplifier if the input is connected to a slowly changing input signal that traverses the unspecified region from 0.8 V to 2 V. The output can be erratic when the input is in this range. A slowly changing input like this can also cause excess power dissipation in the output circuit. If such an analog input must be used, specialized TTL parts with Schmitt trigger inputs are available that will reliably convert the analog input to a digital value, effectively operating as a one-bit A-to-D converter.
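The hysteresis of a Schmitt-trigger input is what makes this conversion reliable: separate rising and falling thresholds mean a slowly changing or noisy input produces clean transitions. A behavioral sketch follows; the threshold values are illustrative assumptions, not a specific part's datasheet figures:

```python
# Behavioral model of a Schmitt-trigger input: separate rising and
# falling thresholds (hysteresis) turn a noisy analog ramp into a
# clean one-bit digital signal.
V_T_RISE, V_T_FALL = 1.7, 0.9  # illustrative thresholds in volts

def schmitt(samples, state=False):
    out = []
    for v in samples:
        if not state and v >= V_T_RISE:
            state = True
        elif state and v <= V_T_FALL:
            state = False
        out.append(int(state))
    return out

noisy_ramp = [0.2, 0.8, 1.2, 1.1, 1.8, 1.6, 2.2, 1.5, 0.8, 0.5]
print(schmitt(noisy_ramp))  # -> [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
```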
Serial signaling
TTL serial refers to single-ended serial communication using raw transistor voltage levels: "low" for 0 and "high" for 1. UART over TTL serial is a common debug interface for embedded devices. Handheld devices such as graphing calculators, GPS receivers, and fishfinders also commonly use UART with TTL. TTL serial is only a de facto standard: there are no strict electrical guidelines. Driver–receiver modules interface between TTL and longer-range serial standards: one example is the MAX232, which converts from and to RS-232.
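For example, a host PC can talk to such a device through a USB-to-TTL-serial adapter. A minimal sketch assuming the third-party pyserial library; the device path and baud rate are placeholders that depend on the adapter and target, not values from this article:

```python
# Minimal UART-over-TTL-serial exchange using the pyserial library.
# "/dev/ttyUSB0" and 9600 baud are illustrative; a USB-to-TTL adapter
# typically enumerates with a similar path on Linux.
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0)
try:
    port.write(b"PING\r\n")    # bytes driven at TTL voltage levels
    reply = port.readline()    # read one line back from the device
    print(reply.decode(errors="replace"))
finally:
    port.close()
```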
Differential TTL is TTL serial carried over a differential pair with complement levels, providing much enhanced noise tolerance. Both RS-422 and RS-485 signals can be produced using TTL levels.
ccTalk is based on TTL voltage levels.
See also
Resistor–transistor logic (RTL)
List of 7400 series integrated circuits
References
Further reading
Lessons in Electric Circuits - Volume IV - Digital; Tony Kuphaldt; Open Book Project; 508 pages; 2007. (Chapter 3 Logic Gates)
External links
Fairchild Semiconductor. An Introduction to and Comparison of 74HCT TTL Compatible CMOS Logic (Application Note 368). 1984. (for relative ESD sensitivity of TTL and CMOS.)
Texas Instruments logic family application notes
Digital electronics
Logic families | Transistor–transistor logic | [
"Engineering"
] | 4,468 | [
"Electronic engineering",
"Digital electronics"
] |
47,786 | https://en.wikipedia.org/wiki/Environmental%20finance | Environmental finance is a field within finance that employs market-based environmental policy instruments to improve the ecological impact of investment strategies. The primary objective of environmental finance is to regress the negative impacts of climate change through pricing and trading schemes. The field of environmental finance was established in response to the poor management of economic crises by government bodies globally. Environmental finance aims to reallocate a businesses resources to improve the sustainability of investments whilst also retaining profit margins.
History
In 1992, Richard L. Sandor proposed a new course outlining emission markets at the University of Chicago Booth School of Business, which would later be known as Environmental Finance. Sandor anticipated a social shift in perspectives on the effects of global warming and wanted to be at the frontier of new research.
Prior to this, in 1990, Sandor had been involved with the passing of the Clean Air Act Amendments for the Chicago Board of Trade, which aimed to reduce the high sulfur dioxide levels responsible for acid rain. Inspired by the theory of social cost, Sandor focused on cap-and-trade strategies such as emission trading schemes, and on more flexible mechanisms including taxes and subsidies, to manage environmental crises. The implementation of cap-and-trade mechanisms was a contributing factor to the success of the Clean Air Act Amendments.
Following the Clean Air Act in 1990, the United Nations Conference on Trade and Development approached the Chicago Board of Trade in 1991, to enquire about how the market-based instruments used to combat high atmospheric sulfur dioxide concentrations could be applied to the increasing levels of atmospheric carbon dioxide. Sandor created a framework consisting of four characteristics which could be used to describe the carbon market:
Standardisation
Unit Trading
Price Basis
Delivery
In 1997 the Kyoto Protocol was adopted under the United Nations Framework Convention on Climate Change, and it entered into force in 2005. Included nations agreed to focus on reducing global greenhouse gas emissions through the market-based mechanism of emissions trading. Reductions averaged approximately 5% by 2012, which equates to almost a 30% reduction in total emissions. Some nations made significant progress under the Kyoto Protocol; however, as it only became law in 2005, nations such as the United States and China reported increased emissions, substantially offsetting progress made by other regions.
In 1999, the Dow Jones Sustainability Index was introduced to evaluate the ecological and social impact of stocks so shareholders could invest more sustainably. The index acts as an incentive for firms to improve their environmental footprint to attract more shareholders.
Later in 2000, the United Nations introduced the Millennium Development Goal scheme which sought to promote a sustainable framework for large multinational corporations and countries to follow to improve the environmental impact of financial investments. This framework facilitated the development of the United Nations Sustainable Development Goal scheme in 2015, which aimed to increase funding environmentally responsible investments in developing nations. Funding was targeted to improve areas such as primary education, gender equality, maternal health, and nutrition, with the overall goal of creating beneficial national relationships to decrease the ecological footprint of developing economies. Implementation of these frameworks has promoted greater participation and accountability of corporate environmental sustainability, with over 230 of the largest global firms reporting their sustainability metrics to the United Nations.
The United Nations Environment Program (UNEP) has had a detailed history in providing infrastructure to improve the environmental effects of financial investments. In 2004, the institute provided training on responsible environmental credit budgeting and management for Eastern European nations. Following the Global Financial Crisis beginning in 2007, the UNEP provided substantial support for future sustainable investment choices for economies such as Greece which were impacted severely. The Portfolio Decarbonisation Coalition established in 2014 is a significantly notable initiative in the history of environmental finance as it aims to establish an economy that is not dependent on investments with large carbon footprints. This goal is achieved through large-scale stakeholder reinvestment and securing long-term, responsible, investment commitments. Most recently, the UNEP has recommended OECD nations to align investment strategies alongside the objectives of the Paris Agreement, to improve long-term investments with significant ecological effects.
In 2008 the Climate Change Act enacted by the UK Government established a framework to limit greenhouse gas and carbon emissions through a budgeting scheme, which motivates firms and businesses to reduce their carbon output for a financial reward. Specifically, it seeks to reduce carbon emissions by 2050 by 80% compared to 1990 levels. The Act pursues this goal by reviewing carbon budgeting schemes, such as emission trading credits, every 5 years to continually reassess and recalibrate relevant policies. The cost of reaching the 2050 goal has been estimated at approximately 1.5% of GDP, although the positive environmental impact of a reduced carbon footprint and increased investment into the renewable energy sector is expected to offset this cost. A further cost implied by the Act is a predicted £100 increase in annual household energy costs; however, this price increase is expected to be outweighed by improved energy efficiency, which will decrease fuel costs.
The 2010 cap-and-trade scheme introduced in the metropolitan regions of Tokyo was mandatory for businesses heavily dependent on fuel and electricity, which accounted for almost 20% of total carbon emissions in the area. The scheme aimed to reduce emissions by 17% by the end of 2019.
In 2011 the Clean Energy Act was enacted by the Australian Government. The Act introduced the Carbon Tax, which aimed to reduce greenhouse gas emissions by charging large firms for their carbon tonnage. The Clean Energy Act facilitated the transition to an emissions trading scheme in 2014. The scheme also aims to fulfill the Australian Government's obligations under the Kyoto Protocol and the Climate Change Convention. Additionally, the Act seeks to reduce emissions in a manner that will foster economic growth through increased market competition and investment in renewable energy sources. The Australian National Registry of Emissions Units regulates and monitors the use of emission credits under the Act. Firms must enroll in the registry to buy and sell credits to compensate for their reduction or over-consumption of carbon emissions.
The Republic of Korea's 2015 emissions trading scheme aims to reduce carbon emissions by 37% by 2030. It strives to achieve this by allocating a quota of carbon emissions to the largest carbon-emitting businesses, which resets at the beginning of each of the scheme's three phases.
In 2017 the National Mitigation Plan was passed by the Irish Government, which aimed to mitigate climate change by decreasing emission levels through revised investment strategies and frameworks for power generation, agriculture, and transport. The plan involves 106 separate guidelines for short- and long-term climate change mitigation.
The European Union Emission Trading Scheme, whose phase concluding at the end of 2020 was its third, is the longest-running single carbon pricing scheme globally, and it has been improved over its three phases. Current improvements include a centralised emission credit trading system, auctioning of credits, coverage of a broader range of greenhouse gases, and the introduction of a Europe-wide credit cap instead of national caps.
Strategies
Societal shifts from fossil fuels to renewable energy, driven by increased awareness of climate change, have made government bodies and firms re-evaluate investment strategies to avoid irreparable ecological damage. Shifts away from fossil fuels also increase demand for alternative energy sources, which requires revised investment strategies.
The initial stage to mitigate climate change through financial tools involves ecological and economic forecasting to model future impacts of current investment methodologies on the environment. This allows for an approximate estimation of future environments; however, the impacts of continued harmful business trends need to be observed under a non-linear perspective.
Cap-and-trade mechanisms limit the total amount of emissions a particular region or country can emit. Firms are issued tradeable permits which they can buy or sell (a simplified sketch follows the examples below). This acts as a financial incentive to reduce emissions and as a disincentive to exceed emission caps.
In 2005, the European Union Emission Trading Scheme was established and is now the largest emission trading scheme globally.
In 2013, the Québec Cap-and-trade scheme was established and is currently the primary mitigation strategy for the area.
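The mechanics can be pictured with a toy permit ledger. The sketch below is a minimal illustration only; the firm names, cap, allocations and emission figures are all hypothetical, not drawn from any real scheme:

```python
# Minimal cap-and-trade sketch: firms hold tradeable permits under a
# regional cap; a firm with surplus permits can sell to one facing a
# shortfall. All names and numbers here are hypothetical.

CAP = 100  # total permits issued for the region (tonnes CO2, illustrative)

permits = {"FirmA": 60, "FirmB": 40}    # initial allocation under the cap
emissions = {"FirmA": 45, "FirmB": 52}  # actual emissions this period

def trade(seller, buyer, amount):
    """Transfer permits from a firm with surplus to one with a shortfall."""
    assert permits[seller] - emissions[seller] >= amount, "seller lacks surplus"
    permits[seller] -= amount
    permits[buyer] += amount

# FirmB is 12 tonnes over its allocation; FirmA has 15 tonnes spare.
trade("FirmA", "FirmB", 12)

for firm in permits:
    status = "compliant" if emissions[firm] <= permits[firm] else "over cap"
    print(firm, status)

# Total emissions (97) stay under the cap (100): the cap binds the region,
# while trading lets reductions happen where they are cheapest.
```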
Foreign direct investment in developing nations provides more efficient and sustainable energy sources.
In 2006, the Clean Development Mechanism was formed under the Kyoto Protocol, providing solar power and new technologies to developing nations. Countries who invest into developing nations can receive emission reduction credits as a reward.
Removal of atmospheric carbon dioxide has been proposed as a way to mitigate climate change, for example by increasing tree densities to absorb carbon dioxide. Other methods involve new technologies that are still in the research and development stage.
Research in environmental finance has sought to determine how to invest strategically in clean technologies. When paired with international legislation, as in the case of the Montreal Protocol on Substances that Deplete the Ozone Layer, environmentally based investments have stimulated emerging industries and reduced the consequences of climate change. The international collaboration would ultimately lead to the changes that repaired the hole in the ozone layer.
Climate finance
Impact
The European Union Emission Trading Scheme was responsible for a 7% reduction in emissions from 2008 to 2012 for the states within the scheme. In 2013, allowances were reviewed to accommodate new emission reduction targets; the new recommended annual reduction target was 1.72%. It is estimated that if the quota of credits had been restricted more tightly, emissions could have been reduced by a total of 25%. Nations such as Romania, Poland and Sweden generated significant revenue from selling credits. Despite successfully reducing emissions, the European Union Emission Trading Scheme has been critiqued for its lack of flexibility in accommodating major shifts in the economic landscape and in reassessing current contexts to provide a revised cap on trading credits, potentially undermining the original objective of the scheme.
The New Zealand Emissions Trading Scheme of 2008 was modelled to increase annual household energy expenditure by 0.8% and fuel prices by approximately 6%. The price of agricultural products such as beef and dairy was modelled to decrease by almost 1%. Price increases in carbon-intensive sectors such as forestry and mining were also expected, incentivising a shift towards renewable energy systems and improved investment strategies with a less harmful environmental impact.
In 2016, the Québec Cap-and-trade scheme was responsible for an 11% reduction in emissions compared to 1990 emission levels. Due to the associated increased energy costs, fuel prices rose 2-3 cents per litre over the duration of the cap and trade scheme.
In 2014, the Clean Development Mechanism was responsible for a 1% reduction in global greenhouse gas emissions. The Clean Development Mechanism has been responsible for removing 7 billion tons of greenhouse gases from the atmosphere through the efforts of almost 8000 individual projects. Despite this success, as the economies of developing nations participating in the Clean Development Mechanism improve, the financial payout to the country supplying such infrastructure increases at a greater rate than economic growth, leading to an unoptimised and counterproductive system.
References
Bibliography
Environmental economics | Environmental finance | [
"Environmental_science"
] | 2,147 | [
"Environmental economics",
"Environmental social science"
] |
47,796 | https://en.wikipedia.org/wiki/Patience | Patience, or forbearance, is the ability to endure difficult or undesired long-term circumstances. Patience involves perseverance or tolerance in the face of delay, provocation, or stress without responding negatively, such as reacting with disrespect or anger. Patience is also used to refer to the character trait of being disciplined and steadfast. Antonyms of patience include impatience, hastiness, and impetuousness.
Scientific perspectives
In psychology and in cognitive neuroscience, patience is studied as a decision-making problem, involving the choice of either a small reward in the short-term, versus a more valuable reward in the long-term.
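Such studies often formalize this choice as temporal discounting: a delayed reward is worth less now than an immediate one. The following is a worked sketch using exponential discounting; the amounts and discount rate are illustrative assumptions, not values from any particular study:

```python
# Small-sooner vs. larger-later choice under exponential discounting.
# Amounts and rate are illustrative, not taken from any cited study.

def discounted_value(amount, delay, rate=0.05):
    """Present value of a reward received after `delay` time steps."""
    return amount / (1 + rate) ** delay

small_now = discounted_value(10, delay=0)     # 10.0
large_later = discounted_value(15, delay=10)  # ~9.21

# At this rate the immediate reward wins; a more "patient" (lower) rate
# would flip the preference toward waiting for the larger reward.
print(small_now, large_later)
```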
In a 2005 study, common marmosets and cottontop tamarins chose between taking an immediate small reward and waiting a variable amount of time for a large reward. Under these conditions, marmosets waited significantly longer for food than tamarins. This difference cannot be explained by life history, social behaviour, or brain size. It can, however, be explained by feeding ecology: marmosets rely on gum, a food product acquired by waiting for exudate to flow from trees, whereas tamarins feed on insects, a food product requiring impulsive action. Foraging ecology, therefore, may provide a selective pressure for the evolution of self-control.
Patience of human users in the online world has been a subject of research. In a 2012 study of tens of millions of users who watched videos on the Internet, Krishnan and Sitaraman showed that users lose patience in as little as two seconds while waiting for their chosen video to start playing. Users who connect to the Internet at faster speeds are less patient than their counterparts at slower speeds, demonstrating a link between the human expectation of speed and human patience. These and other studies of patience led commentators to conclude that the rapid pace of technology is rewiring humans to be less patient.
Religious perspectives
Judaism
Patience and fortitude are prominent themes in Judaism. The Talmud extols patience as an important personal trait. The story of Micah, for example, is that he suffers many challenging conditions and yet endures, saying "I will wait for the God who saves me." Patience in God, it is said, will aid believers in finding the strength to be delivered from the evils that are inherent in the physical life.
In the Hebrew Bible, patience is referred to in several proverbs, such as "The patient man shows much good sense, but the quick-tempered man displays folly at its height" (Proverbs 14:29); "An ill-tempered man stirs up strife, but a patient man allays discord" (Proverbs 15:18); and "A patient man is better than a warrior, and he who rules his temper, than he who takes a city" (Proverbs 16:32). Patience is also discussed in other sections, such as Ecclesiastes: "Better is the patient spirit than the lofty spirit. Do not in spirit become quickly discontented, for discontent lodges in the bosom of a fool" (Ecclesiastes 7:8–9).
Christianity
In the Christian religion, patience is one of the most valuable virtues. The Holy Ghost increases patience in the Christian who has accepted the gift of salvation. While patience is not one of the traditional biblical three theological virtues or one of the traditional cardinal virtues, it is part of the fruit of the Holy Spirit, according to the Apostle Paul in his Epistle to the Galatians. Patience was included in later formulations of the seven virtues.
In the Christian Bible, patience is referred to in several sections. The Book of Proverbs notes that "through patience a ruler can be persuaded, and a gentle tongue can break a bone" (Proverbs 25:15, NIV); Ecclesiastes points out that the "end of a matter is better than its beginning, and patience is better than pride" (Ecclesiastes 7:8, NIV); and 1 Thessalonians states that we should "be patient with all. See that no one returns evil for evil; rather, always seek what is good for each other and for all" (1 Thessalonians 5:14–15, NAB). In the Epistle of James, the Bible urges Christians to be patient, and "see how the farmer waits for the precious fruit of the earth... until it receives the early and the late rains" (James 5:7, NAB). In Galatians, patience is listed as part of the "fruit of the Spirit": "love, joy, peace, patience, kindness, goodness, faithfulness, gentleness, and self-control. Against such things there is no law" (Galatians 5:22–23, NIV). In Timothy, the Bible states that "Jesus might display his unlimited patience as an example for those who would believe on him and receive eternal life" (1 Timothy 1:16, NIV).
Islam
Patience with steadfast belief in Allah is called sabr (Arabic: صبر), one of the best virtues in Islam. Through sabr, a Muslim believes that an individual can grow closer to God and thus attain true peace. Islam stresses that Allah is with those who are patient, more specifically during calamity and suffering. Several verses in the Quran urge Muslims to seek Allah's help when faced with fear and loss, with patient prayers and perseverance for Allah.
Patience is similarly commended in the hadith collected in Sahih Bukhari.
In Islamic tradition, Job (Arabic: أيوب, romanized: Ayyūb) demonstrated patience and steadfast belief in Allah. Ibn Kathir narrates the story in this manner: Job was a very rich person with much land, many animals and children, all of which were lost, and soon he was struck with disease as a test from Allah. He remained steadfast and patient in his prayers to Allah, so Allah eventually relieved him of the disease, gave him double the money he had lost, and raised to life twice the number of children who had died before him.
Buddhism
In Buddhism, patience (Skt.: kṣānti; Pali: khanti) is one of the "perfections" (pāramitās) that a bodhisattva trains in and practices to realize perfect enlightenment (bodhi). The Buddhist concept of patience is distinct from the English definition of the word. In Buddhism, patience refers to not returning harm, rather than merely enduring a difficult situation. It is the ability to control one's emotions even when being criticized or attacked. Verse 184 of the Dhammapada says "enduring patience is the highest austerity".
The Tibetan Buddhist teacher Thubten Zopa recommended that people train in forbearance by taking advantage of encounters with difficult people.
Hinduism
Patience/forbearance is considered an essential virtue in Hinduism. The ancient literature of Hinduism refers to the concept with several Sanskrit words, variously glossed as patience and forbearance, patient toleration, forbearance, and suffering with patience.
Patience, in Hindu philosophy, is the cheerful endurance of trying conditions and of the consequences of one's actions and deeds (karma). It is also the capacity to wait, to endure opposites—such as pain and pleasure, cold and heat, sorrows and joys—calmly, without anxiety, and without a desire to seek revenge. In interpersonal relationships, virtuous patience means that if someone attacks or insults without cause, one must endure it without feeling enmity, anger, resentment, or anxiety. Patience is explained as being more than trust, as a value that reflects the state of one's body and mind. In other contexts, the corresponding terms are sometimes also translated as test or exam. Some of these concepts have been carried into the spiritual understanding of yoga. The Sandilya Upanishad of Hinduism identifies ten sources of patience and forbearance. Implicit in each of these ten forbearances is the belief that our current spirit, and the future for everyone, including oneself, will be stronger if these forbearances are one's guide.
The classical literature of Hinduism exists in many Indian languages. For example, the Tirukkuṛaḷ, sometimes called the Tamil Veda, is one of the most cherished classics on Hinduism written in a South Indian language. It too discusses patience and forbearance, dedicating Chapter 16 of Book 1 to the topic. The Tirukkuṛaḷ suggests patience is necessary for an ethical life and for one's long-term happiness, even if patience is sometimes difficult in the short term. Excerpts from the book include: "our conduct must always foster forbearance"; "one must patiently endure rude remarks, because it delivers us to purity"; "if we are unjustly wronged by others, it is best to conquer our hurt with patience, accept suffering, and refrain from unrighteous retaliation"; "it is good to patiently endure injuries done to you, but to forget them is even better"; "just as the Earth bears those who dig into her, one must with patience bear with those who despise us", and so on.
Meher Baba
The spiritual teacher Meher Baba stated that "[O]ne of the first requirements of the [spiritual] aspirant is that he should combine unfailing enthusiasm with unyielding patience.... Spiritual effort demands not only physical endurance and courage, but also unshrinking forbearance and unassailable moral courage."
Philosophical perspectives
In his 1878 book Human, All Too Human, philosopher Friedrich Nietzsche argued that "being able to wait is so hard that the greatest poets did not disdain to make the inability to wait the theme of their poetry". He notes that "Passion will not wait", and gives the example of cases of duels, in which the "advising friends have to determine whether the parties involved might be able to wait a while longer. If they cannot, then a duel is reasonable [because]... to wait would be to continue suffering the horrible torture of offended honor...".
See also
– Psychological insight into people's queuing behavior
Delayed gratification – Research into an aspect of inhibitory control
References
Spirituality
Emotions
Time management
Virtue
Seven virtues
Fruit of the Holy Spirit | Patience | [
"Physics",
"Biology"
] | 2,090 | [
"Behavior",
"Physical quantities",
"Time",
"Spirituality",
"Time management",
"Spacetime",
"Human behavior"
] |
47,868 | https://en.wikipedia.org/wiki/Data%20stream | In connection-oriented communication, a data stream is the transmission of a sequence of digitally encoded signals to convey information. Typically, the transmitted symbols are grouped into a series of packets.
Data streaming has become ubiquitous. Anything transmitted over the Internet is transmitted as a data stream. Using a mobile phone to have a conversation transmits the sound as a data stream.
Formal definition
In a formal way, a data stream is any ordered pair (s, Δ) where:
s is a sequence of tuples, and
Δ is a sequence of positive real time intervals.
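A minimal sketch of this definition in code; the record fields and interval values below are illustrative assumptions:

```python
# A data stream as the ordered pair (s, Δ): s is a sequence of tuples,
# Δ a sequence of positive real time intervals. Field values are examples.

s = [                    # sequence of tuples (e.g., event records)
    ("user-1", "click"),
    ("user-2", "view"),
    ("user-1", "purchase"),
]
delta = [0.5, 1.2, 0.3]  # positive real time intervals (seconds)

stream = (s, delta)      # the ordered pair (s, Δ)

# Replay the stream: emit each tuple after its associated interval.
t = 0.0
for record, gap in zip(*stream):
    t += gap
    print(f"t={t:.1f}s {record}")
```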
Content
A data stream contains different sets of data, depending on the chosen data format.
Attributes – each attribute of the data stream represents a certain type of data, e.g. segment / data point ID, timestamp, geodata.
The timestamp attribute helps to identify when an event occurred.
The subject ID is an algorithm-encoded ID extracted from a cookie.
Raw Data includes information straight from the data provider, without processing by an algorithm or a human.
Processed Data is data that has been prepared (modified, validated or cleaned) to be used for future actions.
Usage
There are various areas where data streams are used:
Fraud detection & scoring – raw data is used as source data for an anti-fraud algorithm (data analysis techniques for fraud detection). For example, timestamps, cookie occurrences or analysis of data points are used within the scoring system to detect fraud or to make sure that a message receiver is not a bot (so-called Non-Human Traffic).
Artificial intelligence – raw data is used as training and test sets when building AI and machine learning algorithms.
Raw data is used for profiling and personalization to customize user profiles and divide them for segmentation, e.g., per gender or location (based on data point).
Business intelligence – raw data is a source of information for BI systems, used for enriching user profiles with detailed information about them, e.g., purchase path or geodata. This information is used for business analysis and predictive research.
Targeting – data processed by data scientists improves online campaigns and is used to reach the target audience.
CRM Enrichment – raw data is integrated with a customer-relationship management system. CRM integration makes it possible to fill gaps in users' profiles with demographic data, interests or buying intentions.
Integration
Core integrations with data streams are:
Data streams are integrated with systems such as customer data platform (CDP), customer relationship management (CRM) or data management platform (DMP) to enrich users' profiles with external data. It is possible to expand the knowledge about existing users by using external sources.
Data streams are used to enrich business intelligence systems and make analysis more precise and conclusions more accurate.
In the case of content management system (CMS) integration, a data stream is used to identify users and personalize their visit, even if it is their first one. Through data analysis, the content of the website is adapted to the user.
Data streams are integrated with demand side platforms (DSP) within the programmatic advertising ecosystem. Parties (e.g., advertisers) can exchange user IDs and attach existing profiles to them.
Data streams are used to select relevant user segments (e.g., people interested in the automotive industry) and use them in an online campaign. Segments are enriched with additional user characteristics from the data stream and then sent to the DSP.
Data sources visible
A data stream shows which device was used on the user side, as indicated by the user agent (a rough classification sketch follows the lists below):
mobile – when a user browses from a mobile browser or a mobile app version, typically with a narrow screen resolution;
desktop – when a user uses a desktop browser or desktop app version.
The following information is shared from the device used:
The URL of the visited website where an event occurred
User Agent
Geolocation
Internet Protocol (IP)
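As a rough illustration, the device type can be inferred from tokens in the user agent string. The token list below is an illustrative assumption; production systems rely on dedicated user-agent parsing libraries:

```python
# Rough mobile/desktop classification from a User-Agent string.
# The token list is illustrative, not exhaustive.

MOBILE_TOKENS = ("Mobile", "Android", "iPhone", "iPad")

def device_type(user_agent: str) -> str:
    """Label a hit as mobile or desktop based on User-Agent tokens."""
    if any(token in user_agent for token in MOBILE_TOKENS):
        return "mobile"
    return "desktop"

ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) Mobile/15E148"
print(device_type(ua))  # mobile
```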
Formats
A data point is a tag that collects information about a certain action performed by a user on a website. Data points exist in two types, whose values are used to create appropriate audiences:
'event' – information about occurrences of a specific event (e.g., a click on a link or the display of an ad)
'attribute' – numerical or alphanumerical values.
A segment is a logical statement built on specific data points using AND, OR or NOT operators (see the evaluation sketch after this list).
Hybrid data – raw data from both the data point and segment formats.
URLs – a set of information about a particular URL that has been visited.
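The sketch below shows how a segment might be evaluated over data point values; the profile fields and the segment rule itself are hypothetical examples:

```python
# Evaluating a hypothetical segment over data points: a segment is a
# logical statement over data point values combined with AND / OR / NOT.

profile = {
    "clicked_ad": True,        # 'event' data point: an event occurred
    "visits_last_30d": 7,      # 'attribute' data point: numerical value
    "interest": "automotive",  # 'attribute' data point: alphanumerical value
}

def in_segment(p) -> bool:
    """(interested in automotive AND frequent visitor) OR clicked an ad,
    excluding profiles with no recorded visits."""
    frequent = p["visits_last_30d"] >= 5
    return ((p["interest"] == "automotive" and frequent)
            or p["clicked_ad"]) and p["visits_last_30d"] > 0

print(in_segment(profile))  # True
```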
GDPR
Information gathered from websites is based on user behavior. Data providers deliver both personal and non-personal information. There are two types of user data available in a data stream:
Personally identifiable information (PII) – information that identifies a person, either directly or in combination with data identification methods. Examples of PII are: insurance ID, email address, phone number, IP address, geolocation, biometric data.
Non-personally identifiable information (non-PII) is information that can't be used to identify a person or to track a location. A cookie or a device ID is an example of non-PII.
See also
Streaming algorithm
References
Computing terminology
Big data
Business analysis | Data stream | [
"Technology"
] | 1,080 | [
"Computing terminology",
"Data",
"Big data"
] |
47,902 | https://en.wikipedia.org/wiki/Upload | Uploading refers to transmitting data from one computer system to another through means of a network. Common methods of uploading include: uploading via web browsers, FTP clients, and terminals (SCP/SFTP). Uploading can be used in the context of (potentially many) clients that send files to a central server. While uploading can also be defined in the context of sending files between distributed clients, such as with a peer-to-peer (P2P) file-sharing protocol like BitTorrent, the term file sharing is more often used in this case. Moving files within a computer system, as opposed to over a network, is called file copying.
Uploading directly contrasts with downloading, where data is received over a network. In the case of users uploading files over the internet, uploading is often slower than downloading as many internet service providers (ISPs) offer asymmetric connections, which offer more network bandwidth for downloading than uploading.
Definition
To transfer something (such as data or files) from a computer or other digital device to the memory of another device (such as a larger or remote computer), especially via the internet.
Historical development
Remote file sharing first came to fruition in January 1978, when Ward Christensen and Randy Suess, members of the Chicago Area Computer Hobbyists' Exchange (CACHE), created the Computerized Bulletin Board System (CBBS). This used an early file transfer protocol (MODEM, later XMODEM) to send binary files via a hardware modem, accessible by another modem via a telephone number.
In the following years, new protocols such as Kermit were released, until the File Transfer Protocol (FTP) was standardized in 1985 (RFC 959). FTP is based on TCP/IP and gave rise to many FTP clients, which, in turn, gave users all around the world access to the same standard network protocol to transfer data between devices.
The transfer of data saw a significant increase in popularity after the release of the World Wide Web in 1991, which, for the first time, allowed users who were not computer hobbyists to easily share files, directly from their web browser over HTTP.
Resumability of file transfers
Transfers became more reliable with the launch of HTTP/1.1 in 1997 (RFC 2068), which gave users the option to resume downloads that were interrupted, for instance due to unreliable connections. Before web browsers widely rolled out support, software programs like GetRight could be used to resume downloads. Resuming uploads is not currently supported by HTTP, but can be added with the Tus open protocol for resumable file uploads, which layers resumability of uploads on top of existing HTTP connections.
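As an illustration, a client can resume an interrupted download by requesting only the missing bytes with a Range header. This is a minimal sketch: the URL is a placeholder, and the server must support byte ranges (it answers 206 Partial Content if it does):

```python
# Resuming a download over HTTP/1.1 with a Range request (sketch).

import requests

url = "https://example.com/large-file.bin"  # placeholder URL
partial = b"..."                            # bytes already downloaded

resp = requests.get(url, headers={"Range": f"bytes={len(partial)}-"})
if resp.status_code == 206:   # server honoured the byte range
    data = partial + resp.content
else:                         # range not supported: start over
    data = resp.content
```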
Types of uploading
Client-to-server uploading
Transmitting a local file to a remote system following the client–server model, e.g., a web browser transferring a video to a website, is called client-to-server uploading.
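A minimal sketch of such an upload as a multipart/form-data POST; the endpoint URL and the form field name are placeholders for whatever the receiving server expects:

```python
# Client-to-server upload: send a local file to a web server (sketch).

import requests

with open("video.mp4", "rb") as f:
    resp = requests.post("https://example.com/upload",  # placeholder URL
                         files={"file": f})             # multipart upload

print(resp.status_code)  # e.g. 200 or 201 on success
```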
Remote uploading
Transferring data from one remote system to another remote system under the control of a local system is called remote uploading or site-to-site transferring. This is used when a local computer has a slow connection to the remote systems, but these systems have a fast connection between them. Without remote uploading functionality, the data would have to first be downloaded to the local system and then uploaded to the remote server, both times over a slower connection. Remote uploading is used by some online file hosting services. Another example can be found in FTP clients, which often support the File eXchange Protocol (FXP) in order to instruct two FTP servers with high-speed connections to exchange files. A web-based example is the Uppy file uploader that can transfer files from a user's cloud storage such as Dropbox, directly to a website without first going to the user's device.
Peer-to-peer
Peer-to-peer (P2P) is a decentralized communications model in which each party has the same capabilities, and either party can initiate a communication session. Unlike the client–server model, in which the client makes a service request and the server fulfils the request (by sending or accepting a file transfer), the P2P network model allows each node to function as both client and server. BitTorrent is an example of this, as is the InterPlanetary File System (IPFS). Peer-to-peer allows users to both receive (download) and host (upload) content. Files are transferred directly between the users' computers. The same file transfer constitutes an upload for one party, and a download for the other party.
Copyright issues
The rising popularity of file sharing during the 1990s culminated in the emergence of Napster, a music-sharing platform specialized in MP3 files that used peer-to-peer (P2P) file-sharing technology to allow users to exchange files freely. The P2P nature meant there was no central gatekeeper for the content, which eventually led to the widespread availability of copyrighted material through Napster.
The Recording Industry Association of America (RIAA) took notice of Napster's ability to distribute copyrighted music among its user base, and, on December 6, 1999, filed a motion for a preliminary injunction in order to stop the exchange of copyrighted songs on the service. After a failed appeal by Napster, the injunction was granted on March 5, 2001. On September 24, 2001, Napster, which had already shut down its entire network two months earlier, agreed to pay a $26 million settlement.
After Napster had ceased operations, many other P2P file-sharing services also shut down, such as LimeWire, Kazaa and Popcorn Time. Besides software programs, there were many BitTorrent websites that allowed files to be indexed and searched. These files could then be downloaded via a BitTorrent client. While the BitTorrent protocol itself is legal and agnostic of the type of content shared, many of the services that did not enforce a strict policy to take down copyrighted material would eventually also run into legal difficulties.
See also
Bandwidth
Comparison of file transfer protocols
Computer network
Data
Download
File sharing
Lftp
Sideload
Timeline of file sharing
Upload components
References
External links
An All Too-Brief History of File Sharing
Computer networking
Data transmission
Servers (computing) | Upload | [
"Technology",
"Engineering"
] | 1,304 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
47,921 | https://en.wikipedia.org/wiki/Free%20will | Free will is the capacity or ability to choose between different possible courses of action.
Free will is closely linked to the concepts of moral responsibility, praise, culpability, and other judgements which apply only to actions that are freely chosen. It is also connected with the concepts of advice, persuasion, deliberation, and prohibition. Traditionally, only actions that are freely willed are seen as deserving credit or blame. Whether free will exists, what it is and the implications of whether it exists or not constitute some of the longest running debates of philosophy. Some conceive of free will as the ability to act beyond the limits of external influences or wishes.
Some conceive free will to be the capacity to make choices undetermined by past events. Determinism suggests that only one course of events is possible, which is inconsistent with a libertarian model of free will. Ancient Greek philosophy identified this issue, which remains a major focus of philosophical debate. The view that posits free will as incompatible with determinism is called incompatibilism and encompasses both metaphysical libertarianism (the claim that determinism is false and thus free will is at least possible) and hard determinism (the claim that determinism is true and thus free will is not possible). Another incompatibilist position is hard incompatibilism, which holds not only determinism but also indeterminism to be incompatible with free will and thus free will to be impossible whatever the case may be regarding determinism.
In contrast, compatibilists hold that free will is compatible with determinism. Some compatibilists even hold that determinism is necessary for free will, arguing that choice involves preference for one course of action over another, requiring a sense of how choices will turn out. Compatibilists thus consider the debate between libertarians and hard determinists over free will vs. determinism a false dilemma. Different compatibilists offer very different definitions of what "free will" means and consequently find different types of constraints to be relevant to the issue. Classical compatibilists considered free will nothing more than freedom of action, considering one free of will simply if, had one counterfactually wanted to do otherwise, one could have done otherwise without physical impediment. Many contemporary compatibilists instead identify free will as a psychological capacity, such as to direct one's behavior in a way responsive to reason, and there are still further different conceptions of free will, each with their own concerns, sharing only the common feature of not finding the possibility of determinism a threat to the possibility of free will.
History of free will
The problem of free will has been identified in ancient Greek philosophical literature. The notion of compatibilist free will has been attributed to both Aristotle (4th century BCE) and Epictetus (1st century CE): "it was the fact that nothing hindered us from doing or choosing something that made us have control over them". According to Susanne Bobzien, the notion of incompatibilist free will is perhaps first identified in the works of Alexander of Aphrodisias (3rd century CE): "what makes us have control over things is the fact that we are causally undetermined in our decision and thus can freely decide between doing/choosing or not doing/choosing them".
The term "free will" (liberum arbitrium) was introduced by Christian philosophy (4th century CE). It has traditionally meant (until the Enlightenment proposed its own meanings) lack of necessity in human will, so that "the will is free" meant "the will does not have to be such as it is". This requirement was universally embraced by both incompatibilists and compatibilists.
Western philosophy
The underlying questions are whether we have control over our actions, and if so, what sort of control, and to what extent. These questions predate the early Greek stoics (for example, Chrysippus), and some modern philosophers lament the lack of progress over all these centuries.
On one hand, humans have a strong sense of freedom, which leads them to believe that they have free will. On the other hand, an intuitive feeling of free will could be mistaken.
It is difficult to reconcile the intuitive evidence that conscious decisions are causally effective with the view that the physical world can be explained entirely by physical law. The conflict between intuitively felt freedom and natural law arises when either causal closure or physical determinism (nomological determinism) is asserted. With causal closure, no physical event has a cause outside the physical domain, and with physical determinism, the future is determined entirely by preceding events (cause and effect).
The puzzle of reconciling 'free will' with a deterministic universe is known as the problem of free will or sometimes referred to as the dilemma of determinism. This dilemma leads to a moral dilemma as well: the question of how to assign responsibility for actions if they are caused entirely by past events.
Compatibilists maintain that mental reality is not of itself causally effective. Classical compatibilists have addressed the dilemma of free will by arguing that free will holds as long as humans are not externally constrained or coerced. Modern compatibilists make a distinction between freedom of will and freedom of action, that is, separating freedom of choice from the freedom to enact it. Given that humans all experience a sense of free will, some modern compatibilists think it is necessary to accommodate this intuition. Compatibilists often associate freedom of will with the ability to make rational decisions.
A different approach to the dilemma is that of incompatibilists, namely, that if the world is deterministic, then our feeling that we are free to choose an action is simply an illusion. Metaphysical libertarianism is the form of incompatibilism which posits that determinism is false and free will is possible (at least some people have free will). This view is associated with non-materialist constructions, including both traditional dualism, as well as models supporting more minimal criteria; such as the ability to consciously veto an action or competing desire. Yet even with physical indeterminism, arguments have been made against libertarianism in that it is difficult to assign Origination (responsibility for "free" indeterministic choices).
Free will here is predominantly treated with respect to physical determinism in the strict sense of nomological determinism, although other forms of determinism are also relevant to free will. For example, logical and theological determinism challenge metaphysical libertarianism with ideas of destiny and fate, and biological, cultural and psychological determinism feed the development of compatibilist models. Separate classes of compatibilism and incompatibilism may even be formed to represent these.
Below are the classic arguments bearing upon the dilemma and its underpinnings.
Incompatibilism
Incompatibilism is the position that free will and determinism are logically incompatible, and that the major question regarding whether or not people have free will is thus whether or not their actions are determined. "Hard determinists", such as d'Holbach, are those incompatibilists who accept determinism and reject free will. In contrast, "metaphysical libertarians", such as Thomas Reid, Peter van Inwagen, and Robert Kane, are those incompatibilists who accept free will and deny determinism, holding the view that some form of indeterminism is true. Another view is that of hard incompatibilists, which state that free will is incompatible with both determinism and indeterminism.
Traditional arguments for incompatibilism are based on an "intuition pump": if a person is like other mechanical things that are determined in their behavior such as a wind-up toy, a billiard ball, a puppet, or a robot, then people must not have free will. This argument has been rejected by compatibilists such as Daniel Dennett on the grounds that, even if humans have something in common with these things, it remains possible and plausible that we are different from such objects in important ways.
Another argument for incompatibilism is that of the "causal chain". Incompatibilism is key to the idealist theory of free will. Most incompatibilists reject the idea that freedom of action consists simply in "voluntary" behavior. They insist, rather, that free will means that someone must be the "ultimate" or "originating" cause of his actions. They must be causa sui, in the traditional phrase. Being responsible for one's choices is the first cause of those choices, where first cause means that there is no antecedent cause of that cause. The argument, then, is that if a person has free will, then they are the ultimate cause of their actions. If determinism is true, then all of a person's choices are caused by events and facts outside their control. So, if everything someone does is caused by events and facts outside their control, then they cannot be the ultimate cause of their actions. Therefore, they cannot have free will. This argument has also been challenged by various compatibilist philosophers.
A third argument for incompatibilism was formulated by Carl Ginet in the 1960s and has received much attention in the modern literature. The simplified argument runs along these lines: if determinism is true, then we have no control over the events of the past that determined our present state and no control over the laws of nature. Since we can have no control over these matters, we also can have no control over the consequences of them. Since our present choices and acts, under determinism, are the necessary consequences of the past and the laws of nature, then we have no control over them and, hence, no free will. This is called the consequence argument. Peter van Inwagen remarks that C.D. Broad had a version of the consequence argument as early as the 1930s.
The difficulty of this argument for some compatibilists lies in the fact that it entails the impossibility that one could have chosen other than one has. For example, if Jane is a compatibilist and she has just sat down on the sofa, then she is committed to the claim that she could have remained standing, if she had so desired. But it follows from the consequence argument that, if Jane had remained standing, she would have either generated a contradiction, violated the laws of nature or changed the past. Hence, compatibilists are committed to the existence of "incredible abilities", according to Ginet and van Inwagen. One response to this argument is that it equivocates on the notions of abilities and necessities, or that the free will evoked to make any given choice is really an illusion and the choice had been made all along, oblivious to its "decider". David Lewis suggests that compatibilists are only committed to the ability to do something otherwise if different circumstances had actually obtained in the past.
Using T and F for "true" and "false" and ? for undecided, there are exactly nine positions regarding determinism (D) and free will (FW), each position assigning one of these three values to each concept.
Incompatibilism may occupy any of the nine positions except (5), (8) or (3), which last corresponds to soft determinism. Position (1) is hard determinism, and position (2) is libertarianism. The position (1) of hard determinism adds to the table the contention that D implies FW is untrue, and the position (2) of libertarianism adds the contention that FW implies D is untrue. Position (9) may be called hard incompatibilism if one interprets ? as meaning both concepts are of dubious value. Compatibilism itself may occupy any of the nine positions, that is, there is no logical contradiction between determinism and free will, and either or both may be true or false in principle. However, the most common meaning attached to compatibilism is that some form of determinism is true and yet we have some form of free will, position (3).
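The "exactly nine" claim is simple combinatorics: three truth values assigned independently to each of the two theses. A short enumeration, attaching only the labels the text itself identifies:

```python
# Enumerate the nine (determinism, free will) positions. Only the
# numberings given in the text are labelled; the rest are left open.

from itertools import product

labels = {
    ("T", "F"): "(1) hard determinism",
    ("F", "T"): "(2) libertarianism",
    ("T", "T"): "(3) soft determinism (compatibilism)",
    ("?", "?"): "(9) hard incompatibilism, on one reading",
}

for d, fw in product("TF?", repeat=2):
    print(f"D={d} FW={fw} {labels.get((d, fw), '')}")
# 3 values x 2 theses -> exactly 9 positions.
```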
Alex Rosenberg makes an extrapolation of physical determinism as inferred on the macroscopic scale by the behaviour of a set of dominoes to neural activity in the brain where: "If the brain is nothing but a complex physical object whose states are as much governed by physical laws as any other physical object, then what goes on in our heads is as fixed and determined by prior events as what goes on when one domino topples another in a long row of them." Physical determinism is currently disputed by prominent interpretations of quantum mechanics, and while not necessarily representative of intrinsic indeterminism in nature, fundamental limits of precision in measurement are inherent in the uncertainty principle. The relevance of such prospective indeterminate activity to free will is, however, contested, even when chaos theory is introduced to magnify the effects of such microscopic events.
Below these positions are examined in more detail.
Hard determinism
Determinism can be divided into causal, logical and theological determinism. Corresponding to each of these different meanings, there arises a different problem for free will. Hard determinism is the claim that determinism is true, and that it is incompatible with free will, so free will does not exist. Although hard determinism generally refers to nomological determinism (see causal determinism below), it can include all forms of determinism that necessitate the future in its entirety. Relevant forms of determinism include:
Causal determinism – the idea that everything is caused by prior conditions, making it impossible for anything else to happen. In its most common form, nomological (or scientific) determinism, future events are necessitated by past and present events combined with the laws of nature. Such determinism is sometimes illustrated by the thought experiment of Laplace's demon. Imagine an entity that knows all facts about the past and the present, and knows all natural laws that govern the universe. If the laws of nature were determinate, then such an entity would be able to use this knowledge to foresee the future, down to the smallest detail.
Logical determinism – the notion that all propositions, whether about the past, present or future, are either true or false. The problem of free will, in this context, is the problem of how choices can be free, given that what one does in the future is already determined as true or false in the present.
Theological determinism – the idea that the future is already determined, either by a creator deity decreeing or knowing its outcome in advance. The problem of free will, in this context, is the problem of how our actions can be free if there is a being who has determined them for us in advance, or if they are already set in time.
Other forms of determinism are more relevant to compatibilism, such as biological determinism, the idea that all behaviors, beliefs, and desires are fixed by our genetic endowment and our biochemical makeup, the latter of which is affected by both genes and environment, cultural determinism and psychological determinism. Combinations and syntheses of determinist theses, such as bio-environmental determinism, are even more common.
It has been suggested that hard determinism need not maintain strict determinism, and that something weaker, such as what is informally known as adequate determinism, is perhaps more relevant. Despite this, hard determinism has grown less popular in present times, given scientific suggestions that determinism is false – yet the intent of the position is sustained by hard incompatibilism.
Metaphysical libertarianism
One kind of incompatibilism, metaphysical libertarianism holds onto a concept of free will that requires that the agent be able to take more than one possible course of action under a given set of circumstances.
Accounts of libertarianism subdivide into non-physical theories and physical or naturalistic theories. Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, which requires that the world is not closed under physics. This includes interactionist dualism, which claims that some non-physical mind, will, or soul overrides physical causality. Physical determinism implies there is only one possible future and is therefore not compatible with libertarian free will. As a consequence of incompatibilism, metaphysical libertarian explanations that do not involve dispensing with physicalism require physical indeterminism, such as probabilistic subatomic particle behavior – a theory unknown to many of the early writers on free will. Incompatibilist theories can be categorised based on the type of indeterminism they require: uncaused events, non-deterministically caused events, and agent/substance-caused events.
Non-causal theories
Non-causal accounts of incompatibilist free will do not require a free action to be caused by either an agent or a physical event. They either rely upon a world that is not causally closed, or physical indeterminism. Non-causal accounts often claim that each intentional action requires a choice or volition – a willing, trying, or endeavoring on behalf of the agent (such as the cognitive component of lifting one's arm). Such intentional actions are interpreted as free actions. It has been suggested, however, that such acting cannot be said to exercise control over anything in particular. According to non-causal accounts, the causation by the agent cannot be analysed in terms of causation by mental states or events, including desire, belief, intention of something in particular, but rather is considered a matter of spontaneity and creativity. The exercise of intent in such intentional actions is not that which determines their freedom – intentional actions are rather self-generating. The "actish feel" of some intentional actions do not "constitute that event's activeness, or the agent's exercise of active control", rather they "might be brought about by direct stimulation of someone's brain, in the absence of any relevant desire or intention on the part of that person". Another question raised by such non-causal theory, is how an agent acts upon reason, if the said intentional actions are spontaneous.
Some non-causal explanations involve invoking panpsychism, the theory that a quality of mind is associated with all particles, and pervades the entire universe, in both animate and inanimate entities.
Event-causal theories
Event-causal accounts of incompatibilist free will typically rely upon physicalist models of mind (like those of the compatibilist), yet they presuppose physical indeterminism, in which certain indeterministic events are said to be caused by the agent. A number of event-causal accounts of free will have been created, referenced here as deliberative indeterminism, centred accounts, and efforts of will theory. The first two accounts do not require free will to be a fundamental constituent of the universe. Ordinary randomness is appealed to as supplying the "elbow room" that libertarians believe necessary. A first common objection to event-causal accounts is that the indeterminism could be destructive and could therefore diminish control by the agent rather than provide it (related to the problem of origination). A second common objection to these models is that it is questionable whether such indeterminism could add any value to deliberation over that which is already present in a deterministic world.
Deliberative indeterminism asserts that the indeterminism is confined to an earlier stage in the decision process. This is intended to provide an indeterminate set of possibilities to choose from, while not risking the introduction of luck (random decision making). The selection process is deterministic, although it may be based on earlier preferences established by the same process. Deliberative indeterminism has been referenced by Daniel Dennett and John Martin Fischer. An obvious objection to such a view is that an agent cannot be assigned ownership over their decisions (or preferences used to make those decisions) to any greater degree than that of a compatibilist model.
Centred accounts propose that for any given decision between two possibilities, the strength of reason will be considered for each option, yet there is still a probability the weaker candidate will be chosen. An obvious objection to such a view is that decisions are explicitly left up to chance, and origination or responsibility cannot be assigned for any given decision.
Efforts of will theory is related to the role of will power in decision making. It suggests that the indeterminacy of agent volition processes could map to the indeterminacy of certain physical events – and the outcomes of these events could therefore be considered caused by the agent. Models of volition have been constructed in which it is seen as a particular kind of complex, high-level process with an element of physical indeterminism. An example of this approach is that of Robert Kane, where he hypothesizes that "in each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes – a hindrance or obstacle in the form of resistance within her will which must be overcome by effort." According to Robert Kane such "ultimate responsibility" is a required condition for free will. An important factor in such a theory is that the agent cannot be reduced to physical neuronal events, but rather mental processes are said to provide an equally valid account of the determination of outcome as their physical processes (see non-reductive physicalism).
Although at the time quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, in his book Miracles: A Preliminary Study C. S. Lewis stated the logical possibility that, if the physical world were proved indeterministic, this would provide an entry point to describe an action of a non-physical entity on physical reality. Indeterministic physical models (particularly those involving quantum indeterminacy) introduce random occurrences at an atomic or subatomic level. These events might affect brain activity, and could seemingly allow incompatibilist free will if the apparent indeterminacy of some mental processes (for instance, subjective perceptions of control in conscious volition) maps to the underlying indeterminacy of the physical construct. This relationship, however, requires a causative role over probabilities that is questionable, and it is far from established that brain activity responsible for human action can be affected by such events. Secondarily, these incompatibilist models are dependent upon the relationship between action and conscious volition, as studied in the neuroscience of free will. It is evident that observation may disturb the outcome of the observation itself, limiting our ability to identify causality. Niels Bohr, one of the main architects of quantum theory, suggested, however, that no connection could be made between the indeterminism of nature and freedom of will.
Agent/substance-causal theories
Agent/substance-causal accounts of incompatibilist free will rely upon substance dualism in their description of mind. The agent is assumed power to intervene in the physical world.
Agent (substance)-causal accounts have been suggested by both George Berkeley and Thomas Reid. It is required that what the agent causes is not causally determined by prior events. It is also required that the agent's causing of that event is not causally determined by prior events. A number of problems have been identified with this view. Firstly, it is difficult to establish the reason for any given choice by the agent, which suggests such choices may be random or determined by luck (without an underlying basis for the free will decision). Secondly, it has been questioned whether physical events can be caused by an external substance or mind – a common problem associated with interactionist dualism.
Hard incompatibilism
Hard incompatibilism is the idea that free will cannot exist, whether the world is deterministic or not. Derk Pereboom has defended hard incompatibilism, identifying a variety of positions where free will is irrelevant to indeterminism/determinism, among them the following:
1. Determinism (D) is true, D does not imply we lack free will (F), but in fact we do lack F.
2. D is true, D does not imply we lack F, but in fact we don't know if we have F.
3. D is true, and we do have F.
4. D is true, we have F, and F implies D.
5. D is unproven, but we have F.
6. D isn't true, we do have F, and would have F even if D were true.
7. D isn't true, we don't have F, but F is compatible with D.
Derk Pereboom, Living without Free Will, p. xvi.
Pereboom calls positions 3 and 4 soft determinism, position 1 a form of hard determinism, position 6 a form of classical libertarianism, and any position that includes having F as compatibilism.
John Locke denied that the phrase "free will" made any sense (compare with theological noncognitivism, a similar stance on the existence of God). He also took the view that the truth of determinism was irrelevant. He believed that the defining feature of voluntary behavior was that individuals have the ability to postpone a decision long enough to reflect or deliberate upon the consequences of a choice: "...the will in truth, signifies nothing but a power, or ability, to prefer or choose".
The contemporary philosopher Galen Strawson agrees with Locke that the truth or falsity of determinism is irrelevant to the problem. He argues that the notion of free will leads to an infinite regress and is therefore senseless.
According to Strawson, if one is responsible for what one does in a given situation, then one must be responsible for the way one is in certain mental respects. But it is impossible for one to be responsible for the way one is in any respect. This is because to be responsible in some situation S, one must have been responsible for the way one was at S−1. To be responsible for the way one was at S−1, one must have been responsible for the way one was at S−2, and so on. At some point in the chain, there must have been an act of origination of a new causal chain. But this is impossible. Man cannot create himself or his mental states ex nihilo. This argument entails that free will itself is absurd, but not that it is incompatible with determinism. Strawson calls his own view "pessimism" but it can be classified as hard incompatibilism.
Causal determinism
Causal determinism is the concept that events within a given paradigm are bound by causality in such a way that any state (of an object or event) is completely determined by prior states. Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. Causal determinists believe that there is nothing uncaused or self-caused. The most common form of causal determinism is nomological determinism (or scientific determinism), the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events. Quantum mechanics poses a serious challenge to this view.
Fundamental debate continues over whether the physical universe is likely to be deterministic. Although the scientific method cannot be used to rule out indeterminism with respect to violations of causal closure, it can be used to identify indeterminism in natural law. Interpretations of quantum mechanics at present are both deterministic and indeterministic, and are being constrained by ongoing experimentation.
Destiny and fate
Destiny or fate is a predetermined course of events. It may be conceived as a predetermined future, whether in general or of an individual. It is a concept based on the belief that there is a fixed natural order to the cosmos.
Although often used interchangeably, the words "fate" and "destiny" have distinct connotations.
Fate generally implies there is a set course that cannot be deviated from, and over which one has no control. Fate is related to determinism, but makes no specific claim of physical determinism. Even with physical indeterminism an event could still be fated externally (see for instance theological determinism).
Destiny implies there is a set course that cannot be deviated from, but does not of itself make any claim with respect to the setting of that course (i.e., it does not necessarily conflict with incompatibilist free will). Destiny is likewise related to determinism, but makes no specific claim of physical determinism: even with physical indeterminism an event could still be destined to occur, and free will, if it exists, could be the mechanism by which that destined outcome is chosen.
Logical determinism
Discussion regarding destiny does not necessitate the existence of supernatural powers. Logical determinism, or determinateness, is the notion that all propositions, whether about the past, present, or future, are either true or false. This creates a unique problem for free will, given that propositions about the future already have a truth value in the present (that is, they are already determined as either true or false); this is referred to as the problem of future contingents.
Omniscience
Omniscience is the capacity to know everything that there is to know (including all future events), and is a property often attributed to a creator deity. Omniscience implies the existence of destiny. Some authors have claimed that free will cannot coexist with omniscience. One argument asserts that an omniscient creator not only implies destiny but a form of high-level predeterminism such as hard theological determinism or predestination – that the creator has independently fixed all events and outcomes in the universe in advance. In such a case, even if an individual could have influence over their lower-level physical system, their choices in regard to this cannot be their own, as is the case with libertarian free will. Omniscience features as an incompatible-properties argument for the existence of God, known as the argument from free will, and is closely related to other such arguments, for example the incompatibility of omnipotence with a good creator deity (i.e. if a deity knows what a person will choose, then the deity is responsible for letting the person choose it).
Predeterminism
Predeterminism is the idea that all events are determined in advance. Predeterminism is the philosophy that all events of history, past, present and future, have been decided or are known (by God, fate, or some other force), including human actions. Predeterminism is frequently taken to mean that human actions cannot interfere with (or have no bearing on) the outcomes of a pre-determined course of events, and that one's destiny was established externally (for example, exclusively by a creator deity). The concept of predeterminism is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. In the case of predeterminism, this chain of events has been pre-established, and human actions cannot interfere with the outcomes of this pre-established chain. Predeterminism can be used to mean such pre-established causal determinism, in which case it is categorised as a specific type of determinism. It can also be used interchangeably with causal determinism – in the context of its capacity to determine future events. Despite this, predeterminism is often considered as independent of causal determinism. The term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form of biological determinism.
The term predeterminism suggests not just a determining of all events, but the prior and deliberately conscious determining of all events (therefore done, presumably, by a conscious being). While determinism usually refers to a naturalistically explainable causality of events, predeterminism seems by definition to suggest a person or a "someone" who is controlling or planning the causality of events before they occur and who then perhaps resides beyond the natural, causal universe. Predestination asserts that a supremely powerful being has indeed fixed all events and outcomes in the universe in advance, and is a famous doctrine of the Calvinists in Christian theology. Predestination is often considered a form of hard theological determinism.
Predeterminism has therefore been compared to fatalism. Fatalism is the idea that everything is fated to happen, so that humans have no control over their future.
Theological determinism
Theological determinism is a form of determinism stating that all events that happen are pre-ordained, or predestined to happen, by a monotheistic deity, or that they are destined to occur given its omniscience. Two forms of theological determinism exist, here referenced as strong and weak theological determinism.
The first one, strong theological determinism, is based on the concept of a creator deity dictating all events in history: "everything that happens has been predestined to happen by an omniscient, omnipotent divinity."
The second form, weak theological determinism, is based on the concept of divine foreknowledge – "because God's omniscience is perfect, what God knows about the future will inevitably happen, which means, consequently, that the future is already fixed."
There exist slight variations on the above categorisation. Some claim that theological determinism requires predestination of all events and outcomes by the divinity (that is, they do not classify the weaker version as 'theological determinism' unless libertarian free will is assumed to be denied as a consequence), or that the weaker version does not constitute 'theological determinism' at all. Theological determinism can also be seen as a form of causal determinism, in which the antecedent conditions are the nature and will of God. With respect to free will and the classification of theological compatibilism/incompatibilism below, "theological determinism is the thesis that God exists and has infallible knowledge of all true propositions including propositions about our future actions" – a more minimal criterion designed to encapsulate all forms of theological determinism.
There are various implications for metaphysical libertarian free will as consequent of theological determinism and its philosophical interpretation.
Strong theological determinism is not compatible with metaphysical libertarian free will, and is a form of hard theological determinism (equivalent to theological fatalism below). It claims that free will does not exist, and God has absolute control over a person's actions. Hard theological determinism is similar in implication to hard determinism, although it does not invalidate compatibilist free will. Hard theological determinism is a form of theological incompatibilism (see figure, top left).
Weak theological determinism is either compatible or incompatible with metaphysical libertarian free will depending upon one's philosophical interpretation of omniscience – and as such is interpreted as either a form of hard theological determinism (known as theological fatalism), or as soft theological determinism (terminology used for clarity only). Soft theological determinism claims that humans have free will to choose their actions, holding that God, while knowing their actions before they happen, does not affect the outcome. God's providence is "compatible" with voluntary choice. Soft theological determinism is known as theological compatibilism (see figure, top right). A rejection of theological determinism (or divine foreknowledge) is classified as theological incompatibilism also (see figure, bottom), and is relevant to a more general discussion of free will.
The basic argument for theological fatalism in the case of weak theological determinism is as follows:
Assume divine foreknowledge or omniscience
Infallible foreknowledge implies destiny (it is known for certain what one will do)
Destiny eliminates alternate possibilities (one cannot do otherwise)
Assert incompatibility with metaphysical libertarian free will
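These steps can be given a compact schematic rendering in modal notation – offered here only as an illustrative sketch, with K read as "God infallibly foreknows that" and □ read as "it is now fixed (unavoidable) that"; the symbols are a presentational convention, not part of the argument's original wording:
□(Kp → p) – infallibility: it cannot be that God foreknows p and p is false
□Kp – fixity of the past: the foreknowledge itself already lies in the unalterable past
Therefore □p – by the transfer-of-necessity principle, p itself is now unavoidable
If □p, the agent cannot do otherwise, which is the asserted incompatibility with metaphysical libertarian free will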
This argument is very often accepted as a basis for theological incompatibilism: denying either libertarian free will or divine foreknowledge (omniscience) and therefore theological determinism. On the other hand, theological compatibilism must attempt to find problems with it. The formal version of the argument rests on a number of premises, many of which have received some degree of contention. Theological compatibilist responses have included:
Deny the truth value of future contingents, although this denies foreknowledge and therefore theological determinism.
Assert differences in non-temporal knowledge (space-time independence), an approach taken for example by Boethius, Thomas Aquinas, and C.S. Lewis.
Deny the Principle of Alternate Possibilities: "If you cannot do otherwise when you do an act, you do not act freely." For example, a human observer could in principle have a machine that could detect what will happen in the future, but the existence of this machine or their use of it has no influence on the outcomes of events.
In the definition of compatibilism and incompatibilism, the literature often fails to distinguish between physical determinism and higher level forms of determinism (predeterminism, theological determinism, etc.) As such, hard determinism with respect to theological determinism (or "Hard Theological Determinism" above) might be classified as hard incompatibilism with respect to physical determinism (if no claim was made regarding the internal causality or determinism of the universe), or even compatibilism (if freedom from the constraint of determinism was not considered necessary for free will), if not hard determinism itself. By the same principle, metaphysical libertarianism (a form of incompatibilism with respect to physical determinism) might be classified as compatibilism with respect to theological determinism (if it was assumed such free will events were pre-ordained and therefore were destined to occur, but of which whose outcomes were not "predestined" or determined by God). If hard theological determinism is accepted (if it was assumed instead that such outcomes were predestined by God), then metaphysical libertarianism is not, however, possible, and would require reclassification (as hard incompatibilism for example, given that the universe is still assumed to be indeterministic – although the classification of hard determinism is technically valid also).
Mind–body problem
The idea of free will is one aspect of the mind–body problem, that is, consideration of the relation between mind (for example, consciousness, memory, and judgment) and body (for example, the human brain and nervous system). Philosophical models of mind are divided into physical and non-physical expositions.
Cartesian dualism holds that the mind is a nonphysical substance, the seat of consciousness and intelligence, and is not identical with physical states of the brain or body. It is suggested that although the two worlds do interact, each retains some measure of autonomy. Under Cartesian dualism the external mind is responsible for bodily action, although unconscious brain activity is often caused by external events (for example, the instantaneous reaction to being burned). Cartesian dualism implies that the physical world is not deterministic, and that an external mind controls (at least some) physical events, providing an interpretation of incompatibilist free will. Stemming from Cartesian dualism, a formulation sometimes called interactionalist dualism suggests a two-way interaction, that some physical events cause some mental acts and some mental acts cause some physical events. One modern vision of the possible separation of mind and body is the "three-world" formulation of Popper. Cartesian dualism and Popper's three worlds are two forms of what is called epistemological pluralism, that is the notion that different epistemological methodologies are necessary to attain a full description of the world. Other forms of epistemological pluralist dualism include psychophysical parallelism and epiphenomenalism. Epistemological pluralism is one view in which the mind–body problem is not reducible to the concepts of the natural sciences.
A contrasting approach is called physicalism. Physicalism is a philosophical theory holding that everything that exists is no more extensive than its physical properties; that is, that there are no non-physical substances (for example physically independent minds). Physicalism can be reductive or non-reductive. Reductive physicalism is grounded in the idea that everything in the world can actually be reduced analytically to its fundamental physical, or material, basis. Alternatively, non-reductive physicalism asserts that mental properties form a separate ontological class to physical properties: that mental states (such as qualia) are not ontologically reducible to physical states. Although one might suppose that mental states and neurological states are different in kind, that does not rule out the possibility that mental states are correlated with neurological states. In one such construction, anomalous monism, mental events supervene on physical events, describing the emergence of mental properties correlated with physical properties – implying causal reducibility. Non-reductive physicalism is therefore often categorised as property dualism rather than monism, yet other types of property dualism do not adhere to the causal reducibility of mental states (see epiphenomenalism).
Incompatibilism requires a distinction between the mental and the physical, being a commentary on the incompatibility of (determined) physical reality and one's presumably distinct experience of will. Secondarily, metaphysical libertarian free will must assert influence on physical reality, and where mind is responsible for such influence (as opposed to ordinary system randomness), it must be distinct from body to accomplish this. Both substance and property dualism offer such a distinction, and those particular models thereof that are not causally inert with respect to the physical world provide a basis for illustrating incompatibilist free will (i.e. interactionalist dualism and non-reductive physicalism).
It has been noted that the laws of physics have yet to resolve the hard problem of consciousness: "Solving the hard problem of consciousness involves determining how physiological processes such as ions flowing across the nerve membrane cause us to have experiences." According to some, "Intricately related to the hard problem of consciousness, the hard problem of free will represents the core problem of conscious free will: Does conscious volition impact the material world?" Others however argue that "consciousness plays a far smaller role in human life than Western culture has tended to believe."
Compatibilism
Compatibilists maintain that determinism is compatible with free will. They believe freedom can be present or absent in a situation for reasons that have nothing to do with metaphysics. For instance, courts of law make judgments about whether individuals are acting under their own free will under certain circumstances without bringing in metaphysics. Similarly, political liberty is a non-metaphysical concept. Likewise, some compatibilists define free will as freedom to act according to one's determined motives without hindrance from other individuals; Aristotle in his Nicomachean Ethics and the Stoic Chrysippus are early examples of this approach.
In contrast, the incompatibilist positions are concerned with a sort of "metaphysically free will", which compatibilists claim has never been coherently defined. Compatibilists argue that determinism does not matter; though they disagree among themselves about what, in turn, does matter. To be a compatibilist, one need not endorse any particular conception of free will, but only deny that determinism is at odds with free will.
Although there are various impediments to exercising one's choices, free will does not imply freedom of action. Freedom of choice (freedom to select one's will) is logically separate from freedom to implement that choice (freedom to enact one's will), although not all writers observe this distinction. Nonetheless, some philosophers have defined free will as the absence of various impediments. Some "modern compatibilists", such as Harry Frankfurt and Daniel Dennett, argue free will is simply freely choosing to do what constraints allow one to do. In other words, a coerced agent's choices can still be free if such coercion coincides with the agent's personal intentions and desires.
Free will as lack of physical restraint
Most "classical compatibilists", such as Thomas Hobbes, claim that a person is acting on the person's own will only when it is the desire of that person to do the act, and also possible for the person to be able to do otherwise, if the person had decided to. Hobbes sometimes attributes such compatibilist freedom to each individual and not to some abstract notion of will, asserting, for example, that "no liberty can be inferred to the will, desire, or inclination, but the liberty of the man; which consisteth in this, that he finds no stop, in doing what he has the will, desire, or inclination to doe." In articulating this crucial proviso, David Hume writes, "this hypothetical liberty is universally allowed to belong to every one who is not a prisoner and in chains." Similarly, Voltaire, in his Dictionnaire philosophique, claimed that "Liberty then is only and can be only the power to do what one will." He asked, "would you have everything at the pleasure of a million blind caprices?" For him, free will or liberty is "only the power of acting, what is this power? It is the effect of the constitution and present state of our organs."
Free will as a psychological state
Compatibilism often regards the agent as free in virtue of their reason. Some explanations of free will focus on the internal causality of the mind with respect to higher-order brain processing – the interaction between conscious and unconscious brain activity. Likewise, some modern compatibilists in psychology have tried to revive traditionally accepted struggles of free will with the formation of character. Compatibilist free will has also been attributed to our natural sense of agency, where one must believe they are an agent in order to function and develop a theory of mind.
The notion of levels of decision is presented in a different manner by Frankfurt. Frankfurt argues for a version of compatibilism called the "hierarchical mesh". The idea is that an individual can have conflicting desires at a first-order level and also have a desire about the various first-order desires (a second-order desire) to the effect that one of the desires prevails over the others. A person's will is identified with their effective first-order desire, that is, the one they act on, and this will is free if it was the desire the person wanted to act upon, that is, the person's second-order desire was effective. So, for example, there are "wanton addicts", "unwilling addicts" and "willing addicts". All three groups may have the conflicting first-order desires to want to take the drug they are addicted to and to not want to take it.
The first group, wanton addicts, have no second-order desire not to take the drug. The second group, "unwilling addicts", have a second-order desire not to take the drug, while the third group, "willing addicts", have a second-order desire to take it. According to Frankfurt, the members of the first group are devoid of will and therefore are no longer persons. The members of the second group freely desire not to take the drug, but their will is overcome by the addiction. Finally, the members of the third group willingly take the drug they are addicted to. Frankfurt's theory can ramify to any number of levels. Critics of the theory point out that there is no certainty that conflicts will not arise even at the higher-order levels of desire and preference. Others argue that Frankfurt offers no adequate explanation of how the various levels in the hierarchy mesh together.
Free will as unpredictability
In Elbow Room, Dennett presents an argument for a compatibilist theory of free will, which he further elaborated in the book Freedom Evolves. The basic reasoning is that, if one excludes God, an infinitely powerful demon, and other such possibilities, then because of chaos and epistemic limits on the precision of our knowledge of the current state of the world, the future is ill-defined for all finite beings. The only well-defined things are "expectations". The ability to do "otherwise" only makes sense when dealing with these expectations, and not with some unknown and unknowable future.
According to Dennett, because individuals have the ability to act differently from what anyone expects, free will can exist. Incompatibilists claim the problem with this idea is that we may be mere "automata responding in predictable ways to stimuli in our environment". Therefore, all of our actions are controlled by forces outside ourselves, or by random chance. More sophisticated analyses of compatibilist free will have been offered, as have other critiques.
In the philosophy of decision theory, a fundamental question is: From the standpoint of statistical outcomes, to what extent do the choices of a conscious being have the ability to influence the future? Newcomb's paradox and other philosophical problems pose questions about free will and predictable outcomes of choices.
The physical mind
Compatibilist models of free will often consider deterministic relationships as discoverable in the physical world (including the brain). Cognitive naturalism is a physicalist approach to studying human cognition and consciousness in which the mind is simply part of nature, perhaps merely a feature of many very complex self-programming feedback systems (for example, neural networks and cognitive robots), and so must be studied by the methods of empirical science, such as the behavioral and cognitive sciences (i.e. neuroscience and cognitive psychology). Cognitive naturalism stresses the role of neurological sciences. Overall brain health, substance dependence, depression, and various personality disorders clearly influence mental activity, and their impact upon volition is also important. For example, an addict may experience a conscious desire to escape addiction, but be unable to do so. The "will" is disconnected from the freedom to act. This situation is related to an abnormal production and distribution of dopamine in the brain. The neuroscience of free will places restrictions on both compatibilist and incompatibilist free will conceptions.
Compatibilist models adhere to models of mind in which mental activity (such as deliberation) can be reduced to physical activity without any change in physical outcome. Although compatibilism is generally aligned to (or is at least compatible with) physicalism, some compatibilist models describe the natural occurrences of deterministic deliberation in the brain in terms of the first person perspective of the conscious agent performing the deliberation. Such an approach has been considered a form of identity dualism. A description of "how conscious experience might affect brains" has been provided in which "the experience of conscious free will is the first-person perspective of the neural correlates of choosing."
Recently, Claudio Costa developed a neocompatibilist theory based on the causal theory of action that is complementary to classical compatibilism. According to him, physical, psychological and rational restrictions can interfere at different levels of the causal chain that would naturally lead to action. Correspondingly, there can be physical restrictions to the body, psychological restrictions to the decision, and rational restrictions to the formation of reasons (desires plus beliefs) that should lead to what we would call a reasonable action. The last two are usually called "restrictions of free will". The restriction at the level of reasons is particularly important since it can be motivated by external reasons that are insufficiently conscious to the agent. One example was the collective suicide led by Jim Jones. The suicidal agents were not conscious that their free will had been manipulated by external, even if ungrounded, reasons.
Non-naturalism
Alternatives to strictly naturalist physics, such as mind–body dualism positing a mind or soul existing apart from one's body while perceiving, thinking, choosing freely, and as a result acting independently on the body, include both traditional religious metaphysics and less common newer compatibilist concepts. Also consistent with both autonomy and Darwinism, they allow for free personal agency based on practical reasons within the laws of physics. While less popular among 21st-century philosophers, non-naturalist compatibilism is present in most, if not all, religions.
Other views
Some philosophers' views are difficult to categorize as either compatibilist or incompatibilist, hard determinist or libertarian. For example, Ted Honderich holds the view that "determinism is true, compatibilism and incompatibilism are both false" and the real problem lies elsewhere. Honderich maintains that determinism is true because quantum phenomena are not events or things that can be located in space and time, but are abstract entities. Further, even if they were micro-level events, they do not seem to have any relevance to how the world is at the macroscopic level. He maintains that incompatibilism is false because, even if indeterminism is true, incompatibilists have not provided, and cannot provide, an adequate account of origination. He rejects compatibilism because it, like incompatibilism, assumes a single, fundamental notion of freedom. There are really two notions of freedom: voluntary action and origination. Both notions are required to explain freedom of will and responsibility. Both determinism and indeterminism are threats to such freedom. To abandon these notions of freedom would be to abandon moral responsibility. On the one side, we have our intuitions; on the other, the scientific facts. The "new" problem is how to resolve this conflict.
Free will as an illusion
"Experience teaches us no less clearly than reason, that men believe themselves free, simply because they are conscious of their actions, and unconscious of the causes whereby those actions are determined." Baruch Spinoza, Ethics
David Hume discussed the possibility that the entire debate about free will is nothing more than a merely "verbal" issue. He suggested that it might be accounted for by "a false sensation or seeming experience" (a velleity), which is associated with many of our actions when we perform them. On reflection, we realize that they were necessary and determined all along.
According to Arthur Schopenhauer, the actions of humans, as phenomena, are subject to the principle of sufficient reason and thus liable to necessity. Thus, he argues, humans do not possess free will as conventionally understood. However, the will [urging, craving, striving, wanting, and desiring], as the noumenon underlying the phenomenal world, is in itself groundless: that is, not subject to time, space, and causality (the forms that govern the world of appearance). Thus, the will, in itself and outside of appearance, is free. Schopenhauer discussed the puzzle of free will and moral responsibility in The World as Will and Representation, Book 2, Sec. 23.
Schopenhauer elaborated on the topic in Book IV of the same work and in even greater depth in his later essay On the Freedom of the Will. In this work, he stated, "You can do what you will, but in any given moment of your life you can will only one definite thing and absolutely nothing other than that one thing."
Free will as "moral imagination"
Rudolf Steiner, who collaborated in a complete edition of Arthur Schopenhauer's work, wrote The Philosophy of Freedom, which focuses on the problem of free will. Steiner (1861–1925) initially divides this into the two aspects of freedom: freedom of thought and freedom of action. The controllable and uncontrollable aspects of decision making thereby are made logically separable, as pointed out in the introduction. This separation of will from action has a very long history, going back at least as far as Stoicism and the teachings of Chrysippus (279–206 BCE), who separated external antecedent causes from the internal disposition receiving this cause.
Steiner then argues that inner freedom is achieved when we integrate our sensory impressions, which reflect the outer appearance of the world, with our thoughts, which lend coherence to these impressions and thereby disclose to us an understandable world. Acknowledging the many influences on our choices, he nevertheless points out that they do not preclude freedom unless we fail to recognise them. Steiner argues that outer freedom is attained by permeating our deeds with moral imagination. "Moral" in this case refers to action that is willed, while "imagination" refers to the mental capacity to envision conditions that do not already hold. Both of these functions are necessary conditions for freedom. Steiner aims to show that these two aspects of inner and outer freedom are integral to one another, and that true freedom is only achieved when they are united.
Free will as a pragmatically useful concept
William James' views were ambivalent. While he believed in free will on "ethical grounds", he did not believe that there was evidence for it on scientific grounds, nor did his own introspections support it. Ultimately he believed that the problem of free will was a metaphysical issue and, therefore, could not be settled by science. Moreover, he did not accept incompatibilism as formulated above; he did not believe that the indeterminism of human actions was a prerequisite of moral responsibility. In his work Pragmatism, he wrote that "instinct and utility between them can safely be trusted to carry on the social business of punishment and praise" regardless of metaphysical theories. He did believe that indeterminism is important as a "doctrine of relief" – it allows for the view that, although the world may be in many respects a bad place, it may, through individuals' actions, become a better one. Determinism, he argued, undermines meliorism – the idea that progress is a real concept leading to improvement in the world.
Free will and views of causality
In 1739, David Hume in his A Treatise of Human Nature approached free will via the notion of causality. It was his position that causality was a mental construct used to explain the repeated association of events, and that one must examine more closely the relation between things regularly succeeding one another (descriptions of regularity in nature) and things that result in other things (things that cause or necessitate other things). According to Hume, 'causation' is on weak grounds: "Once we realise that 'A must bring about B' is tantamount merely to 'Due to their constant conjunction, we are psychologically certain that B will follow A,' then we are left with a very weak notion of necessity."
This empiricist view was often denied by trying to prove the so-called apriority of causal law (i.e. that it precedes all experience and is rooted in the construction of the perceivable world):
Kant's proof in Critique of Pure Reason (which referenced time and time ordering of causes and effects)
Schopenhauer's proof from The Fourfold Root of the Principle of Sufficient Reason (which referenced the so-called intellectuality of representations, that is, in other words, objects and qualia perceived with senses)
In the 1780s Immanuel Kant suggested at a minimum our decision processes with moral implications lie outside the reach of everyday causality, and lie outside the rules governing material objects. "There is a sharp difference between moral judgments and judgments of fact... Moral judgments... must be a priori judgments."
Freeman introduces what he calls "circular causality" to "allow for the contribution of self-organizing dynamics", the "formation of macroscopic population dynamics that shapes the patterns of activity of the contributing individuals", applicable to "interactions between neurons and neural masses... and between the behaving animal and its environment". In this view, mind and neurological functions are tightly coupled in a situation where feedback between collective actions (mind) and individual subsystems (for example, neurons and their synapses) jointly decide upon the behaviour of both.
Free will according to Thomas Aquinas
Thirteenth century philosopher Thomas Aquinas viewed humans as pre-programmed (by virtue of being human) to seek certain goals, but able to choose between routes to achieve these goals (our Aristotelian telos). His view has been associated with both compatibilism and libertarianism.
In facing choices, he argued that humans are governed by intellect, will, and passions. The will is "the primary mover of all the powers of the soul... and it is also the efficient cause of motion in the body." Choice falls into five stages: (i) intellectual consideration of whether an objective is desirable, (ii) intellectual consideration of means of attaining the objective, (iii) will arrives at an intent to pursue the objective, (iv) will and intellect jointly decide upon choice of means, and (v) will elects execution. Free will enters as follows: Free will is an "appetitive power", that is, not a cognitive power of intellect (the term "appetite" from Aquinas's definition "includes all forms of internal inclination"). He states that judgment "concludes and terminates counsel. Now counsel is terminated, first, by the judgment of reason; secondly, by the acceptation of the appetite [that is, the free-will]."
A compatibilist interpretation of Aquinas's view is defended thus: "Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature."
Free will as a pseudo-problem
Historically, most of the philosophical effort invested in resolving the dilemma has taken the form of close examination of definitions and ambiguities in the concepts designated by "free", "freedom", "will", "choice" and so forth. Defining 'free will' often revolves around the meaning of phrases like "ability to do otherwise" or "alternative possibilities". This emphasis upon words has led some philosophers to claim the problem is merely verbal and thus a pseudo-problem. In response, others point out the complexity of decision making and the importance of nuances in the terminology.
Eastern philosophy
Buddhist philosophy
Buddhism accepts both freedom and determinism (or something similar to it), but despite its focus on human agency, it rejects the western concept of a total agent from external sources. According to the Buddha, "There is free action, there is retribution, but I see no agent that passes out from one set of momentary elements into another one, except the [connection] of those elements." Buddhists believe in neither absolute free will, nor determinism. It preaches a middle doctrine, named pratītyasamutpāda in Sanskrit, often translated as "dependent origination", "dependent arising" or "conditioned genesis". It teaches that every volition is a conditioned action as a result of ignorance. In part, it states that free will is inherently conditioned and not "free" to begin with. It is also part of the theory of karma in Buddhism. The concept of karma in Buddhism is different from the notion of karma in Hinduism. In Buddhism, the idea of karma is much less deterministic. The Buddhist notion of karma is primarily focused on the cause and effect of moral actions in this life, while in Hinduism the concept of karma is more often connected with determining one's destiny in future lives.
In Buddhism it is taught that the idea of absolute freedom of choice (that is that any human being could be completely free to make any choice) is unwise, because it denies the reality of one's physical needs and circumstances. Equally incorrect is the idea that humans have no choice in life or that their lives are pre-determined. To deny freedom would be to deny the efforts of Buddhists to make moral progress (through our capacity to freely choose compassionate action). Pubbekatahetuvada, the belief that all happiness and suffering arise from previous actions, is considered a wrong view according to Buddhist doctrines. Because Buddhists also reject agenthood, the traditional compatibilist strategies are closed to them as well. Instead, the Buddhist philosophical strategy is to examine the metaphysics of causality. Ancient India had many heated arguments about the nature of causality with Jains, Nyayists, Samkhyists, Cārvākans, and Buddhists all taking slightly different lines. In many ways, the Buddhist position is closer to a theory of "conditionality" (idappaccayatā) than a theory of "causality", especially as it is expounded by Nagarjuna in the Mūlamadhyamakakārikā.
Hindu philosophy
The six orthodox (astika) schools of thought in Hindu philosophy do not agree with each other entirely on the question of free will. For the Samkhya, for instance, matter is without any freedom, and soul lacks any ability to control the unfolding of matter. The only real freedom (kaivalya) consists in realizing the ultimate separateness of matter and self. For the Yoga school, only Ishvara is truly free, and its freedom is also distinct from all feelings, thoughts, actions, or wills, and is thus not at all a freedom of will. The metaphysics of the Nyaya and Vaisheshika schools strongly suggest a belief in determinism, but do not seem to make explicit claims about determinism or free will.
Remarks by Swami Vivekananda, a Vedantist, offer a good example of the worry about free will in the Hindu tradition.
However, Vivekananda's remarks have often been misinterpreted as implying that everything is predetermined. What Vivekananda actually meant by lack of free will was that the will was not "free" because it was heavily influenced by the law of cause and effect – "The will is not free, it is a phenomenon bound by cause and effect, but there is something behind the will which is free." Vivekananda never said things were absolutely determined and placed emphasis on the power of conscious choice to alter one's past karma: "It is the coward and the fool who says this is his fate. But it is the strong man who stands up and says I will make my own fate."
Within Vedanta, Madhvacharya argues that souls do not have any free will as Lord Vishnu prescribes all their actions.
Scientific approaches
Science has contributed to the free will problem in at least three ways. First, physics has addressed the question of whether nature is deterministic, which is viewed as crucial by incompatibilists (compatibilists, however, view it as irrelevant). Second, although free will can be defined in various ways, all of them involve aspects of the way people make decisions and initiate actions, which have been studied extensively by neuroscientists. Some of the experimental observations are widely viewed as implying that free will does not exist or is an illusion (but many philosophers see this as a misunderstanding). Third, psychologists have studied the beliefs that the majority of ordinary people hold about free will and its role in assigning moral responsibility.
From an anthropological perspective, free will can be regarded as an explanation for human behavior that justifies a socially sanctioned system of rewards and punishments. Under this definition, free will functions as a political ideology: a doctrine that members of a society are taught to believe.
Quantum physics
Early scientific thought often portrayed the universe as deterministic – for example in the thought of Democritus or the Cārvākans – and some thinkers claimed that the simple process of gathering sufficient information would allow them to predict future events with perfect accuracy. Modern science, on the other hand, is a mixture of deterministic and stochastic theories. Quantum mechanics predicts events only in terms of probabilities, casting doubt on whether the universe is deterministic at all, although evolution of the universal state vector is completely deterministic. Current physical theories cannot resolve the question of whether determinism is true of the world, being very far from a potential theory of everything, and open to many different interpretations.
Assuming that an indeterministic interpretation of quantum mechanics is correct, one may still object that such indeterminism is for all practical purposes confined to microscopic phenomena. This is not always the case: many macroscopic phenomena are based on quantum effects. For instance, some hardware random number generators work by amplifying quantum effects into practically usable signals. A more significant question is whether the indeterminism of quantum mechanics allows for the traditional idea of free will (based on a perception of free will). If a person's action is, however, only a result of complete quantum randomness, mental processes as experienced have no influence on the probabilistic outcomes (such as volition). According to many interpretations, indeterminism enables free will to exist, while others assert the opposite (because the action was not controllable by the physical being who claims to possess the free will).
Genetics
Like physicists, biologists have frequently addressed questions related to free will. One of the most heated debates in biology is that of "nature versus nurture", concerning the relative importance of genetics and biology as compared to culture and environment in human behavior. The view of many researchers is that many human behaviors can be explained in terms of humans' brains, genes, and evolutionary histories. This point of view raises the fear that such attribution makes it impossible to hold others responsible for their actions. Steven Pinker's view is that fear of determinism in the context of "genetics" and "evolution" is a mistake, that it is "a confusion of explanation with exculpation". Responsibility does not require that behavior be uncaused, as long as behavior responds to praise and blame. Moreover, it is not certain that environmental determination is any less threatening to free will than genetic determination.
Neuroscience and neurophilosophy
It has become possible to study the living brain, and researchers can now watch the brain's decision-making process at work. A seminal experiment in this field was conducted by Benjamin Libet in the 1980s, in which he asked each subject to choose a random moment to flick their wrist while he measured the associated activity in their brain; in particular, the build-up of an electrical signal called the readiness potential (from the German Bereitschaftspotential, discovered by Kornhuber and Deecke in 1965). Although it was well known that the readiness potential reliably preceded the physical action, Libet asked whether it could be recorded before the conscious intention to move. To determine when subjects felt the intention to move, he asked them to watch the second hand of a clock. After making a movement, the volunteer reported the time on the clock when they first felt the conscious intention to move; this became known as Libet's W time.
Libet found that the unconscious brain activity of the readiness potential leading up to subjects' movements began approximately half a second before the subject was aware of a conscious intention to move.
These studies of the timing between actions and the conscious decision bear upon the role of the brain in understanding free will. A subject's declaration of intention to move a finger appears after the brain has begun to implement the action, suggesting to some that unconsciously the brain has made the decision before the conscious mental act to do so. Some believe the implication is that free will was not involved in the decision and is an illusion. The first of these experiments reported the brain registered activity related to the move about 0.2 s before movement onset. However, these authors also found that awareness of action was anticipatory to activity in the muscle underlying the movement; the entire process resulting in action involves more steps than just the onset of brain activity. The bearing of these results upon notions of free will appears complex.
Some argue that placing the question of free will in the context of motor control is too narrow. The objection is that the time scales involved in motor control are very short, and motor control involves a great deal of unconscious action, with much physical movement entirely unconscious. On that basis "...free will cannot be squeezed into time frames of 150–350 ms; free will is a longer term phenomenon" and free will is a higher level activity that "cannot be captured in a description of neural activity or of muscle activation..." The bearing of timing experiments upon free will is still under discussion.
More studies have since been conducted, including some that try to:
support Libet's original findings
suggest that the cancelling or "veto" of an action may first arise subconsciously as well
explain the underlying brain structures involved
suggest models that explain the relationship between conscious intention and action
Benjamin Libet's results are quoted in favor of epiphenomenalism, but he believes subjects still have a "conscious veto", since the readiness potential does not invariably lead to an action. In Freedom Evolves, Daniel Dennett argues that a no-free-will conclusion is based on dubious assumptions about the location of consciousness, as well as questioning the accuracy and interpretation of Libet's results. Kornhuber and Deecke underlined that absence of conscious will during the early Bereitschaftspotential (termed BP1) is not a proof of the non-existence of free will, as also unconscious agendas may be free and non-deterministic. According to their suggestion, man has relative freedom, i.e. freedom in degrees, that can be increased or decreased through deliberate choices that involve both conscious and unconscious (panencephalic) processes.
Others have argued that data such as the Bereitschaftspotential undermine epiphenomenalism for the same reason, that such experiments rely on a subject reporting the point in time at which a conscious experience occurs, thus relying on the subject to be able to consciously perform an action. That ability would seem to be at odds with early epiphenomenalism, which according to Huxley is the broad claim that consciousness is "completely without any power... as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery".
Adrian G. Guggisberg and Annaïs Mottaz have also challenged those findings.
A study by Aaron Schurger and colleagues published in the Proceedings of the National Academy of Sciences challenged assumptions about the causal nature of the readiness potential itself (and the "pre-movement buildup" of neural activity in general), casting doubt on conclusions drawn from studies such as Libet's and Fried's.
A study that compared deliberate and arbitrary decisions found that the early signs of decision are absent for the deliberate ones.
It has been shown that in several brain-related conditions, individuals cannot entirely control their own actions, though the existence of such conditions does not directly refute the existence of free will. Neuroscientific studies are valuable tools in developing models of how humans experience free will.
For example, people with Tourette syndrome and related tic disorders make involuntary movements and utterances (called tics) despite the fact that they would prefer not to do so when it is socially inappropriate. Tics are described as semi-voluntary or unvoluntary, because they are not strictly involuntary: they may be experienced as a voluntary response to an unwanted, premonitory urge. Tics are experienced as irresistible and must eventually be expressed. People with Tourette syndrome are sometimes able to suppress their tics for limited periods, but doing so often results in an explosion of tics afterward. The control exerted (from seconds to hours at a time) may merely postpone and exacerbate the ultimate expression of the tic.
In alien hand syndrome, the affected individual's limb will produce unintentional movements without the will of the person. The affected limb effectively demonstrates 'a will of its own.' The sense of agency does not emerge in conjunction with the overt appearance of the purposeful act even though the sense of ownership in relationship to the body part is maintained. This phenomenon corresponds with an impairment in the premotor mechanism manifested temporally by the appearance of the readiness potential recordable on the scalp several hundred milliseconds before the overt appearance of a spontaneous willed movement. Using functional magnetic resonance imaging with specialized multivariate analyses to study the temporal dimension in the activation of the cortical network associated with voluntary movement in human subjects, an anterior-to-posterior sequential activation process beginning in the supplementary motor area on the medial surface of the frontal lobe and progressing to the primary motor cortex and then to parietal cortex has been observed. The sense of agency thus appears to normally emerge in conjunction with this orderly sequential network activation incorporating premotor association cortices together with primary motor cortex. In particular, the supplementary motor complex on the medial surface of the frontal lobe appears to activate prior to primary motor cortex, presumably in association with a preparatory pre-movement process. In a recent study using functional magnetic resonance imaging, alien movements were characterized by a relatively isolated activation of the primary motor cortex contralateral to the alien hand, while voluntary movements of the same body part included the natural activation of motor association cortex associated with the premotor process. The clinical definition requires "feeling that one limb is foreign or has a will of its own, together with observable involuntary motor activity" (emphasis in original). This syndrome is often a result of damage to the corpus callosum, either when it is severed to treat intractable epilepsy or due to a stroke. The standard neurological explanation is that the felt will reported by the speaking left hemisphere does not correspond with the actions performed by the non-speaking right hemisphere, thus suggesting that the two hemispheres may have independent senses of will.
In addition, one of the most important ("first rank") diagnostic symptoms of schizophrenia is the patient's delusion of being controlled by an external force. People with schizophrenia will sometimes report that, although they are acting in the world, they do not recall initiating the particular actions they performed. This is sometimes likened to being a robot controlled by someone else. Although the neural mechanisms of schizophrenia are not yet clear, one influential hypothesis is that there is a breakdown in brain systems that compare motor commands with the feedback received from the body (known as proprioception), leading to attendant hallucinations and delusions of control.
Experimental psychology
Experimental psychology's contributions to the free will debate have come primarily through social psychologist Daniel Wegner's work on conscious will. In his book, The Illusion of Conscious Will, Wegner summarizes what he believes is empirical evidence supporting the view that human perception of conscious control is an illusion. Wegner summarizes some empirical evidence that may suggest that the perception of conscious control is open to modification (or even manipulation). Wegner observes that one event is inferred to have caused a second event when two requirements are met:
The first event immediately precedes the second event, and
The first event is consistent with having caused the second event.
For example, if a person hears an explosion and sees a tree fall down that person is likely to infer that the explosion caused the tree to fall over. However, if the explosion occurs after the tree falls down (that is, the first requirement is not met), or rather than an explosion, the person hears the ring of a telephone (that is, the second requirement is not met), then that person is not likely to infer that either noise caused the tree to fall down.
Wegner has applied this principle to the inferences people make about their own conscious will. People typically experience a thought that is consistent with a behavior, and then they observe themselves performing this behavior. As a result, people infer that their thoughts must have caused the observed behavior. However, Wegner has been able to manipulate people's thoughts and behaviors so as to conform to or violate the two requirements for causal inference. Through such work, Wegner has been able to show that people often experience conscious will over behaviors that they have not, in fact, caused – and conversely, that people can be led to experience a lack of will over behaviors they did cause. For instance, priming subjects with information about an effect increases the probability that they falsely believe themselves to be its cause. The implication of such work is that the perception of conscious will (which he says might be more accurately labelled as 'the emotion of authorship') is not tethered to the execution of actual behaviors, but is inferred from various cues through an intricate mental process, authorship processing. Although many interpret this work as a blow against the argument for free will, both psychologists and philosophers have criticized Wegner's theories.
Emily Pronin has argued that the subjective experience of free will is supported by the introspection illusion. This is the tendency for people to trust the reliability of their own introspections while distrusting the introspections of other people. The theory implies that people will more readily attribute free will to themselves rather than others. This prediction has been confirmed by three of Pronin and Kugler's experiments. When college students were asked about personal decisions in their own and their roommate's lives, they regarded their own choices as less predictable. Staff at a restaurant described their co-workers' lives as more determined (having fewer future possibilities) than their own lives. When weighing up the influence of different factors on behavior, students gave desires and intentions the strongest weight for their own behavior, but rated personality traits as most predictive of other people.
Caveats have, however, been identified in studying a subject's awareness of mental events, in that the process of introspection itself may alter the experience.
Regardless of the validity of belief in free will, it may be beneficial to understand where the idea comes from. One contribution is randomness. While it is established that randomness is not the only factor in the perception of free will, it has been shown that randomness can be mistaken for free will due to its indeterminacy. This misconception applies both when considering oneself and others. Another contribution is choice. It has been demonstrated that people's belief in free will increases if presented with a simple level of choice. The specificity of the amount of choice is important, as too little or too great a degree of choice may negatively influence belief. It is also likely that the relationship between level of choice and perception of free will is bidirectional. It is also possible that one's desire for control, or other basic motivational patterns, act as a third variable.
Believing in free will
Since at least 1959, belief in free will among individuals has been analysed with respect to traits in social behaviour. In general, the concept of free will researched to date in this context has been that of the incompatibilist or, more specifically, the libertarian – that is, freedom from determinism.
What people believe
Whether people naturally adhere to an incompatibilist model of free will has been questioned in the research. Eddy Nahmias has found that incompatibilism is not intuitive – it was not adhered to, in that determinism does not negate belief in moral responsibility (based on an empirical study of people's responses to moral dilemmas under a deterministic model of reality). Edward Cokely has found that incompatibilism is intuitive – it was naturally adhered to, in that determinism does indeed negate belief in moral responsibility in general. Joshua Knobe and Shaun Nichols have proposed that incompatibilism may or may not be intuitive, and that it is dependent to a large degree upon the circumstances; whether or not the crime incites an emotional response – for example, if it involves harming another human being. They found that belief in free will is a cultural universal, and that the majority of participants said that (a) our universe is indeterministic and (b) moral responsibility is not compatible with determinism.
Studies indicate that people's belief in free will is inconsistent. Emily Pronin and Matthew Kugler found that people believe they have more free will than others.
Studies also reveal a correlation between the likelihood of accepting a deterministic model of mind and personality type. For example, Adam Feltz and Edward Cokely found that people of an extrovert personality type are more likely to dissociate belief in determinism from belief in moral responsibility.
Roy Baumeister and colleagues reviewed literature on the psychological effects of a belief (or disbelief) in free will and found that most people tend to believe in a sort of "naive compatibilistic free will".
The researchers also found that people consider acts more "free" when they involve a person opposing external forces, planning, or making random actions. Notably, the last behaviour, "random" actions, may not be possible; when participants attempt to perform tasks in a random manner (such as generating random numbers), their behaviour betrays many patterns.
Among philosophers
A 2020 survey showed that compatibilism is quite a popular stance among those who specialize in philosophy (59.2%). Belief in libertarianism amounted to 18.8%, while a lack of belief in free will equaled 11.2%.
Among evolutionary biologists
According to a survey conducted in 2007, 79 percent of evolutionary biologists said that they believe in free will, 14 percent chose no free will, and 7 percent did not answer the question.
Effects of the belief itself
Baumeister and colleagues found that provoking disbelief in free will seems to cause various negative effects. The authors concluded, in their paper, that it is belief in determinism that causes those negative effects. Kathleen Vohs has found that those whose belief in free will had been eroded were more likely to cheat. In a study conducted by Roy Baumeister, after participants read an article arguing against free will, they were more likely to lie about their performance on a test where they would be rewarded with cash. Provoking a rejection of free will has also been associated with increased aggression and less helpful behaviour. However, although these initial studies suggested that believing in free will is associated with more morally praiseworthy behavior, more recent studies (including direct, multi-site replications) with substantially larger sample sizes have reported contradictory findings (typically, no association between belief in free will and moral behavior), casting doubt over the original findings.
Moreover, whether or not these experimental findings are a result of actual manipulations in belief in free will is a matter of debate. First of all, free will can at least refer to either libertarian (indeterministic) free will or compatibilistic (deterministic) free will. Having participants read articles that simply "disprove free will" is unlikely to increase their understanding of determinism, or the compatibilistic free will that it still permits. In other words, experimental manipulations purporting to "provoke disbelief in free will" may instead cause a belief in fatalism, which may provide an alternative explanation for previous experimental findings. To test the effects of belief in determinism, it has been argued that future studies would need to provide articles that do not simply "attack free will", but instead focus on explaining determinism and compatibilism.
Baumeister and colleagues also note that volunteers disbelieving in free will are less capable of counterfactual thinking. This is worrying because counterfactual thinking ("If I had done something different...") is an important part of learning from one's choices, including those that harmed others. Again, this cannot be taken to mean that belief in determinism is to blame; these are the results we would expect from increasing people's belief in fatalism.
Along similar lines, Tyler Stillman has found that belief in free will predicts better job performance.
In theology
Christianity
The notions of free will and predestination are heavily debated among Christians. Free will in the Christian sense is the ability to choose between good and evil. Among Catholics, there are those holding to Thomism, adopted from what Thomas Aquinas put forth in the Summa Theologica. There are also some holding to Molinism, which was put forth by Jesuit priest Luis de Molina. Among Protestants there is Arminianism, held primarily by the Methodist Churches, and formulated by Dutch theologian Jacobus Arminius; and there is also Calvinism, held by most in the Reformed tradition, which was formulated by the French Reformed theologian John Calvin. John Calvin was heavily influenced by Augustine of Hippo's views on predestination put forth in his work On the Predestination of the Saints. Martin Luther seems to have held views on predestination similar to Calvinism in his On the Bondage of the Will, thus rejecting free will. In condemnation of Calvin's and Luther's views, the Roman Catholic Council of Trent declared that "the free will of man, moved and excited by God, can by its consent co-operate with God, Who excites and invites its action; and that it can thereby dispose and prepare itself to obtain the grace of justification. The will can resist grace if it chooses. It is not like a lifeless thing, which remains purely passive. Weakened and diminished by Adam's fall, free will is yet not destroyed in the race (Sess. VI, cap. i and v)." John Wesley, the father of the Methodist tradition, taught that humans, enabled by prevenient grace, have free will, through which they can choose God and do good works, with the goal of Christian perfection. Upholding synergism (the belief that God and man cooperate in salvation), Methodism teaches that "Our Lord Jesus Christ did so die for all men as to make salvation attainable by every man that cometh into the world. If men are not saved that fault is entirely their own, lying solely in their own unwillingness to obtain the salvation offered to them. (John 1:9; I Thess. 5:9; Titus 2:11-12)."
Paul the Apostle discusses Predestination in some of his Epistles.
"For whom He foreknew, He also predestined to become conformed to the image of His Son, that He might be the first-born among many brethren; and whom He predestined, these He also called; and whom He called, these He also justified; and whom He justified, these He also glorified." —Romans 8:29–30
"He predestined us to adoption as sons through Jesus Christ to Himself, according to the kind intention of His will." —Ephesians 1:5
There are also mentions of moral freedom in what are now termed as 'Deuterocanonical' works which the Orthodox and Catholic Churches use. In Sirach 15 the text states:
"Do not say: "It was God's doing that I fell away," for what he hates he does not do. Do not say: "He himself has led me astray," for he has no need of the wicked. Abominable wickedness the Lord hates and he does not let it happen to those who fear him. God in the beginning created human beings and made them subject to their own free choice. If you choose, you can keep the commandments; loyalty is doing the will of God. Set before you are fire and water; to whatever you choose, stretch out your hand. Before everyone are life and death, whichever they choose will be given them. Immense is the wisdom of the Lord; mighty in power, he sees all things. The eyes of God behold his works, and he understands every human deed. He never commands anyone to sin, nor shows leniency toward deceivers."
- Ben Sira 15:11-20 NABRE
The exact meaning of these verses has been debated by Christian theologians throughout history.
Judaism
In Jewish thought the concept of free will is foundational.
The most succinct statement is by Maimonides, who in a two-part treatment specified human free will as part of the universe's Godly design.
Maimonides reasoned that human beings must have free will (at least in the context of choosing to do good or evil), as without this, the demands of the prophets would have been meaningless, there would be no need for the Torah and Mitzvot ("commandments"), and justice could not be administered.
At the same time, Maimonides – and other thinkers – recognized the paradox that arises given (i) that Judaism simultaneously affirms God's omniscience, and further (ii) the nature of Divine providence as understood in Judaism. (In fact the problem may be seen to overlap several others in Jewish philosophy.)
Islam
In Islam the theological issue is not usually how to reconcile free will with God's foreknowledge, but with God's jabr, or divine commanding power. Al-Ash'ari developed an "acquisition" or "dual-agency" form of compatibilism, in which human free will and divine jabr were both asserted, and which became a cornerstone of the dominant Ash'ari position. In Shia Islam, the Ash'ari understanding of a balance tilted toward predestination is challenged by most theologians. Free will, according to Islamic doctrine, is the main factor for man's accountability in his/her actions throughout life. Actions taken by people exercising free will are counted on the Day of Judgement because they are their own; however, free will operates with the permission of God.
In contrast, the Mu'tazila, known as the rationalist school of Islam, holds a position opposite to that of the Ash'arite and other Islamic theological schools in its view of free will and divine justice, because the Mu'tazila have a doctrine that emphasizes God's justice ('Adl). The Mu'tazila believe that humans themselves create their will and actions, so human actions and movements are not a destiny driven solely by God and do not necessarily require God's permission. For the Mu'tazila, humans themselves create their actions and behavior consciously through free will, which is formulated and carried out by the brain and nervous system. Thus, this condition guarantees God's justice when judging every human being on the Day of Judgement.
Others
The philosopher Søren Kierkegaard claimed that divine omnipotence cannot be separated from divine goodness. As a truly omnipotent and good being, God could create beings with true freedom over God. Furthermore, God would voluntarily do so because "the greatest good... which can be done for a being, greater than anything else that one can do for it, is to be truly free." Alvin Plantinga's free-will defense is a contemporary expansion of this theme, adding how God, free will, and evil are consistent.
Some philosophers follow William of Ockham in holding that necessity and possibility are defined with respect to a given point in time and a given matrix of empirical circumstances, and so something that is merely possible from the perspective of one observer may be necessary from the perspective of an omniscient. Some philosophers follow Philo of Alexandria, a philosopher known for his anthropocentrism, in holding that free will is a feature of a human's soul, and thus that non-human animals lack free will.
See also
Agency in Mormonism
Angst#Existentialist angst
Buridan's ass
De libero arbitrio – early treatise about the freedom of will by Augustine of Hippo
Free will theorem
Locus of control
Problem of mental causation
Prospection
Superdeterminism
True Will
Voluntarism (philosophy)
Will to power
References
Citations
Bibliography
Hawking, Stephen, and Mlodinow, Leonard, The Grand Design, New York, Bantam Books, 2010.
Horst, Steven (2011), Laws, Mind, and Free Will. (MIT Press)
Sri Aurobindo about freedom and free will (PDF)
Further reading
Dennett, Daniel C. (2003). Freedom Evolves. New York: Viking Press
Epstein J.M. (1999). Agent Based Models and Generative Social Science. Complexity, IV (5).
Gazzaniga, M. & Steven, M.S. (2004) Free Will in the 21st Century: A Discussion of Neuroscience and Law, in Garland, B. (ed.) Neuroscience and the Law: Brain, Mind and the Scales of Justice, New York: Dana Press, , pp. 51–70.
Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
Harnad, Stevan (2009) The Explanatory Gap #PhilPapers
Harris, Sam. 2012. Free Will. Free Press.
Hofstadter, Douglas. (2007) I Am A Strange Loop. Basic Books.
Kane, Robert (1998). The Significance of Free Will. New York: Oxford University Press
Lawhead, William F. (2005). The Philosophical Journey: An Interactive Approach. McGraw-Hill Humanities/Social Sciences/Languages .
Libet, Benjamin; Anthony Freeman; and Keith Sutherland, eds. (1999). The Volitional Brain: Towards a Neuroscience of Free Will. Exeter, UK: Imprint Academic. Collected essays by scientists and philosophers.
Muhm, Myriam (2004). Abolito il libero arbitrio – Colloquio con Wolf Singer. L'Espresso 19.08.2004 larchivio.org
Nowak A., Vallacher R.R., Tesser A., Borkowski W. (2000). Society of Self: The emergence of collective properties in self-structure. Psychological Review. 107
Schopenhauer, Arthur (1839). On the Freedom of the Will., Oxford: Basil Blackwell .
Tosun, Ender (2020). Free Will Under the Light of the Quran,
Van Inwagen, Peter (1986). An Essay on Free Will. New York: Oxford University Press .
Velmans, Max (2003) How Could Conscious Experiences Affect Brains? Exeter: Imprint Academic .
Dick Swaab, Wij Zijn Ons Brein, Publishing Centre, 2010.
Williams, Clifford (1980). Free Will and Determinism: A Dialogue. Indianapolis: Hackett Publishing Company
John Baer, James C. Kaufman, Roy F. Baumeister (2008). Are We Free? Psychology and Free Will. Oxford University Press, New York
George Musser, "Is the Cosmos Random? (Einstein's assertion that God does not play dice with the universe has been misinterpreted)", Scientific American, vol. 313, no. 3 (September 2015), pp. 88–93.
Causality
Concepts in ethics
Concepts in metaphysics
Philosophical problems
Philosophy of life
Philosophy of religion
Religious ethics | Free will | [
"Physics"
] | 21,035 | [] |
47,922 | https://en.wikipedia.org/wiki/Determinism | Determinism is the philosophical view that all events in the universe, including human decisions and actions, are causally inevitable. Deterministic theories throughout the history of philosophy have developed from diverse and sometimes overlapping motives and considerations. Like eternalism, determinism focuses on particular events rather than the future as a concept. Determinism is often contrasted with free will, although some philosophers claim that the two are compatible. A more extreme antonym of determinism is indeterminism, or the view that events are not deterministically caused but rather occur due to random chance.
Historically, debates about determinism have involved many philosophical positions and given rise to multiple varieties or interpretations of determinism. One topic of debate concerns the scope of determined systems. Some philosophers have maintained that the entire universe is a single determinate system, while others identify more limited determinate systems. Another common debate topic is whether determinism and free will can coexist; compatibilism and incompatibilism represent the opposing sides of this debate.
Determinism should not be confused with the self-determination of human actions by reasons, motives, and desires. Determinism is about interactions which affect cognitive processes in people's lives. It is about the cause and the result of what people have done. Cause and result are always bound together in cognitive processes. It assumes that if an observer has sufficient information about an object or human being, such an observer might be able to predict every consequent move of that object or human being. Determinism rarely requires that perfect prediction be practically possible.
Varieties
Determinism may commonly refer to any of the following viewpoints.
Causal
Causal determinism, sometimes synonymous with historical determinism (a sort of path dependence), is "the idea that every event is necessitated by antecedent events and conditions together with the laws of nature." However, it is a broad enough term to consider that: "One's deliberations, choices, and actions will often be necessary links in the causal chain that brings something about. In other words, even though our deliberations, choices, and actions are themselves determined like everything else, it is still the case, according to causal determinism, that the occurrence or existence of yet other things depends upon our deliberating, choosing and acting in a certain way."

Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. The relation between events and the origin of the universe may not be specified. Causal determinists believe that there is nothing in the universe that has no cause or is self-caused.
Causal determinism has also been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions. In the case of nomological determinism, these conditions are considered events also, implying that the future is determined completely by preceding events—a combination of prior states of the universe and the laws of nature. These conditions can also be considered metaphysical in origin (such as in the case of theological determinism).
Nomological
Nomological determinism is the most common form of causal determinism and is generally synonymous with physical determinism. This is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws and that every occurrence inevitably results from prior events. Nomological determinism is sometimes illustrated by the thought experiment of Laplace's demon. Although sometimes called scientific determinism, the term is a misnomer for nomological determinism.
Necessitarianism
Necessitarianism is a metaphysical principle that denies all mere possibility and maintains that there is only one possible way for the world to exist. Leucippus claimed there are no uncaused events and that everything occurs for a reason and by necessity.
Predeterminism
Predeterminism is the idea that all events are determined in advance. The concept is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. In the case of predeterminism, this chain of events has been pre-established, and human actions cannot interfere with the outcomes of this pre-established chain.
Predeterminism can be categorized as a specific type of determinism when it is used to mean pre-established causal determinism. It can also be used interchangeably with causal determinism—in the context of its capacity to determine future events. However, predeterminism is often considered as independent of causal determinism.
Biological
The term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form of biological determinism, sometimes called genetic determinism. Biological determinism is the idea that all human behaviors, beliefs, and desires are fixed by human genetic nature.
Friedrich Nietzsche explained that human beings are "determined" by their bodies and are subject to its passions, impulses, and instincts.
Fatalism
Fatalism, a form of teleological determinism, is normally distinguished from determinism. Fatalism is the idea that everything is fated to happen, resulting in humans having no control over their future. Fate has arbitrary power, and does not necessarily follow any causal or deterministic laws. Types of fatalism include hard theological determinism and the idea of predestination, where there is a God who determines all that humans will do. This may be accomplished either through foreknowledge of their actions, achieved through omniscience, or by predetermining their actions.
Theological
Theological determinism is a form of determinism that holds that all events that happen are either preordained (i.e., predestined) to happen by a monotheistic deity, or are destined to occur given its omniscience. Two forms of theological determinism exist, referred to as strong and weak theological determinism.
Strong theological determinism is based on the concept of a creator deity dictating all events in history: "everything that happens has been predestined to happen by an omniscient, omnipotent divinity."
Weak theological determinism is based on the concept of divine foreknowledge—"because God's omniscience is perfect, what God knows about the future will inevitably happen, which means, consequently, that the future is already fixed." There exist slight variations on this categorization, however. Some claim either that theological determinism requires predestination of all events and outcomes by the divinity—i.e., they do not classify the weaker version as theological determinism unless libertarian free will is assumed to be denied as a consequence—or that the weaker version does not constitute theological determinism at all.
With respect to free will, "theological determinism is the thesis that God exists and has infallible knowledge of all true propositions including propositions about our future actions," a more minimal criterion designed to encapsulate all forms of theological determinism.
Theological determinism can also be seen as a form of causal determinism, in which the antecedent conditions are the nature and will of God. Some have asserted that Augustine of Hippo introduced theological determinism into Christianity in 412 CE, whereas all prior Christian authors supported free will against Stoic and Gnostic determinism. However, there are many Biblical passages that seem to support the idea of some kind of theological determinism.
Adequate
Adequate determinism is the idea, because of quantum decoherence, that quantum indeterminacy can be ignored for most macroscopic events. Random quantum events "average out" in the limit of large numbers of particles (where the laws of quantum mechanics asymptotically approach the laws of classical mechanics). Stephen Hawking explains a similar idea: he says that the microscopic world of quantum mechanics is one of determined probabilities. That is, quantum effects rarely alter the predictions of classical mechanics, which are quite accurate (albeit still not perfectly certain) at larger scales. Something as large as an animal cell, then, would be "adequately determined" (even in light of quantum indeterminacy).
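The "averaging out" of microscopic randomness can be illustrated with a toy numerical sketch (purely illustrative; the ±1 events and trial counts below are arbitrary choices, not a physical model). In Python, the relative fluctuation of a sum of N random ±1 events shrinks roughly as 1/√N, so large aggregates behave almost deterministically:

import random

def relative_fluctuation(n_events, n_trials=100):
    """Mean of |sum of n_events random +/-1 outcomes| / n_events over many trials."""
    total = 0.0
    for _ in range(n_trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_events))
        total += abs(s) / n_events
    return total / n_trials

for n in (10, 1_000, 100_000):
    # The printed value shrinks roughly like 1/sqrt(n).
    print(n, round(relative_fluctuation(n), 5))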
Many-worlds
The many-worlds interpretation accepts the linear causal sets of sequential events with adequate consistency yet also suggests constant forking of causal chains creating "multiple universes" to account for multiple outcomes from single events. Meaning the causal set of events leading to the present are all valid yet appear as a singular linear time stream within a much broader unseen conic probability field of other outcomes that "split off" from the locally observed timeline. Under this model causal sets are still "consistent" yet not exclusive to singular iterated outcomes.
The interpretation sidesteps the exclusive retrospective causal chain problem of "could not have done otherwise" by suggesting "the other outcome does exist" in a set of parallel universe time streams that split off when the action occurred. This theory is sometimes described with the example of agent based choices but more involved models argue that recursive causal splitting occurs with all wave functions at play. This model is highly contested with multiple objections from the scientific community.
Philosophical varieties
Nature/nurture controversy
Although some of the above forms of determinism concern human behaviors and cognition, others frame themselves as an answer to the debate on nature and nurture. They will suggest that one factor will entirely determine behavior. As scientific understanding has grown, however, the strongest versions of these theories have been widely rejected as a single-cause fallacy. In other words, the modern deterministic theories attempt to explain how the interaction of both nature and nurture is entirely predictable. The concept of heritability has been helpful in making this distinction.
Biological determinism, sometimes called genetic determinism, is the idea that each of human behaviors, beliefs, and desires are fixed by human genetic nature.
Behaviorism involves the idea that all behavior can be traced to specific causes—either environmental or reflexive. John B. Watson and B. F. Skinner developed this nurture-focused determinism.
Cultural materialism, contends that the physical world impacts and sets constraints on human behavior.
Cultural determinism, along with social determinism, is the nurture-focused theory that the culture in which we are raised determines who we are.
Environmental determinism, also known as climatic or geographical determinism, proposes that the physical environment, rather than social conditions, determines culture. Supporters of environmental determinism often also support behavioral determinism. Key proponents of this notion have included Ellen Churchill Semple, Ellsworth Huntington, Thomas Griffith Taylor and possibly Jared Diamond, although his status as an environmental determinist is debated.
Determinism and prediction
Other "deterministic" theories actually seek only to highlight the importance of a particular factor in predicting the future. These theories often use the factor as a sort of guide or constraint on the future. They need not suppose that complete knowledge of that one factor would allow the making of perfect predictions.
Psychological determinism can mean that humans must act according to reason, but it can also be synonymous with some sort of psychological egoism. The latter is the view that humans will always act according to their perceived best interest.
Linguistic determinism proposes that language determines (or at least limits) the things that humans can think and say and thus know. The Sapir–Whorf hypothesis argues that individuals experience the world based on the grammatical structures they habitually use.
Economic determinism attributes primacy to economic structure over politics in the development of human history. It is associated with the dialectical materialism of Karl Marx.
Technological determinism is the theory that a society's technology drives the development of its social structure and cultural values.
Structural
Structural determinism is the philosophical view that actions, events, and processes are predicated on and determined by structural factors. Given any particular structure or set of estimable components, it is a concept that emphasizes rational and predictable outcomes. Chilean biologists Humberto Maturana and Francisco Varela popularized the notion, writing that a living system's general order is maintained via a circular process of ongoing self-referral, and thus its organization and structure defines the changes it undergoes. According to the authors, a system can undergo changes of state (alteration of structure without loss of identity) or disintegrations (alteration of structure with loss of identity). Such changes or disintegrations are not ascertained by the elements of the disturbing agent, as each disturbance will only trigger responses in the respective system, which in turn, are determined by each system's own structure.
On an individualistic level, what this means is that human beings as free and independent entities are triggered to react by external stimuli or change in circumstance. However, their own internal state and existing physical and mental capacities determine their responses to those triggers. On a much broader societal level, structural determinists believe that larger issues in the society—especially those pertaining to minorities and subjugated communities—are predominantly assessed through existing structural conditions, making change of prevailing conditions difficult, and sometimes outright impossible. For example, the concept has been applied to the politics of race in the United States of America and other Western countries such as the United Kingdom and Australia, with structural determinists lamenting structural factors for the prevalence of racism in these countries. Additionally, Marxists have conceptualized the writings of Karl Marx within the context of structural determinism as well. For example, Louis Althusser, a structural Marxist, argued that the state, in its political, economic, and legal structures, reproduces the discourse of capitalism, in turn, allowing for the burgeoning of capitalistic structures.
Proponents of the notion highlight the usefulness of structural determinism to study complicated issues related to race and gender, as it highlights often gilded structural conditions that block meaningful change. Critics call it too rigid, reductionist and inflexible. Additionally, they also criticize the notion for overemphasizing deterministic forces such as structure over the role of human agency and the ability of the people to act. These critics argue that politicians, academics, and social activists have the capability to bring about significant change despite stringent structural conditions.
With free will
Philosophers have debated both the truth of determinism and the truth of free will. This creates four possible positions. Compatibilism refers to the view that free will is, in some sense, compatible with determinism. The three incompatibilist positions deny this possibility. The hard incompatibilists hold that free will is incompatible with both determinism and indeterminism, the libertarians that determinism does not hold and free will might exist, and the hard determinists that determinism does hold and free will does not exist. The Dutch philosopher Baruch Spinoza was a determinist thinker, and argued that human freedom can be achieved through knowledge of the causes that determine desire and affections. He defined human servitude as the state of bondage of anyone who is aware of their own desires, but ignorant of the causes that determined them. However, the free or virtuous person becomes capable, through reason and knowledge, of being genuinely free, even as they are being "determined". For the Dutch philosopher, acting out of one's own internal necessity is genuine freedom, while being driven by exterior determinations is akin to bondage. Spinoza's thoughts on human servitude and liberty are respectively detailed in the fourth and fifth volumes of his work Ethics.
The standard argument against free will, according to philosopher J. J. C. Smart, focuses on the implications of determinism for free will. He suggests free will is denied whether determinism is true or not. He says that if determinism is true, all actions are predicted and no one is assumed to be free; however, if determinism is false, all actions are presumed to be random and as such no one seems free because they have no part in controlling what happens.
With the soul
Some determinists argue that materialism does not present a complete understanding of the universe, because while it can describe determinate interactions among material things, it ignores the minds or souls of conscious beings.
A number of positions can be delineated:
Immaterial souls are all that exist (idealism).
Immaterial souls exist and exert a non-deterministic causal influence on bodies (traditional free-will, interactionist dualism).
Immaterial souls exist but are part of a deterministic framework.
Immaterial souls exist, but exert no causal influence, free or determined (epiphenomenalism, occasionalism)
Immaterial souls do not exist – there is no mind–body dichotomy, and there is a materialistic explanation for intuitions to the contrary.
With ethics and morality
Another topic of debate is the implication that determinism has on morality.
Philosopher and incompatibilist Peter van Inwagen introduced this thesis, arguing that free will is required for moral judgments, as follows:
The moral judgment that X should not have been done implies that something else should have been done instead.
That something else should have been done instead implies that there was something else to do.
That there was something else to do, implies that something else could have been done.
That something else could have been done implies that there is free will.
If there is no free will to have done other than X we cannot make the moral judgment that X should not have been done.
History
Determinism was developed by the Greek philosophers during the 7th and 6th centuries BCE by the Pre-socratic philosophers Heraclitus and Leucippus, later Aristotle, and mainly by the Stoics. Some of the main philosophers who have dealt with this issue are Marcus Aurelius, Omar Khayyam, Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David Hume, Baron d'Holbach (Paul Heinrich Dietrich), Pierre-Simon Laplace, Arthur Schopenhauer, William James, Friedrich Nietzsche, Albert Einstein, Niels Bohr, Ralph Waldo Emerson and, more recently, John Searle, Ted Honderich, and Daniel Dennett.
Mecca Chiesa notes that the probabilistic or selectionistic determinism of B. F. Skinner comprised a wholly separate conception of determinism that was not mechanistic at all. Mechanistic determinism assumes that every event has an unbroken chain of prior occurrences, but a selectionistic or probabilistic model does not.
Western tradition
In the West, some elements of determinism have been expressed in Greece from the 6th century BCE by the Presocratics Heraclitus and Leucippus. The first notions of determinism appears to originate with the Stoics, as part of their theory of universal causal determinism. The resulting philosophical debates, which involved the confluence of elements of Aristotelian Ethics with Stoic psychology, led in the 1st–3rd centuries CE in the works of Alexander of Aphrodisias to the first recorded Western debate over determinism and freedom, an issue that is known in theology as the paradox of free will. The writings of Epictetus as well as middle Platonist and early Christian thought were instrumental in this development. Jewish philosopher Moses Maimonides said of the deterministic implications of an omniscient god: "Does God know or does He not know that a certain individual will be good or bad? If thou sayest 'He knows', then it necessarily follows that [that] man is compelled to act as God knew beforehand he would act, otherwise God's knowledge would be imperfect."
Newtonian mechanics
Determinism in the West is often associated with Newtonian mechanics/physics, which depicts the physical matter of the universe as operating according to a set of fixed laws. The "billiard ball" hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace's demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.
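The "billiard ball" picture can be made concrete with a minimal sketch (a toy calculation, not a model of any particular physical system; the initial values and the constant acceleration are arbitrary choices). Given the same initial state and the same law of motion, the computation below always returns the same final state:

def evolve(position, velocity, acceleration=-9.81, dt=0.001, steps=1000):
    """Deterministically integrate dv/dt = a and dx/dt = v with a simple Euler scheme."""
    for _ in range(steps):
        velocity += acceleration * dt
        position += velocity * dt
    return position, velocity

# Identical initial conditions always produce identical outcomes.
print(evolve(0.0, 10.0))
print(evolve(0.0, 10.0))  # same output as the line above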
Whether or not it is all-encompassing in so doing, Newtonian mechanics deals only with caused events; for example, if an object begins in a known position and is hit dead on by an object with some known velocity, then it will be pushed straight toward another predictable point. If it goes somewhere else, the Newtonians argue, one must question one's measurements of the original position of the object, the exact direction of the striking object, gravitational or other fields that were inadvertently ignored, etc. Then, they maintain, repeated experiments and improvements in accuracy will always bring one's observations closer to the theoretically predicted results. When dealing with situations on an ordinary human scale, Newtonian physics has been successful. But it fails as velocities become some substantial fraction of the speed of light and when interactions at the atomic scale are studied. Before the discovery of quantum effects and other challenges to Newtonian physics, "uncertainty" was always a term that applied to the accuracy of human knowledge about causes and effects, and not to the causes and effects themselves.
Newtonian mechanics, as well as any following physical theories, are results of observations and experiments, and so they describe "how it all works" within a tolerance. However, earlier Western scientists believed that if any logical connections are found between an observed cause and effect, there must also be some absolute natural laws behind them. Belief in perfect natural laws driving everything, instead of just describing what we should expect, led to searching for a set of universal simple laws that rule the world. This movement significantly encouraged deterministic views in Western philosophy, as well as the related theological views of classical pantheism.
Eastern tradition
Throughout history, the belief that the entire universe is a deterministic system subject to the will of fate or destiny has been articulated in both Eastern and Western religions, philosophy, music, and literature.
The ancient Arabs who inhabited the Arabian Peninsula before the advent of Islam used to profess a widespread belief in fatalism (ḳadar) alongside a fearful consideration for the sky and the stars as divine beings, which they held to be ultimately responsible for every phenomenon that occurs on Earth and for the destiny of humankind. Accordingly, they shaped their entire lives in accordance with their interpretations of astral configurations and phenomena.
In the I Ching and philosophical Taoism, the ebb and flow of favorable and unfavorable conditions suggests the path of least resistance is effortless (see: Wu wei). In the philosophical schools of the Indian Subcontinent, the concept of karma deals with similar philosophical issues to the Western concept of determinism. Karma is understood as a spiritual mechanism which causes the eternal cycle of birth, death, and rebirth (saṃsāra). Karma, either positive or negative, accumulates according to an individual's actions throughout their life, and at their death determines the nature of their next life in the cycle of Saṃsāra. Most major religions originating in India hold this belief to some degree, most notably Hinduism, Jainism, Sikhism, and Buddhism.
The views on the interaction of karma and free will are numerous, and diverge from each other. For example, in Sikhism, god's grace, gained through worship, can erase one's karmic debts, a belief which reconciles the principle of karma with a monotheistic god one must freely choose to worship. Jainists believe in compatibilism, in which the cycle of Saṃsara is a completely mechanistic process, occurring without any divine intervention. The Jains hold an atomic view of reality, in which particles of karma form the fundamental microscopic building material of the universe.
Ājīvika
In ancient India, the Ājīvika school of philosophy founded by Makkhali Gosāla (around 500 BCE), otherwise referred to as "Ājīvikism" in Western scholarship, upheld the Niyati ("Fate") doctrine of absolute fatalism or determinism, which negates the existence of free will and karma, and is therefore considered one of the nāstika or "heterodox" schools of Indian philosophy. The oldest descriptions of the Ājīvika fatalists and their founder Gosāla can be found both in the Buddhist and Jaina scriptures of ancient India. The predetermined fate of all sentient beings and the impossibility of achieving liberation (mokṣa) from the eternal cycle of birth, death, and rebirth (saṃsāra) was the major distinctive philosophical and metaphysical doctrine of this heterodox school of Indian philosophy, counted among the other Śramaṇa movements that emerged in India during the Second urbanization (600–200 BCE).
Buddhism
Buddhist philosophy contains several concepts which some scholars describe as deterministic to various levels. However, the direct analysis of Buddhist metaphysics through the lens of determinism is difficult, due to the differences between European and Buddhist traditions of thought.
One concept which is argued to support a hard determinism is the doctrine of dependent origination (pratītyasamutpāda) in the early Buddhist texts, which states that all phenomena (dharma) are necessarily caused by some other phenomenon, which it can be said to be dependent on, like links in a massive, never-ending chain; the basic principle is that all things (dharmas, phenomena, principles) arise in dependence upon other things, which means that they are fundamentally "empty" or devoid of any intrinsic, eternal essence and therefore are impermanent. In traditional Buddhist philosophy, this concept is used to explain the functioning of the eternal cycle of birth, death, and rebirth (saṃsāra); all thoughts and actions exert a karmic force that attaches to the individual's consciousness, which will manifest through reincarnation and results in future lives. In other words, righteous or unrighteous actions in one life will necessarily cause good or bad responses in another future life or more lives. The early Buddhist texts and later Tibetan Buddhist scriptures associate dependent arising with the fundamental Buddhist doctrines of emptiness (śūnyatā) and non-self (anattā).
Another Buddhist concept which many scholars perceive to be deterministic is the doctrine of non-self (anattā). In Buddhism, attaining enlightenment involves one realizing that neither in humans nor any other sentient beings there is a fundamental core of permanent being, identity, or personality which can be called the "soul", and that all sentient beings (including humans) are instead made of several, constantly changing factors which bind them to the eternal cycle of birth, death, and rebirth (saṃsāra). Sentient beings are composed of the five aggregates of existence (skandha): matter, sensation, perception, mental formations, and consciousness. In the Saṃyutta Nikāya of the Pāli Canon, the historical Buddha is recorded as saying that "just as the word 'chariot' exists on the basis of the aggregation of parts, even so the concept of 'being' exists when the five aggregates are available." The early Buddhist texts outline different ways in which dependent origination is a middle way between different sets of "extreme" views (such as "monist" and "pluralist" ontologies or materialist and dualist views of mind-body relation). In the Kaccānagotta Sutta of the Pāli Canon (SN 12.15, parallel at SA 301), the historical Buddha stated that "this world mostly relies on the dual notions of existence and non-existence" and then explains the right view as follows:
Some Western scholars argue that the concept of non-self necessarily disproves the ideas of free will and moral responsibility. If there is no autonomous self, in this view, and all events are necessarily and unchangeably caused by others, then no type of autonomy can be said to exist, moral or otherwise. However, other scholars disagree, claiming that the Buddhist conception of the universe allows for a form of compatibilism. Buddhism perceives reality occurring on two different levels: the ultimate reality, which can only be truly understood by the enlightened ones, and the illusory or false reality of the material world, which is considered to be "real" or "true" by those who are ignorant about the nature of metaphysical reality; i.e., those who still haven't achieved enlightenment. Therefore, Buddhism perceives free will as a notion belonging to the illusory belief in the unchanging self or personhood that pertains to the false reality of the material world, while concepts like non-self and dependent origination belong to the ultimate reality; the transition between the two can be truly understood, Buddhists claim, by one who has attained enlightenment.
Modern scientific perspective
Generative processes
Although it was once thought by scientists that any indeterminism in quantum mechanics occurred at too small a scale to influence biological or neurological systems, there are indications that nervous systems are influenced by quantum indeterminism due to chaos theory. It is unclear what implications this has for the problem of free will given various possible reactions to the problem in the first place. Many biologists do not grant determinism: Christof Koch, for instance, argues against it, and in favour of libertarian free will, by making arguments based on generative processes (emergence). Other proponents of emergentist or generative philosophy, cognitive sciences, and evolutionary psychology argue that a certain form of determinism (not necessarily causal) is true. They suggest instead that an illusion of free will is experienced due to the generation of infinite behaviour from the interaction of a finite, deterministic set of rules and parameters. Thus the unpredictability of the emerging behaviour from deterministic processes leads to a perception of free will, even though free will as an ontological entity does not exist.
As an illustration, the strategy board games chess and Go have rigorous rules in which no information (such as cards' face-values) is hidden from either player and no random events (such as dice-rolling) happen within the game. Yet chess, and especially Go with its extremely simple deterministic rules, can still have an extremely large number of unpredictable moves. When chess is simplified to 7 or fewer pieces, however, endgame tables are available that dictate which moves to play to achieve a perfect game. This implies that, given a less complex environment (with the original 32 pieces reduced to 7 or fewer pieces), a perfectly predictable game of chess is possible. In this scenario, the winning player can announce that a checkmate will happen within a given number of moves, assuming a perfect defense by the losing player, or fewer moves if the defending player chooses sub-optimal moves as the game progresses into its inevitable, predicted conclusion. By this analogy, it is suggested, the experience of free will emerges from the interaction of finite rules and deterministic parameters that generate nearly infinite and practically unpredictable behavioural responses. In theory, if all these events could be accounted for, and there were a known way to evaluate these events, the seemingly unpredictable behaviour would become predictable. Another hands-on example of generative processes is John Horton Conway's playable Game of Life. Nassim Taleb is wary of such models, and coined the term "ludic fallacy."
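A minimal implementation of Conway's Game of Life makes the same point: the update rule is completely deterministic, yet the long-term behaviour of even a small starting pattern is hard to foresee without actually running it. The Python sketch below is only an illustration; the grid-free representation and the "glider" starting pattern are arbitrary choices:

def step(live_cells):
    """Apply one deterministic Game of Life update to a set of (x, y) live cells."""
    neighbour_counts = {}
    for (x, y) in live_cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    neighbour_counts[cell] = neighbour_counts.get(cell, 0) + 1
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a "glider"
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # simple deterministic rules, non-obvious long-term behaviour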
Compatibility with the existence of science
Certain philosophers of science argue that, while causal determinism (in which everything including the brain/mind is subject to the laws of causality) is compatible with minds capable of science, fatalism and predestination are not. These philosophers make the distinction that causal determinism means that each step is determined by the step before and therefore allows sensory input from observational data to determine what conclusions the brain reaches, while under fatalism, in which the intermediate steps do not connect an initial cause to the results, it would be impossible for observational data to correct false hypotheses. This is often combined with the argument that if the brain had fixed views and the arguments were mere after-constructs with no causal effect on the conclusions, science would have been impossible and the use of arguments would have been a meaningless waste of energy with no persuasive effect on brains with fixed views.
Mathematical models
Many mathematical models of physical systems are deterministic. This is true of most models involving differential equations (notably, those measuring rate of change over time). Mathematical models that are not deterministic because they involve randomness are called stochastic. Because of sensitive dependence on initial conditions, some deterministic models may appear to behave non-deterministically; in such cases, a deterministic interpretation of the model may not be useful due to numerical instability and a finite amount of precision in measurement. Such considerations can motivate the consideration of a stochastic model even though the underlying system is governed by deterministic equations.
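Sensitive dependence on initial conditions can be seen in a one-line deterministic model. The sketch below iterates the logistic map x ← r·x·(1 − x); the parameter r = 4 and the two starting values are arbitrary illustrative choices. Although every step is fully determined, starting points that differ by only one billionth soon end up far apart:

def logistic_orbit(x, r=4.0, steps=50):
    """Iterate the deterministic logistic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.200000000)
b = logistic_orbit(0.200000001)  # initial difference of 0.000000001
print(a, b)  # the two deterministic trajectories no longer resemble each other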
Quantum and classical mechanics
Day-to-day physics
Since the beginning of the 20th century, quantum mechanics—the physics of the extremely small—has revealed previously concealed aspects of events. Before that, Newtonian physics—the physics of everyday life—dominated. Taken in isolation (rather than as an approximation to quantum mechanics), Newtonian physics depicts a universe in which objects move in perfectly determined ways. At the scale where humans exist and interact with the universe, Newtonian mechanics remain useful, and make relatively accurate predictions (e.g. calculating the trajectory of a bullet). But whereas in theory, absolute knowledge of the forces accelerating a bullet would produce an absolutely accurate prediction of its path, modern quantum mechanics casts reasonable doubt on this main thesis of determinism.
Quantum realm
Quantum physics works differently in many ways from Newtonian physics. Physicist Aaron D. O'Connell explains that understanding the universe, at such small scales as atoms, requires a different logic than day-to-day life does. O'Connell does not deny that it is all interconnected: the scale of human existence ultimately does emerge from the quantum scale. Quantum mechanics is the product of a careful application of the scientific method, logic and empiricism. The Heisenberg uncertainty principle is frequently confused with the observer effect. The uncertainty principle describes how precisely we may measure the position and momentum of a particle at the same time—if we increase the accuracy in measuring one quantity, we are forced to lose accuracy in measuring the other.
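Stated quantitatively (in its standard modern form, given here for reference), the principle bounds the product of the standard deviations of position and momentum:

σx · σp ≥ ħ/2

where ħ is the reduced Planck constant; improving the precision of one measurement therefore forces a loss of precision in the other.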
This is where statistical mechanics comes into play, and where physicists begin to require rather unintuitive mental models: A particle's path cannot be exactly specified in its full quantum description. "Path" is a classical, practical attribute in everyday life, but one that quantum particles do not possess in any meaningful way. The probabilities discovered in quantum mechanics do nevertheless arise from measurement (of the perceived path of the particle). As Stephen Hawking explains, the result is not traditional determinism, but rather determined probabilities. In some cases, a quantum particle may indeed trace an exact path, and the probability of finding the particle in that path is one (certain to be true). In fact, as far as prediction goes, the quantum development is at least as predictable as the classical motion, but the key is that it describes wave functions that cannot be easily expressed in ordinary language. As far as the thesis of determinism is concerned, these probabilities, at least, are quite determined. These findings from quantum mechanics have found many applications, and allow people to build transistors and lasers. Put another way: personal computers, Blu-ray players and the Internet all work because humankind discovered the determined probabilities of the quantum world.
On the topic of predictable probabilities, the double-slit experiments are a popular example. Photons are fired one-by-one through a double-slit apparatus at a distant screen. They do not arrive at any single point, nor even the two points lined up with the slits (the way it might be expected of bullets fired by a fixed gun at a distant target). Instead, the light arrives in varying concentrations at widely separated points, and the distribution of its collisions with the target can be calculated reliably. In that sense the behavior of light in this apparatus is predictable, but there is no way to predict where in the resulting interference pattern any individual photon will make its contribution (although, there may be ways to use weak measurement to acquire more information without violating the uncertainty principle).
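The contrast between unpredictable individual arrivals and a fully determined distribution can be sketched numerically. The toy Python model below is only an illustration; the slit separation, the wavelength, and the narrow angular window are arbitrary values, and the ideal two-slit distribution proportional to cos²(π·d·sin θ / λ) ignores the single-slit envelope. Each simulated photon lands at an unpredictable angle, but the histogram of many arrivals is determined:

import math
import random

def arrival_angle(d=2e-6, wavelength=5e-7):
    """Draw one arrival angle (radians) from the ideal two-slit distribution."""
    while True:
        theta = random.uniform(-0.2, 0.2)  # small angular window around the centre
        p = math.cos(math.pi * d * math.sin(theta) / wavelength) ** 2
        if random.random() < p:            # rejection sampling
            return theta

angles = [arrival_angle() for _ in range(10_000)]
# Individual angles are unpredictable; their distribution is not.
bins = [0] * 20
for t in angles:
    bins[min(19, int((t + 0.2) / 0.02))] += 1
print(bins)  # the counts trace out the fixed interference pattern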
Some (including Albert Einstein) have argued that the inability to predict any more than probabilities is simply due to ignorance. The idea is that, beyond the conditions and laws that can be observed or deduced, there are also hidden factors or "hidden variables" that determine absolutely in which order photons reach the detector screen. They argue that the course of the universe is absolutely determined, but that humans are screened from knowledge of the determinative factors. So, they say, it only appears that things proceed in a merely probabilistically determinative way. In actuality, they proceed in an absolutely deterministic way.
John S. Bell criticized Einstein's work in his famous Bell's theorem, which, under a strict set of assumptions, demonstrates that quantum mechanics can make statistical predictions that would be violated if local hidden variables really existed. A number of experiments have tried to verify such predictions, and so far they do not appear to be violated. Current experiments continue to verify the result, including the 2015 "Loophole Free Test" that plugged all known sources of error and the 2017 "Cosmic Bell Test" experiment that used cosmic data streaming from different directions toward the Earth, precluding the possibility the sources of data could have had prior interactions.
Bell's theorem has been criticized from the perspective of its strict set of assumptions. A foundational assumption to quantum mechanics is the Principle of locality. To abandon this assumption would require the construction of a non-local hidden variable theory. Therefore, it is possible to augment quantum mechanics with non-local hidden variables to achieve a deterministic theory that is in agreement with experiment. An example is the Bohm interpretation of quantum mechanics. Bohm's Interpretation, though, violates special relativity and it is highly controversial whether or not it can be reconciled without giving up on determinism.
Another foundational assumption to quantum mechanics is that of free will, which has been argued to be foundational to the scientific method as a whole. Bell acknowledged that abandoning this assumption would both allow for the maintenance of determinism as well as locality. This perspective is known as superdeterminism, and is defended by some physicists such as Sabine Hossenfelder and Tim Palmer.
More advanced variations on these arguments include quantum contextuality, by Bell, Simon B. Kochen and Ernst Specker, which argues that hidden variable theories cannot be "sensible", meaning that the values of the hidden variables inherently depend on the devices used to measure them.
This debate is relevant because there are possibly specific situations in which the arrival of an electron at a screen at a certain point and time would trigger one event, whereas its arrival at another point would trigger an entirely different event (e.g. see Schrödinger's cat—a thought experiment used as part of a deeper debate).
In his 1939 address "The Relation between Mathematics and Physics", Paul Dirac pointed out that purely deterministic classical mechanics cannot explain the cosmological origins of the universe; today the early universe is modeled quantum mechanically.
Thus, quantum physics casts reasonable doubt on the traditional determinism of classical, Newtonian physics in so far as reality does not seem to be absolutely determined. This was the subject of the famous Bohr–Einstein debates between Einstein and Niels Bohr and there is still no consensus.
Adequate determinism (see Varieties, above) is the reason that Stephen Hawking called libertarian free will "just an illusion".
References
Notes
Bibliography
Daniel Dennett (2003) Freedom Evolves. Viking Penguin.
John Earman (2007) "Aspects of Determinism in Modern Physics" in Butterfield, J., and Earman, J., eds., Philosophy of Physics, Part B. North Holland: 1369–1434.
George Ellis (2005) "Physics and the Real World", Physics Today.
Epstein, J.M. and Axtell, R. (1996) Growing Artificial Societies: Social Science from the Bottom Up. MIT Press.
Harris, James A. (2005) Of Liberty and Necessity: The Free Will Debate in Eighteenth-Century British Philosophy. Clarendon Press.
Albert Messiah, Quantum Mechanics, English translation by G. M. Temmer of Mécanique Quantique, 1966, John Wiley and Sons, vol. I, chapter IV, section III.
Nowak, A., Vallacher, R.R., Tesser, A., and Borkowski, W. (2000) "Society of Self: The emergence of collective properties in self-structure", Psychological Review 107.
Further reading
George Musser, "Is the Cosmos Random? (Einstein's assertion that God does not play dice with the universe has been misinterpreted)", Scientific American, vol. 313, no. 3 (September 2015), pp. 88–93.
External links
Stanford Encyclopedia of Philosophy entry on Causal Determinism
Determinism in History from the Dictionary of the History of Ideas
Philosopher Ted Honderich's Determinism web resource
Determinism on Information Philosopher
The Society of Natural Science
Determinism and Free Will in Judaism
Snooker, Pool, and Determinism
Metaphysical theories
Causality | Determinism | [
"Physics"
] | 8,686 | [] |
47,949 | https://en.wikipedia.org/wiki/Union%20%28set%20theory%29 | In set theory, the union (denoted by ∪) of a collection of sets is the set of all elements in the collection. It is one of the fundamental operations through which sets can be combined and related to each other.
A nullary union refers to a union of zero (0) sets; it is by definition equal to the empty set.
For explanation of the symbols used in this article, refer to the table of mathematical symbols.
Union of two sets
The union of two sets A and B is the set of elements which are in A, in B, or in both A and B. In set-builder notation,
A ∪ B = {x : x ∈ A or x ∈ B}.
For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6, 7} then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. A more elaborate example (involving two infinite sets) is:
A = {x : x is an even integer greater than 1}
B = {x : x is an odd integer greater than 1}
A ∪ B = {2, 3, 4, 5, 6, ...}
As another example, the number 9 is not contained in the union of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of even numbers {2, 4, 6, 8, 10, ...}, because 9 is neither prime nor even.
Sets cannot have duplicate elements, so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}. Multiple occurrences of identical elements have no effect on the cardinality of a set or its contents.
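For instance, Python's built-in set type implements exactly this operation, and shows the collapse of duplicates:

```python
A = {1, 3, 5, 7}
B = {1, 2, 4, 6, 7}

print(A | B)                   # {1, 2, 3, 4, 5, 6, 7}
print(A.union(B) == (A | B))   # True: method and operator forms agree

# Shared elements appear only once in the union.
print({1, 2, 3} | {2, 3, 4})   # {1, 2, 3, 4}
```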
Algebraic properties
Binary union is an associative operation; that is, for any sets A, B, and C,
A ∪ (B ∪ C) = (A ∪ B) ∪ C.
Thus, the parentheses may be omitted without ambiguity: either of the above can be written as A ∪ B ∪ C. Also, union is commutative, so the sets can be written in any order.
The empty set is an identity element for the operation of union. That is, A ∪ ∅ = A, for any set A. Also, the union operation is idempotent: A ∪ A = A. All these properties follow from analogous facts about logical disjunction.
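These identities are easy to spot-check mechanically. A small Python sketch (random finite sets of small integers, purely illustrative) asserts each property above:

```python
import random

def rand_set() -> set:
    """A random subset of {0, ..., 9}."""
    return {random.randrange(10) for _ in range(random.randrange(6))}

for _ in range(1000):
    A, B, C = rand_set(), rand_set(), rand_set()
    assert A | (B | C) == (A | B) | C   # associativity
    assert A | B == B | A               # commutativity
    assert A | set() == A               # empty set is the identity
    assert A | A == A                   # idempotence
print("all four identities held on every sampled triple")
```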
Intersection distributes over union:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C),
and union distributes over intersection:
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
The power set of a set U, together with the operations given by union, intersection, and complementation, is a Boolean algebra. In this Boolean algebra, union can be expressed in terms of intersection and complementation by the formula
A ∪ B = (Aᶜ ∩ Bᶜ)ᶜ,
where the superscript ᶜ denotes the complement in the universal set U. Alternatively, intersection can be expressed in terms of union and complementation in a similar way: A ∩ B = (Aᶜ ∪ Bᶜ)ᶜ. These two expressions together are called De Morgan's laws.
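As a quick concrete check of De Morgan's laws, the following Python snippet takes complements relative to a small universal set (the particular sets are arbitrary):

```python
U = set(range(10))              # a small universal set
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(S: set) -> set:
    return U - S

# Union expressed via intersection and complementation, and vice versa.
assert A | B == complement(complement(A) & complement(B))
assert A & B == complement(complement(A) | complement(B))
print("De Morgan's laws hold for this example")
```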
Finite unions
One can take the union of several sets simultaneously. For example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C.
A finite union is the union of a finite number of sets; the phrase does not imply that the union set is a finite set.
Arbitrary unions
The most general notion is the union of an arbitrary collection of sets, sometimes called an infinitary union. If M is a set or class whose elements are sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A. In symbols:
x ∈ ⋃M ⟺ there exists A ∈ M such that x ∈ A.
This idea subsumes the preceding sections—for example, A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set.
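In Python, the same idea can be expressed for any finite collection of sets; note that the union of an empty collection comes out as the empty set, matching the convention above:

```python
M = [{1, 2}, {2, 3}, {10}]

print(set().union(*M))   # {1, 2, 3, 10}
print(set().union())     # set(): the union of the empty collection

# Membership mirrors the definition: x is in the union of M
# iff x is in A for at least one A in M.
x = 3
print(any(x in A for A in M))   # True
```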
Notations
The notation for the general concept can vary considerably. For a finite union of sets S₁, S₂, ..., Sₙ one often writes S₁ ∪ S₂ ∪ ... ∪ Sₙ or ⋃_{i=1}^{n} Sᵢ. Various common notations for arbitrary unions include ⋃M, ⋃_{A∈M} A, and ⋃_{i∈I} Aᵢ. The last of these notations refers to the union of the collection {Aᵢ : i ∈ I}, where I is an index set and Aᵢ is a set for every i ∈ I. In the case that the index set I is the set of natural numbers, one uses the notation ⋃_{i=1}^{∞} Aᵢ, which is analogous to that of the infinite sums in series.
When the symbol "∪" is placed before other symbols (instead of between them), it is usually rendered as a larger size.
Notation encoding
In Unicode, union is represented by the character ∪ (U+222A). In TeX, ∪ is rendered from \cup and ⋃ is rendered from \bigcup.
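As a small illustration, a LaTeX fragment using these commands might look like the following (document preamble omitted; the particular formulas are arbitrary):

```latex
% Binary union between symbols, and the large n-ary operator.
$A \cup B \cup C$
\[ \bigcup_{A \in M} A \qquad \bigcup_{i \in I} A_i \qquad \bigcup_{i=1}^{\infty} A_i \]
```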
See also
Alternation (formal language theory) − the union of sets of strings
Notes
External links
Infinite Union and Intersection at ProvenMath De Morgan's laws formally proven from the axioms of set theory.
Basic concepts in set theory
Boolean algebra
Operations on sets
Set theory | Union (set theory) | [
"Mathematics"
] | 879 | [
"Boolean algebra",
"Set theory",
"Mathematical logic",
"Fields of abstract algebra",
"Basic concepts in set theory",
"Operations on sets"
] |
47,964 | https://en.wikipedia.org/wiki/Horse%20teeth | Horse teeth refers to the dentition of equine species, including horses and donkeys. Equines are both heterodontous and diphyodontous, which means that they have teeth in more than one shape (there are up to five shapes of tooth in a horse's mouth), and have two successive sets of teeth, the deciduous ("baby teeth") and permanent sets.
As grazing animals, good dentition is essential to survival. Continued grazing creates specific patterns of wear, which can be used along with patterns of eruption to estimate the age of the horse.
Types of teeth
A fully developed horse of around five years of age will have between 36 and 44 teeth. All equines are heterodontous, which means that they have different shaped teeth for different purposes.
All horses have twelve incisors at the front of the mouth, used primarily for cutting food, most often grass, whilst grazing. They are also used as part of a horse's attack or defence against predators, or as part of establishing social hierarchy within the herd.
Immediately behind the front incisors is the interdental space, where no teeth grow from the gums. This is where the bit is placed, if used, when horses are ridden.
Behind the interdental space, all horses also have twelve premolars and twelve molars, also known as cheek teeth or jaw teeth. These teeth chew food bitten off by incisors, prior to swallowing.
In addition to the incisors, premolars and molars, some, but not all, horses may also have canine teeth and wolf teeth. A horse can have between zero and four canine teeth, also known as tusks (tushes), with a clear prevalence towards male horses (stallions and geldings) who normally have a full set of four. Fewer than 28% of female horses (mares) have any canine teeth. Those that do normally only have one or two, and these may be only partially erupted.
Between 13 and 32% of horses, split equally between male and female, also have wolf teeth, which are not related to canine teeth, but are vestigial first premolars. Wolf teeth are more common on the upper jaw, and can present a problem for horses in work, as they can interfere with the bit. They may also make it difficult during equine dentistry work to rasp the second premolar, and are frequently removed.
Tooth eruption
Horses are diphyodontous, erupting a set of first deciduous teeth (also known as milk, temporary, or baby teeth) soon after birth, with these being replaced by permanent teeth by the age of approximately five years old. The horse will normally have 24 deciduous teeth, emerging in pairs, and eventually pushed out by the permanent teeth, which normally number between 36 and 40. As the deciduous teeth are pushed up, they are termed "caps". Caps will eventually shed on their own, but may cause discomfort when still loose, requiring extraction.
It is possible to estimate the age of a young horse by observing the pattern of teeth in the mouth, based on which teeth have erupted, although the difference between breeds and individuals make precise dating impossible.
All teeth are normally erupted by the age of five, at which point the horse is said to have a "full mouth", but the actual age this occurs will depend on the individual horse, and also by breed, with certain breeds having different average eruption times. For instance, in Shetland ponies the middle and corner incisor tend to erupt late, and in both draft horses and miniature horses, the permanent middle and corner incisors are usually late appearing.
Tooth wear
Horse teeth often wear in specific patterns, based on the way the horse eats its food, and these patterns are often used to conjecture on the age of the horse after it has developed a full mouth. As with aging through observing tooth eruption, this can be imprecise, and may be affected by diet, natural abnormalities, and vices such as cribbing.
The importance of dentition in assessing the age of horses led to veterinary dentistry techniques being used as a method of fraud, with owners and traders altering the teeth of horses to mimic the tooth shapes and characteristics of horses younger than the actual age of the equine.
Equine teeth have evolved to wear against the tooth above or below as the horse chews, thus preventing excess growth. The upper jaw is wider than the lower one. In some cases, sharp edges can occur on the outside of the upper molars and the inside of the lower molars, as they are unopposed by an opposite grinding surface. These sharp edges can reduce chewing efficiency of the teeth, interfere with jaw motion, and in extreme cases can cut the tongue or cheek, making eating and riding painful.
In the wild, natural foodstuffs may have allowed teeth to wear more evenly. Because many modern horses often graze on lusher, softer forage than their ancestors, and are also frequently fed grain or other concentrated feed, it is possible some natural wear may be reduced in the domestic horse. On the other hand, this same uneven wear in the wild may have at times contributed to a shorter lifespan. Modern wild horses live an estimated 20 years at most, while a domesticated horse, depending on breed and management, quite often lives 25 to 30 years. Thus, because domesticated animals also live longer, they may simply have more time to develop dental issues that their wild forebears never faced.
Typical wear patterns
The following are typical wear patterns in horses.
Cups
Cups are hollow, rectangular or oval depressions on the tables of the permanent incisors that wear away over time. In general, cups are worn away on the lower central incisors by age 6, the lower intermediates by age 7, and the corners at age 8. The cups of the upper central incisors are worn away by 9 years of age, the upper intermediate incisors by 10, and the corners by 11. When all the cups are gone, the horse is referred to as smooth mouthed. In the past, dishonest dealers would "bishop" the teeth of old horses, usually by burning an indentation into the teeth, to imitate cups, but this practice was detectable by the absence of the white edge of enamel that always surrounds the real cup, by the shape of the teeth, and by other marks of age about the animal.
Pulp mark/dental star
After some wear has occurred on the teeth, the central pulp cavity is exposed, and the tooth is marked by a "dental star" or "pulp mark" that is smaller than the incisor cups. These begin as a dark line in front of the dental cup, which grows in size and becomes more oval in shape as the cups are worn away. Dental stars are usually first visible at age 6, on the animal's lower central incisors, and very visible by age 8. They appear on the lower intermediates by age 9, and on the other incisors between the ages of 10–12 years.
Hook/notch
A hook appears on the upper corner incisor around age 7, and disappears by age 8. It reappears around age 13, again disappearing about 1 year later.
Galvayne's groove
The Galvayne's groove occurs on the upper corner incisor, producing a vertical line, and is helpful in approximating the age of older horses. It generally first appears at age 10, reaches halfway down the tooth by age 15, and extends the full length of the tooth at age 20. It then begins to disappear, usually half-way gone by age 25, and completely gone by age 30. The groove is named after horse expert Sydney Frederick Galvayne, who claimed he had invented the use of the groove to age a horse; however, it had been described earlier by his teacher, Professor Hamilton Sample.
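Since the groove's milestones form a rough timetable, they can be encoded as a simple lookup. The sketch below is illustrative only: the stage labels and function name are invented for this example, and real aging by teeth is imprecise and best left to an equine professional.

```python
# Rough age lookup from the Galvayne's groove milestones described above.
GROOVE_STAGES = {
    "not yet visible":          "under about 10 years",
    "appearing at the gumline": "about 10 years",
    "halfway down the tooth":   "about 15 years",
    "full length of the tooth": "about 20 years",
    "upper half gone":          "about 25 years",
    "completely gone":          "about 30 years or more",
}

def estimate_age(groove_stage: str) -> str:
    """Return the approximate age for an observed groove stage."""
    return GROOVE_STAGES.get(groove_stage, "unknown stage")

print(estimate_age("halfway down the tooth"))  # about 15 years
```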
Angle and shape of the incisors
As the horse ages, the angle of the incisors generally becomes more acute, slanting forward. The incisors gradually change their form as the horse ages, becoming round, oval, and then triangular.
Continuous eruption and loss
A horse's incisors, premolars, and molars, once fully developed, continue to erupt as the grinding surface is worn down through chewing. A young adult horse's teeth are typically 4.5–5 inches long, with the majority of the crown remaining below the gumline in the dental socket. The rest of the tooth slowly emerges from the jaw, erupting about 1/8 inch each year, as the horse ages. When the animal reaches old age, the crowns of the teeth are very short and the teeth are often lost altogether. Very old horses, if lacking molars to chew, may need soft feeds to maintain adequate levels of nutrition.
Older horses may appear to have a lean, shallow lower jaw, as the roots of the teeth have begun to disappear. Younger horses may seem to have a lumpy jaw, due to the presence of permanent teeth within the jaw.
The teeth and the bit
If a bit is fitted to a horse, along with a bridle, the normally metal bar of the bit lies in the interdental space between the incisors (or canines, where present) and premolars. If the bridle is adjusted so that the bit rests too low, or too high, it may push against the teeth and cause discomfort.
Sometimes, a "bit seat" is filed in the first premolar, where the surface is rounded so that the flesh of the cheek is not pushed into the sharp edge of the tooth, making riding more comfortable for the horse, although the practice is controversial.
Dental problems
Like all mammals, horses can develop a variety of dental problems, with a variety of dental services available to minimise problems through reactive or prophylactic intervention.
Equine dentistry can be undertaken by a vet or by a trained specialist such as an equine dental technician, or in some cases is performed by lay persons, including owners or trainers.
Problems with dentition for horses in work can result in poor performance or behavioural issues, and must be ruled out as the cause of negative behaviours in horse. Most authorities recommend regular checks by a professional, normally six monthly or annually.
Problems due to wear patterns
The wear of the teeth can cause problems if it is uneven, with sharp points appearing, especially on the outer edge of the molars, the inner edge of the premolars and the posterior end of the last molars on the bottom jaw.
Other specific conditions relating to wear include the following. A "step mouth" occurs when one molar or premolar grows longer than the others in that jaw, normally because the corresponding tooth in the opposite jaw is missing or broken and therefore could not wear down its opposite. A "wave mouth" occurs when at least two molars or premolars are higher than the others, so that, when viewed from the side, the grinding surfaces produce a wave-like pattern rather than a straight line; this leads to periodontal disease and excessive wear of some of the teeth. A "shear mouth" occurs when the grinding surfaces of the molars or premolars are severely sloped on each individual tooth (so the inner side of the teeth is much higher or lower than the outer side), severely affecting chewing.
Horses may also experience an overbite/brachygnathism (parrot mouth), or an underbite/prognathism (sow mouth, monkey mouth). These may affect how the incisors wear. In severe cases, the horse's ability to graze may be affected. Horses also sometimes suffer from equine malocclusion where there is a misalignment between their upper and lower jaws.
The curvature of the incisors may also vary from the normal, straight bite. The curvature may be dorsal or ventral, resulting from an incisor malocclusion (e.g., ventral = overbite, dorsal = underbite). The curvature may also be diagonal, stemming from a wear pattern, offset incisors, or pain in the cheek teeth (rather than the incisors), which causes the horse to chew in one direction over the other.
Other dental problems
Other common problems include abscessed, loose, infected, or cracked teeth, retained deciduous teeth, and plaque buildup. Wolf teeth may also cause problems, and are many times removed, as are retained caps.
Prevention of dental problems
To help prevent dental problems, it is recommended to get a horse's teeth checked by a vet or equine dental technician every 6 months. However, regular checks may be needed more often for individuals, especially if the horse is very young or very old. Additionally, the horse's teeth should be checked if it is having major performance problems or showing any of the above signs of a dental problem.
Many horses require floating (or rasping) of teeth once every 12 months, although this, too, is variable and dependent on the individual horse. The first four or five years of a horse's life are when the most growth-related changes occur and hence frequent checkups may prevent problems from developing. Equine teeth get harder as the horse gets older and may not have rapid changes during the prime adult years of life, but as horses become aged, particularly from the late teens on, additional changes in incisor angle and other molar growth patterns often necessitate frequent care. Once a horse is in its late 20s or early 30s, molar loss becomes a concern. Floating involves a veterinarian wearing down the surface of the teeth, usually to remove sharp points or to balance out the mouth. However, the veterinarian must be careful not to take off too much of the surface, or there will not be enough roughened area on the tooth to allow it to properly tear apart food. Additionally, too much work on a tooth can cause thermal damage (which could lead to having to extract the tooth), or expose the sensitive interior of the tooth (pulp). A person without a veterinary degree who performs this service is called a horse floater or equine dental technician.
In popular culture
The common folk saying "don't look a gift horse in the mouth" is taken from the era when gifting horses was common. The teeth of a horse are a good indication of the age of the animal, and it was considered rude to inspect the teeth of a gifted animal as you would one that you were purchasing. The saying is used in reference to being an ungrateful gift receiver.
References
Further reading
The Household Cyclopedia of General Information, published in 1881.
Illustrated Atlas of Clinical Equine Anatomy and Common Disorders of the Horse, Vol. II. Riegel, Ronald J., and Susan E. Hakola. Equistar Publications, Limited. Copyright 1999.
Equus. "Healthy Teeth, Healthy Horse". November 2006, pp 31–39.
Sound Mouth-Sound Horse, The Gager Method of Equine Dental Care. Gager, E.R., and Rhodes, Bob. Emerson Publishing Company. Copyright 1983.
Horse anatomy
Teeth
Senescence in non-human organisms | Horse teeth | [
"Biology"
] | 3,143 | [
"Senescence",
"Senescence in non-human organisms"
] |
47,966 | https://en.wikipedia.org/wiki/Cattle%20age%20determination | The age of cattle is determined chiefly by examination of the teeth, and less perfectly by the horn rings or the length of the tail brush; due to bang-tailing, which is the act of cutting the long hairs at the tip of the tail short to identify the animal after management practices, the last method is the least reliable.
Teeth method
Cattle are placed in a cattle crush to restrain them prior to inspecting the mouth and counting the teeth of each animal.
The temporary teeth are in part erupted at birth, and all the incisors are erupted in twenty days; the first, second and third pairs of temporary molars are erupted in thirty days; the teeth have grown large enough to touch each other by the sixth month. Temporary incisors or "milk" teeth are smaller than the permanent incisors.
Cattle have thirty-two teeth, including six incisors or biting teeth and two canines in the front on the bottom jaw. The canine teeth are not pointed but look like incisors. The incisor teeth meet with the thick hard dental pad of the upper jaw. Cattle have six premolars and six molars on both top and bottom jaws for a total of twenty-four molars. The teeth of cattle are suited primarily for grinding, and they use their rough tongues to grasp grass and then nip it off between their incisors and the dental pad.
There is controversy on the reliability of attempting to tell the age of cattle by their teeth, as rate of wear can be affected by the forage that is grazed. Drought or grazing on sandy country will also affect rate of wear.
The following is a guide:
12 months – All the calf teeth are in place.
15 months – Centre permanent incisors appear.
18 months – Centre permanent incisors showing some wear.
24 months – First intermediates up.
30 months – Six broad incisors up.
36 months – Six broad incisors showing wear.
39 months – Corner teeth up
42 months – Eight broad incisors showing wear.
The development is quite complete at from five to six years. At that time the border of the incisors has been worn away a little below the level of the grinders. At six years, the first grinders are beginning to wear, and are on a level with the incisors. At eight years, the wear of the first grinders is very apparent. At ten or eleven years, the used surfaces of the teeth begin to bear a square mark surrounded by a white line, and this is pronounced on all the teeth by the twelfth year; between the twelfth and the fourteenth year this mark takes a round form.
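The guide above amounts to a lookup table, and a minimal sketch of its use might look like the following (ages in months; the table mirrors the guide, while the function itself is illustrative and subject to the wear-rate caveats already noted):

```python
# The dentition guide above, encoded as (age in months, stage) pairs.
DENTITION_GUIDE = [
    (12, "All the calf teeth are in place"),
    (15, "Centre permanent incisors appear"),
    (18, "Centre permanent incisors showing some wear"),
    (24, "First intermediates up"),
    (30, "Six broad incisors up"),
    (36, "Six broad incisors showing wear"),
    (39, "Corner teeth up"),
    (42, "Eight broad incisors showing wear"),
]

def describe(months: int) -> str:
    """Return the latest guide stage reached by the given age in months."""
    stage = "Younger than the first guide stage"
    for age, description in DENTITION_GUIDE:
        if months >= age:
            stage = description
    return stage

print(describe(31))  # Six broad incisors up
```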
It is a requirement in some locations that prime cattle have a dentition indication mark on them prior to auction. This is normally done by the vendor, or the stock agent. Fat cattle auctions in New South Wales, Australia identify the amount of teeth that prime animals have in the form of sprayed marks along the back. Thus two tooth cattle are marked on the wither, four tooth on the middle of the back and six tooth on their high bone (near tail). Milk and eight tooth cattle are not marked.
Horn method
The rings on the horns are less useful as guides. At ten or twelve months the first ring appears; at twenty months to two years the second; at thirty to thirty-two months the third ring; at forty to forty-six months the fourth ring; at fifty-four to sixty months the fifth ring, and so on. But by the fifth year the first three rings are indistinguishable, and by the eighth year all the rings are indistinguishable.
Tail brush method
The brush of the tail is only useful as a guide when assessing small, stunted or young cattle. A brush that is about fetlock length or longer is an indication that the beast is twelve months old or older. This method cannot be used on cattle which have been bang-tailed. Bang tailing is the act of cutting the long hairs at the tip of the tail short to act as a simple identifier of animals and is commonly used after a procedure has been performed on an individual animal that belongs to a large mob e.g. the mob is run through a race and each animal is vaccinated – immediately after being vaccinated the animal is bang-tailed so they are identified as vaccinated and will not be given a second dose of vaccine. This is useful when large numbers of animals are being processed by a group of individuals.
Other methods
Cattle age in a carcass is determined by checking the physiological skeletal maturity (ossification) of the tips or "buttons" of the thoracic vertebrae. The size and shape of the rib bones are important considerations, as well as the colour and texture of the flesh.
The use of number (year) branding, tattoos or ear tags with numbers or different colours are good methods of identifying the age of cattle, if they are used according to standards.
References
External links
Age Determination in Beef Cattle
Determining the Age of Cattle by Their Teeth
Using Dentition to Age Cattle
Cattle
Teeth
Senescence in non-human organisms | Cattle age determination | [
"Biology"
] | 1,046 | [
"Senescence",
"Senescence in non-human organisms"
] |
47,967 | https://en.wikipedia.org/wiki/Authentication | Authentication (from authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, determining the age of an artifact by carbon dating, or ensuring that a product or document is not counterfeit.
Methods
Authentication is relevant to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or in a certain place or period of history. In computer science, verifying a user's identity is often required to allow access to confidential data or systems.
Authentication can be considered to be of three types:
The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while they may not have evidence that every step in the supply chain was authenticated. Centralized authority-based trust relationships back most secure internet communication through known public certificate authorities; decentralized peer-based trust, also known as a web of trust, is used for personal services such as email or files and trust is established by known individuals signing each other's cryptographic key for instance.
The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical and spectroscopic analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.
Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.
In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well.
Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.
Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.
The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury and are also vulnerable to being separated from the artifact and lost.
In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access devices to allow system access. In this case, authenticity is implied but not guaranteed.
Consumer goods such as pharmaceuticals, perfume, and clothing can use all forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation. As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect from counterfeiters, including adding holograms, security rings, security threads and color shifting ink.
Authentication factors
The ways in which someone may be authenticated fall into three categories, based on what is known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity before being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.
Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified. The three factors (classes) and some of the elements of each factor are:
Knowledge: Something the user knows (e.g., a password, partial password, passphrase, personal identification number (PIN), challenge–response (the user must answer a question or pattern), security question).
Ownership: Something the user has (e.g., wrist band, ID card, security token, implanted device, cell phone with a built-in hardware token, software token, or cell phone holding a software token).
Inherence: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifiers).
Single-factor authentication
As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual's identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security.
Multi-factor authentication
Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors.
For example, using a bank card (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still a two-factor authentication.
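A minimal sketch of such a two-factor check in Python might combine a salted password hash (knowledge factor) with a time-based one-time code from a shared-secret token (ownership factor), following the widely used TOTP construction. The function names and parameters here are illustrative, not a vetted production design:

```python
import hashlib
import hmac
import struct
import time

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    """Knowledge factor: compare a salted PBKDF2 hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Ownership factor: time-based one-time code (RFC 6238 style)."""
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def two_factor_ok(password, salt, stored_hash, submitted_code, secret) -> bool:
    # Both comparisons use constant-time digest comparison to limit timing leaks.
    return (verify_password(password, salt, stored_hash)
            and hmac.compare_digest(submitted_code, totp(secret)))
```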
Authentication types
Strong authentication
The United States government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.
The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent and at least one factor must be "non-reusable and non-replicable", except in the case of an inherence factor and must also be incapable of being stolen off the Internet. In the European, as well as in the US-American understanding, strong authentication is very similar to multi-factor authentication or 2FA, but exceeding those with more rigorous requirements.
The FIDO Alliance has been striving to establish technical specifications for strong authentication.
Continuous authentication
Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method.
Recent research has shown the possibility of using smartphones sensors and accessories to extract some behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition. These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems.
Digital authentication
The term digital authentication, also known as electronic authentication or e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network.
The American National Institute of Standards and Technology (NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication:
Enrollment – an individual applies to a credential service provider (CSP) to initiate the enrollment process. After successfully proving the applicant's identity, the CSP allows the applicant to become a subscriber.
Authentication – After becoming a subscriber, the user receives an authenticator e.g., a token and credentials, such as a user name. He or she is then permitted to perform online transactions within an authenticated session with a relying party, where they must provide proof that he or she possesses one or more authenticators.
Life-cycle maintenance – the CSP is charged with the task of maintaining the user's credential over the course of its lifetime, while the subscriber is responsible for maintaining his or her authenticator(s).
The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.
Product authentication
Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods, such as electronics, music, apparel, and counterfeit medications, have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting.
In their anti-counterfeiting technology guide, the EUIPO Observatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media.
Products or their packaging can include a variable QR Code. A QR Code alone is easy to verify but offers a weak level of authentication as it offers no protection against counterfeits unless scan data is analyzed at the system level to detect anomalies. To increase the security level, the QR Code can be combined with a digital watermark or copy detection pattern that are robust to copy attempts and can be authenticated with a smartphone.
A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature as an authentication chip can be mechanically attached and read through a connector to the host e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.
Packaging
Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products. Some package constructions are more difficult to copy and some have pilfer indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:
Taggant fingerprinting – uniquely coded microscopic materials that are verified from a database
Encrypted micro-particles – unpredictably placed markings (numbers, layers and colors) not visible to the human eye
Holograms – graphics printed on seals, patches, foils or labels and used at the point of sale for visual verification
Micro-printing – second-line authentication often used on currencies
Serialized barcodes
UV printing – marks only visible under UV light
Track and trace systems – use codes to link products to the database tracking system
Water indicators – become visible when contacted with water
DNA tracking – genes embedded onto labels that can be traced
Color-shifting ink or film – visible marks that switch colors or texture when tilted
Tamper evident seals and tapes – destructible or graphically verifiable at point of sale
2d barcodes – data codes that can be tracked
RFID chips
NFC chips
Information content
Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:
A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.
A shared secret, such as a passphrase, in the content of the message.
An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key (a minimal signing sketch follows this list).
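The following Python sketch illustrates the electronic-signature factor using the third-party cryptography package and Ed25519 keys; the choice of library and algorithm is an assumption made for illustration, and any maintained signature scheme would serve:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The private key signs; anyone holding the public key can verify that
# the message came from the key holder and was not altered in transit.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"This message originated from the key holder."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # passes silently when valid
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```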
The opposite problem is the detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
Literacy and literature authentication
In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is: does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process. It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.
History and state-of-the-art
Historically, fingerprints have been used as the most authoritative method of authentication, but court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints are easily spoofable, with British Telecom's top computer security official noting that "few" fingerprint readers have not already been tricked by one spoof or another. Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device.
In a computer data context, cryptographic methods have been developed which are not spoofable if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. However, it is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it may call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.
Authorization
The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set.
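The distinction can be made concrete with a toy sketch: one function answers "who are you?" and a separate function answers "what may you do?". All users, roles, and permissions below are invented, and a real system would verify a stored password hash (as in the earlier sketch) rather than a plaintext password:

```python
# Toy separation of authentication from authorization.
PERMISSIONS = {"reader": {"read"}, "editor": {"read", "write"}}
USERS = {"alice": {"password": "s3cret", "role": "editor"}}

def authenticate(username: str, password: str) -> bool:
    """Authentication: verify the claimed identity (toy plaintext check)."""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username: str, action: str) -> bool:
    """Authorization: check what an already-authenticated user may do."""
    role = USERS[username]["role"]
    return action in PERMISSIONS.get(role, set())

if authenticate("alice", "s3cret") and authorize("alice", "write"):
    print("alice may write")
```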
Access control
One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity.
See also
Authentication protocol
Electronic signature
References
External links
"New NIST Publications Describe Standards for Identity Credentials and Authentication Systems"
Access control
Applications of cryptography
Computer access control
Notary
Packaging | Authentication | [
"Engineering"
] | 3,891 | [
"Cybersecurity engineering",
"Computer access control"
] |
48,020 | https://en.wikipedia.org/wiki/Urban%20growth%20boundary | An urban growth boundary (UGB) is a regional boundary, set in an attempt to control urban sprawl by, in its simplest form, mandating that the area inside the boundary be used for urban development and the area outside be preserved in its natural state or used for agriculture. Legislating for an urban growth boundary is one way, among many others, of managing the major challenges posed by unplanned urban growth and the encroachment of cities upon agricultural and rural land.
An urban growth boundary circumscribes an entire urbanized area and is used by local governments as a guide to zoning and land use decisions, and by utilities and other infrastructure providers to improve efficiency through effective long term planning (e.g. optimising sewerage catchments, school districts, etc.).
If the area affected by the boundary includes multiple jurisdictions a special urban planning agency may be created by the state or regional government to manage the boundary. In a rural context, the terms town boundary, village curtilage or village envelope may be used to apply the same constraining principles. Some jurisdictions refer to the area within an urban growth boundary as an urban growth area (UGA) or urban service area, etc. While the names are different, the concept is the same.
History
Opposition to unregulated urban growth and ribbon development began to grow towards the end of the 19th century in England. The campaign group Campaign to Protect Rural England (CPRE) was formed in 1926 and exerted environmentalist pressure. Implementation of the notion dated from Herbert Morrison's 1934 leadership of the London County Council. It was first formally proposed by the Greater London Regional Planning Committee in 1935, "to provide a reserve supply of public open spaces and of recreational areas and to establish a green belt or girdle of open space".
New provisions for compensation in the 1947 Town and Country Planning Act allowed local authorities around the country to incorporate green belt proposals in their first development plans. The codification of Green Belt policy and its extension to areas other than London came with the historic Circular 42/55 inviting local planning authorities to consider the establishment of Green Belts.
In the United States, the first urban growth boundary was established in 1958, around the city of Lexington, Kentucky. Lexington's population was expanding, and city leaders were concerned about the survival of the surrounding horse farms closely tied to the city's cultural identity. The first statewide urban growth boundary policy was implemented in Oregon, under then governor Tom McCall, as part of the state's land-use planning program in the early 1970s. Tom McCall and his allies convinced the Oregon Legislature in 1973 to adopt the nation's first set of statewide land use planning laws. McCall, with the help of a unique coalition of farmers and environmentalists, persuaded the Legislature that the state's natural beauty and easy access to nature would be lost in a rising tide of urban sprawl. The new goals and guidelines required every city and county in Oregon to have a long-range plan addressing future growth that meets both local and statewide goals. The state of Tennessee passed the Tennessee Growth Policy Act
(TGPA) in 1998 as a response to the state's growing population, increased land development, and conflicts regarding municipal annexation. Tennessee in the year prior to the TGPA's passing was ranked 4th in the United States for fastest rates of land development.
Places with urban growth boundaries
Albania
Albania maintains the 'yellow line' system hailing from its socialist regime — limiting urban development beyond a designated boundary for all municipalities.
Australia
After the release of Melbourne 2030 in October 2002, the state government of Victoria legislated an urban growth boundary to limit urban sprawl. Since then, the urban growth boundary has been significantly increased a number of times.
Canada
In Canada, Vancouver, Toronto, Ottawa (the "Greenbelt"), London, and Waterloo, Ontario have boundaries to restrict growth and preserve greenspace. In Montreal and in the rest of Quebec, an agricultural protection law serves a similar purpose by restricting urban development to white zones and forbidding it on green zones.
Such boundaries are notably absent from cities such as Calgary, Edmonton, and Winnipeg that lie on flat plains and have expanded outwardly on former agricultural land. In British Columbia, the Agricultural Land Reserve serves a similar purpose adjacent to urban areas.
China
In 2017, Chinese Communist Party general secretary Xi Jinping delivered a speech in the 19th National Congress, in which he mentioned the delineation of "boundaries for urban development" (). On November 11, 2019, the Party General Office and the General Office of the State Council issued a guiding opinion, requiring "three control lines", including boundaries of urban development, to be designated in territorial spatial planning.
Hong Kong
In the plan of some new towns, green belts are included and growth cannot sprawl into or across the green belts. In addition a majority of new towns are surrounded by country parks.
France
In France, Rennes decided in the 1960s to maintain a green belt after its ring road. This green belt is named Ceinture verte.
New Zealand
Over the past two decades, Greater Auckland has been subject to a process of growth management facilitated through various strategic and legislative documents. An overarching objective has been to manage the growth of Auckland in a higher-density, centres-based manner consistent with the Auckland Regional Growth Strategy. Effect is given to that strategy through a series of layers of control including the Local Government Amendment (Auckland) Act, the Regional Policy Statement and then via District Plans. A key outcome of the process was the establishment of a metropolitan urban limit (MUL) or urban fence, which dictated the nature and extent of urban activities that could occur within the MUL and hence also dictated the relative values of land within the MUL.
Romania
Containment of built-up development is defined through a General Zoning Plan in the case of both urban and rural municipalities. The plan defines the 'intravilan' as the boundary within which built-up development is allowed. Oradea municipality provides tools to verify whether land parcels are located within the intravilan.
South Africa
An integrated development plan is required in terms of Chapter 5 of the national Municipal Systems Act No 32 of 2000 for all local authorities in South Africa. This plan would include a spatial development framework plan as one of its components, which would require larger metropolitan areas to indicate an urban edge beyond which urban-type development would be largely restricted or forbidden. The concept was introduced in the 1970s by the Natal Town and Regional Planning Commission of the Province of Natal (now known as KwaZulu-Natal) in the regional guide plans for Durban and Pietermaritzburg. The concept was at that stage termed an "urban fence".
United Kingdom
Controls to constrain the area of urban development existed in London as early as the 16th century. In the middle of the 20th century the countryside abutting the London conurbation was protected by the Metropolitan Green Belt. Further green belts were then created around other urban areas in the United Kingdom.
United States
The U.S. states of Oregon, Washington and Tennessee require cities and counties to establish urban/county growth boundaries.
Oregon restricts the development of farm and forest land. Oregon's law provides that the growth boundary be adjusted regularly to ensure an adequate supply of developable land; as of 2018 the boundary had been expanded more than thirty times since it was created in 1980. In the Metro area, the urban growth boundary has to have enough land within it for 20 years of growth; it is reviewed every six years. Other cities in Oregon seek regulatory review of proposed urban growth boundary expansions as needed. Some economic analyses have concluded that farmland lying immediately outside of Portland's growth boundary is worth as little as one-tenth as much as similar land located immediately on the other side; other analyses have found that the UGB has no effect on prices when some other variables are taken into account.
Washington's Growth Management Act, modeled on Oregon's earlier law and approved in 1990, affected mostly the state's more urban counties: as of 2018, Clark County, King County, Kitsap County, Pierce County, Snohomish County, and Thurston County.
In Tennessee, the boundaries are not used to control growth per se, but rather to define long-term city boundaries. (This was a response to a short-lived law in the late 1990s allowing almost any group of people in the state to form their own city). Every county in the state (except those with consolidated city-county governments) has to set a "planned growth area" for each of its municipalities, which defines how far out services such as water and sewer will go. In the Memphis area, annexation reserves have been created for all municipalities in the county. These are areas that have been set aside for a particular municipality to annex in the future. Cities cannot annex land outside of these reserves, so in effect the urban growth boundaries are along the borders of these annexation reserves. Additionally, new cities are only allowed to incorporate in areas determined to be planned for urban growth.
California requires each county to have a Local Agency Formation Commission, which sets urban growth boundaries for each city and town in the county.
States such as Texas use the delineation of extraterritorial jurisdictional boundaries to map out future city growth with the idea of minimizing competitive annexations rather than controlling growth.
Notable U.S. cities surrounded by UGBs include Portland, Oregon; Boulder, Colorado; Honolulu, Hawaii; Virginia Beach, Virginia; Lexington, Kentucky; Seattle, Washington; Knoxville, Tennessee; and San Jose, California. Urban growth boundaries also exist in Miami-Dade County, Florida and the Minneapolis–Saint Paul metropolitan area of Minnesota. In Miami-Dade, it is referred to as the Urban Development Boundary (UDB), and is generally intended to protect against continued sprawl into, and drainage of, the Everglades. Portland, Oregon is required to have an urban growth boundary which contains a mandated minimum supply of vacant land.
Urban growth boundaries have come under an increasing amount of scrutiny in the past 10 years as housing prices have substantially risen, especially on the West Coast of the U.S. By limiting the supply of developable land, critics argue, UGBs increase the price of existing developable and already-developed land. As a result, they theorize, housing on that land becomes more expensive. In Portland, Oregon, for example, the housing boom of the previous four years drove the growth-management authority to substantially increase the UGB in 2004. While some point to affordability for this action, in reality it was in response to Oregon State law. By law, Metro, the regional government, is required to maintain a 20-year supply of land within the boundary. Even with the addition of several thousand acres (several km2) housing prices continued to rise at record-matching paces. Supporters of UGBs point out that Portland's housing market is still more affordable than other West Coast cities, and housing prices have increased across the country.
See also
Community separator
Land use planning
Prime farmland
Urban open space
Urban rural fringe
References
Further reading
Urban planning
Town and country planning in the United Kingdom
Local government in the United Kingdom
Housing in the United Kingdom
Urbanization | Urban growth boundary | [
"Engineering"
] | 2,293 | [
"Urban planning",
"Architecture"
] |
48,029 | https://en.wikipedia.org/wiki/Valerie%20Solanas | Valerie Jean Solanas (April 9, 1936 – April 25, 1988) was an American radical feminist known for the SCUM Manifesto, which she self-published in 1967, and her attempt to murder artist Andy Warhol in 1968.
On June 3, 1968, Solanas went to The Factory, shot Warhol and art critic Mario Amaya, and attempted to shoot Warhol's manager, Fred Hughes. Solanas was charged with attempted murder, assault, and illegal possession of a firearm. After her release, she continued to promote the SCUM Manifesto. She died in 1988 of pneumonia in San Francisco.
Early life
Valerie Solanas was born in 1936 in Ventnor City, New Jersey, to Louis Solanas and Dorothy Marie Biondo. Her father was a bartender and her mother a dental assistant. She had a younger sister, Judith Arlene Solanas Martinez. Her father was born in Montreal, Quebec, Canada, to parents who immigrated from Spain. Her mother was an Italian-American of Genoan and Sicilian descent born in Philadelphia.
Solanas reported that her father regularly sexually abused her. Her parents divorced when she was young, and her mother remarried shortly afterwards. Solanas disliked her stepfather and began rebelling against her mother, becoming a truant. As a child, she wrote insults for children to use on one another, for the cost of a dime. She beat up a girl in high school who was bothering a younger boy, and also hit a nun.
Because of her rebellious behavior, Solanas' mother sent her to be raised by her grandparents in 1949. Solanas reported that her grandfather was a violent alcoholic who often beat her. When she was aged 15, she left her grandparents and became homeless. In 1953, Solanas gave birth to a son, fathered by a married sailor. The child, named David, was taken away and she never saw him again.
Despite this, Solanas graduated from high school on time and earned a degree in psychology from the University of Maryland, College Park, where she was in the Psi Chi Honor Society. While at the University of Maryland, she hosted a call-in radio show where she gave advice on how to combat men. Solanas was an open lesbian, despite the conservative cultural climate of the 1950s.
Solanas attended the University of Minnesota's Graduate School of Psychology, where she worked in the animal research laboratory, before dropping out and moving to attend Berkeley for a few courses. It was during this time that she began writing the SCUM Manifesto.
New York City and the Factory
In the mid-1960s, Solanas moved to New York City and supported herself through begging and prostitution. In 1965, she wrote two works: an autobiographical short story, "A Young Girl's Primer on How to Attain the Leisure Class", and a play, Up Your Ass, about a young prostitute. According to James Martin Harding, the play is "based on a plot about a woman who 'is a man-hating hustler and panhandler' and who ... ends up killing a man." Harding describes it as more a "provocation than ... a work of dramatic literature" and "rather adolescent and contrived". The short story was published in Cavalier magazine in July 1966. Up Your Ass remained unpublished until 2014.
In 1967, Solanas called pop artist Andy Warhol at his studio, The Factory, and asked him to produce Up Your Ass. According to Warhol, he thought the title was "wonderful" and he invited her to come over with it. He accepted the script for review, told Solanas it was "well typed", and promised to read it. However, when he read the script he thought it was so pornographic that it must have been a police trap. Solanas later contacted Warhol about the script and when she was told that he had lost it, she started demanding money. She was staying at the Chelsea Hotel and told Warhol that she needed money for rent so he offered to pay her $25 to appear in his film I, a Man (1967).
In her role in I, a Man, Solanas leaves the film's title character, played by Tom Baker, to fend for himself, explaining, "I gotta go beat my meat" as she exits the scene. She was satisfied with her experience working with Warhol and her performance in the film, and brought Maurice Girodias, the founder of Olympia Press, to see it. Girodias described her as being "very relaxed and friendly with Warhol". Solanas also had a nonspeaking role in Warhol's film Bike Boy (1967).
SCUM Manifesto
In 1967, Solanas self-published her best-known work, the SCUM Manifesto, a scathing critique of patriarchal culture. The manifesto opens by declaring life in society an utter bore, with no aspect of it relevant to women, and by calling on women to overthrow the government, eliminate the money system, institute complete automation, and eliminate the male sex.
Some authors have argued that the Manifesto is a parody and satirical work targeting patriarchy. According to Harding, Solanas described herself as "a social propagandist", but she denied that the work was "a put on" and insisted that her intent was "dead serious". The Manifesto has been translated into over a dozen languages and is excerpted in several feminist anthologies.
While living at the Chelsea Hotel, Solanas introduced herself to Girodias, a fellow resident of the hotel. In August 1967, Girodias and Solanas signed an informal contract stating that she would give Girodias her "next writing, and other writings". In exchange, Girodias paid her $500. Solanas took this to mean that Girodias would own her work. She told Paul Morrissey that "everything I write will be his. He's done this to me .... He's screwed me!" Solanas intended to write a novel based on the SCUM Manifesto and believed that a conspiracy was behind Warhol's failure to return the Up Your Ass script. She suspected that he was coordinating with Girodias to steal her work.
Shooting
According to an unquoted source in The Outlaw Bible of American Literature, on June 3, 1968, Solanas reportedly arrived at the Hotel Chelsea and asked for Girodias at the desk, only to be told he was gone for the weekend. She remained at the hotel for three hours before heading to the Grove Press, where she asked for Barney Rosset, who was also not available. In her 2014 biography of Solanas, Breanne Fahs argues that it is unlikely that she appeared at the Chelsea Hotel looking for Girodias, speculating that Girodias may have fabricated the account in order to boost sales for the SCUM Manifesto, which he had published.
Fahs states that "the more likely story ... places Valerie at the Actors Studio at 432 West Forty-Fourth Street early that morning". Actress Sylvia Miles states that Solanas appeared at the Actors Studio looking for Lee Strasberg, asking to leave a copy of Up Your Ass for him. Miles said that Solanas "had a different look, a bit tousled, like somebody whose appearance is the last thing on her mind". Miles told Solanas that Strasberg would not be in until the afternoon, accepted the script, and then "shut the door because I knew she was trouble. I didn't know what sort of trouble, but I knew she was trouble."
Fahs records that Solanas then traveled to producer Margo Feiden's (then Margo Eden) residence in Crown Heights, Brooklyn, as she believed that Feiden would be willing to produce Up Your Ass. As related to Fahs, Solanas talked to Feiden for almost four hours, trying to convince her to produce the play and discussing her vision for a world without men. Throughout this time, Feiden repeatedly refused to produce the play. According to Feiden, Solanas then pulled out her gun, and when Feiden again refused to commit to producing the play, she responded, "Yes, you will produce the play because I'll shoot Andy Warhol and that will make me famous and the play famous, and then you'll produce it." As she was leaving Feiden's residence, Solanas handed Feiden a partial copy of an earlier draft of the play and other personal papers.
Fahs describes how Feiden then "frantically called her local police precinct, Andy Warhol's precinct, police headquarters in Lower Manhattan, and the offices of Mayor John Lindsay and Governor Nelson Rockefeller to report what happened and inform them that Solanas was on her way at that very moment to shoot Andy Warhol". In some instances, the police responded that "You can't arrest someone because you believe she is going to kill Andy Warhol", and even asked Feiden, "Listen lady, how would you know what a real gun looked like?" In a 2009 interview with James Barron of The New York Times, Feiden said that she knew Solanas intended to kill Warhol, but could not prevent it. (A New York Times assistant Metro editor responded to an online comment regarding the story, saying that the Times "does not present the account as definitive".)
Solanas proceeded to the Factory and waited outside. Morrissey arrived and asked her what she was doing there, and she replied, "I'm waiting for Andy to get money." Morrissey tried to get rid of her by telling her that Warhol was not coming in that day, but she told him she would wait. Later, Solanas went up into the studio. Morrissey told her again that Warhol was not coming in and that she had to leave. She left but rode the elevator up and down until Warhol finally boarded it.
Solanas entered The Factory with Warhol, who complimented her on her appearance, as she was uncharacteristically wearing makeup. Morrissey told her to leave, threatening to "beat the hell" out of her and throw her out otherwise. The phone rang and Warhol answered while Morrissey went to the bathroom. While Warhol was on the phone, Solanas fired at him three times. Her first two shots missed, but the third went through his spleen, stomach, liver, esophagus, and lungs. She then shot art critic Mario Amaya in the hip. Solanas further tried to shoot Fred Hughes, Warhol's manager, but her gun jammed. Hughes asked her to leave, which she did, leaving behind a paper bag with her address book on a table. Warhol was taken to Columbus–Mother Cabrini Hospital, where he underwent a successful five-hour operation.
Later that day, Solanas turned herself in to police, gave up her gun, and confessed to the shooting, telling an officer that Warhol "had too much control in my life". She was fingerprinted and charged with felonious assault and possession of a deadly weapon. The next morning, the New York Daily News ran the front-page headline: "Actress Shoots Andy Warhol". Solanas demanded a retraction of the statement that she was an actress. The Daily News changed the headline in its later edition and added a quote from Solanas stating, "I'm a writer, not an actress."
At her arraignment in Manhattan Criminal Court, Solanas denied shooting Warhol because he would not produce her play but said "it was for the opposite reason", that "he has a legal claim on my works". She told the judge that "it's not often that I shoot somebody. I didn't do it for nothing. Warhol had tied me up, lock, stock, and barrel. He was going to do something to me which would have ruined me." She declared that she wanted to represent herself and she insisted that she "was right in what I did! I have nothing to regret!" The judge struck Solanas' comments from the court record and had her admitted to Bellevue Hospital for psychiatric observation.
Trial
After a cursory evaluation, Solanas was declared mentally unstable and transferred to the prison ward of Elmhurst Hospital. She appeared at New York Supreme Court on June 13, 1968. Florynce Kennedy represented her and asked for a writ of habeas corpus, arguing that Solanas was being held inappropriately at Elmhurst. The judge denied the motion and Solanas returned to Elmhurst. On June 28, Solanas was indicted on charges of attempted murder, assault, and illegal possession of a firearm. She was declared "incompetent" in August and sent to Matteawan State Hospital for the Criminally Insane. That same month, Olympia Press published the SCUM Manifesto with essays by Girodias and Krassner.
In January 1969, Solanas underwent psychiatric evaluation and was diagnosed with chronic paranoid schizophrenia. In June, she was deemed fit to stand trial. She represented herself without an attorney and pleaded guilty to "reckless assault with intent to harm". Solanas was sentenced to three years in prison, with one year of time served.
Media response
The shooting of Warhol propelled Solanas into the public spotlight, prompting a flurry of commentary and opinions in the media. Robert Marmorstein, writing in The Village Voice, declared that Solanas "has dedicated the remainder of her life to the avowed purpose of eliminating every single male from the face of the earth". Norman Mailer called her the "Robespierre of feminism". Historian Alice Echols writes that members of New York Radical Women knew "next to nothing" about Solanas until her 1968 shooting of Warhol, but that afterward, Solanas's case became a cause célèbre among radical feminists, and SCUM Manifesto became "obligatory reading".
Ti-Grace Atkinson, the New York chapter president of the National Organization for Women (NOW), described Solanas as "the first outstanding champion of women's rights" and "a 'heroine' of the feminist movement", and "smuggled [her manifesto] ... out of the mental hospital where Solanas was confined". According to Betty Friedan, the NOW board rejected Atkinson's statement. Atkinson left NOW and founded another feminist organization. According to Friedan, "the media continued to treat Ti-Grace as a leader of the women's movement, despite its repudiation of her". Kennedy, another NOW member, called Solanas "one of the most important spokeswomen of the feminist movement."
English professor Dana Heller argued that Solanas was "very much aware of feminist organizations and activism", but "had no interest in participating in what she often described as 'a civil disobedience luncheon club.'" Heller also stated that Solanas could "reject mainstream liberal feminism for its blind adherence to cultural codes of feminine politeness and decorum which the SCUM Manifesto identifies as the source of women's debased social status".
After Solanas was released from the New York State Prison for Women in 1971, she stalked Warhol and others over the telephone. In November 1971, Solanas was arrested again for aggravated assault after threatening Barney Rosset, editor of Evergreen Review. She was subsequently institutionalized several times and then drifted into obscurity.
Later life and death
Solanas may have intended to write an eponymous autobiography. In a 1977 Village Voice interview, she announced a book with her name as the title. The book, possibly intended as a parody, was supposed to deal with the "conspiracy" that led to her imprisonment. In a corrective 1977 Village Voice interview, Solanas said the book would not be autobiographical other than a small portion and that it would be about many things, include proof of statements in the manifesto, and would "deal intensively with the subject of bullshit", but she said nothing about parody.
In the mid-1970s, according to Heller, Solanas was "apparently homeless" in New York City, "continued to defend her political beliefs and the SCUM Manifesto", and "actively promoted" her new Manifesto revision. In the late 1980s, Ultra Violet tracked down Solanas in northern California and interviewed her over the phone. According to Ultra Violet, Solanas had changed her name to Onz Loh and stated that the August 1968 version of the Manifesto had many errors, unlike her own printed version of October 1967, and that the book had not sold well. Solanas said that until she was informed by Violet, she was unaware of Warhol's death in 1987.
On April 25, 1988, at the age of 52, Valerie Solanas died of pneumonia at the Bristol Hotel in the Tenderloin district of San Francisco. A building superintendent at the hotel, not on duty that night, had a vague memory of Solanas: "Once, he had to enter her room, and he saw her typing at her desk. There was a pile of typewritten pages beside her. What she was writing and what happened to the manuscript remain a mystery." Her mother burned all her belongings posthumously.
Legacy
Popular culture
Composer Pauline Oliveros released "To Valerie Solanas and Marilyn Monroe in Recognition of Their Desperation" in 1970. In the work, Oliveros seeks to explore how, "Both women seemed to be desperate and caught in the traps of inequality: Monroe needed to be recognized for her talent as an actress. Solanas wished to be supported for her own creative work."
Actress Lili Taylor played Solanas in the film I Shot Andy Warhol (1996), which focused on Solanas's assassination attempt on Warhol (played by Jared Harris). Taylor won Special Recognition for Outstanding Performance at the Sundance Film Festival for her role. The film's director, Mary Harron, requested permission to use songs by The Velvet Underground but was denied by Lou Reed, who feared that Solanas would be glorified in the film. Six years before the film's release, Reed and John Cale included a song about Solanas, "I Believe", on their concept album about Warhol, Songs for Drella (1990). In "I Believe", Reed sings, "I believe life's serious enough for retribution ... I believe being sick is no excuse. And I believe I would've pulled the switch on her myself." Reed believed Solanas was to blame for Warhol's death from a gallbladder infection twenty years after she shot him.
Up Your Ass was rediscovered in 1999 and produced in 2000 by George Coates Performance Works in San Francisco. The copy Warhol had lost was found in a trunk of lighting equipment owned by Billy Name. Coates learned about the rediscovered manuscript while at an exhibition at The Andy Warhol Museum marking the 30th anniversary of the shooting. Coates turned the piece into a musical with an all-female cast. Coates consulted with Solanas' sister, Judith, while writing the piece, and sought to create a "very funny satirist" out of Solanas, not just showing her as Warhol's attempted assassin.
Solanas' life has inspired three plays. Valerie Shoots Andy (2001), by Carson Kreitzer, starred two actors playing a younger (Heather Grayson) and an older (Lynne McCollough) Solanas. Tragedy in Nine Lives (2003), by Karen Houppert, examined the encounter between Solanas and Warhol as a Greek tragedy and starred Juliana Francis as Solanas. In 2011, Pop!, a musical by Maggie-Kate Coleman and Anna K. Jacobs, focused mainly on Warhol (played by Tom Story). Rachel Zampelli played Solanas and sang "Big Gun", described as the "evening's strongest number" by The Washington Post.
Swedish author Sara Stridsberg wrote a semi-fictional novel about Solanas called Drömfakulteten ('The Dream Faculty'), published in 2006. The book's narrator visits Solanas toward the end of her life at the Bristol Hotel. Stridsberg was awarded the Nordic Council's Literature Prize for the book. The novel was later translated into English and published under the title Valerie, or, The Faculty of Dreams: A Novel in 2019.
In 2006 Solanas was featured in the eleventh episode of the second season of the Adult Swim show The Venture Bros. as part of a group called The Groovy Gang. The group was a parody of the Scooby Gang from Scooby-Doo and was made up of parodies of Solanas (Velma), Ted Bundy (Fred), David Berkowitz (Shaggy), Patty Hearst (Daphne), and Groovy (Scooby). In the episode she is voiced by Joanna Adler. Most of her lines in the episode are quotes from the SCUM Manifesto.
Solanas was featured in a 2017 episode of the FX series American Horror Story: Cult, "Valerie Solanas Died for Your Sins: Scumbag". She was played by Lena Dunham. The episode portrayed Solanas as the instigator of most of the Zodiac Killer murders.
Influence and analysis
Author James Martin Harding explained that, by declaring herself independent from Warhol, after her arrest she "aligned herself with the historical avant-garde's rejection of the traditional structures of bourgeois theater" and that her anti-patriarchal "militant hostility ... pushed the avant-garde in radically new directions". Harding believed that Solanas' assassination attempt on Warhol was its own theatrical performance. At the shooting, she left on a table at the Factory a paper bag containing a gun, her address book, and a sanitary napkin. Harding stated that leaving behind the sanitary napkin was part of the performance, and called "attention to basic feminine experiences that were taboo and tacitly elided within avant-garde circles".
Feminist philosopher Avital Ronell compared Solanas to an array of people: Lorena Bobbitt, a "girl Nietzsche", Medusa, the Unabomber, and Medea. Ronell believed that Solanas was threatened by the hyper-feminine women of the Factory that Warhol liked and felt lonely because of the rejection she felt due to her own butch androgyny. She believed Solanas was ahead of her time, living in a period before feminist and lesbian activists such as the Guerrilla Girls and the Lesbian Avengers.
Solanas has also been credited with instigating radical feminism. Catherine Lord wrote that "the feminist movement would not have happened without Valerie Solanas". Lord believed that the reissuing of the SCUM Manifesto and the disowning of Solanas by "women's liberation politicos" triggered a wave of radical feminist publications. According to Vivian Gornick, many of the women's liberation activists who initially distanced themselves from Solanas changed their minds a year later, developing the first wave of radical feminism. At the same time, perceptions of Warhol were transformed from largely nonpolitical into political martyrdom because the motive for the shooting was political, according to Harding and Victor Bockris. Solanas' idiosyncratic views on gender are a focus of Andrea Long Chu's 2019 book, Females.
Fahs describes Solanas as a contradiction that "alienates her from the feminist movement", arguing that Solanas never wanted to be "in movement" but nevertheless fractured the feminist movement by provoking NOW members to disagree about her case. Many contradictions are seen in Solanas' lifestyle as a lesbian who sexually serviced men, her claim to be asexual, a rejection of queer culture, and a non-interest in working with others despite a dependency on others. Fahs also brings into question the contradictory stories of Solanas' life. She is described as a victim, a rebel, and a desperate loner, yet her cousin says she worked as a waitress in her late 20s and 30s, not primarily as a prostitute, and friend Geoffrey LaGear said she had a "groovy childhood". Solanas also kept in touch with her father throughout her life, despite claiming that he sexually abused her. Fahs believes that Solanas embraced these contradictions as a key part of her identity.
In 2018, The New York Times started a series of delayed obituaries of significant individuals whose importance the paper's obituary writers had not recognized at the time of their deaths. In June 2020, they started a series of obituaries on LGBTQ individuals, and on June 26, they profiled Solanas.
Works
Up Your Ass (1965)
"A Young Girl's Primer on How to Attain the Leisure Class", Cavalier (1966)
SCUM Manifesto (1967)
Notes
References
Bibliography
External links
Valerie Solanas The Defiant Life of the Woman Who Wrote SCUM (and Shot Andy Warhol), by Breanne Fahs (2014)
About Valerie Solanas , by Freddie Baer (1999)
"Whose Soiree Now?" (), by Alisa Solomon (The Village Voice, February 2001)
Valerie Jean Solanas (1936–88) (Guardian Unlimited, March 2005)
"The Shot That Shattered the Velvet Underground", written June 6, 1968, from The Village Voice archives.
1936 births
1968 crimes in the United States
1988 deaths
20th-century American criminals
20th-century American dramatists and playwrights
20th-century American women writers
American failed assassins
American female criminals
American feminist writers
American lesbian writers
American LGBTQ dramatists and playwrights
American non-fiction writers
American people convicted of attempted murder
American people of Canadian descent
American people of Italian descent
American people of Spanish descent
American prostitutes
American women dramatists and playwrights
American women non-fiction writers
Criminals from California
Criminals from New Jersey
Deaths from pneumonia in California
Lesbian dramatists and playwrights
Lesbian feminists
Lesbian prostitutes
LGBTQ people from New Jersey
People associated with The Factory
People from Ventnor City, New Jersey
People with schizophrenia
Prisoners and detainees of New York (state)
Radical feminists
Stalking
University of Maryland, College Park alumni
University of Minnesota alumni
Writers from California
Writers from New Jersey | Valerie Solanas | [
"Biology"
] | 5,410 | [
"Behavior",
"Aggression",
"Stalking"
] |
48,043 | https://en.wikipedia.org/wiki/Routing%20table | In computer networking, a routing table, or routing information base (RIB), is a data table stored in a router or a network host that lists the routes to particular network destinations, and in some cases, metrics (distances) associated with those routes. The routing table contains information about the topology of the network immediately around it.
The construction of routing tables is the primary goal of routing protocols. Static routes are entries that are fixed, rather than resulting from routing protocols and network topology discovery procedures.
Overview
A routing table is analogous to a distribution map in package delivery. Whenever a node needs to send data to another node on a network, it must first know where to send it. If the node cannot directly connect to the destination node, it has to send it via other nodes along a route to the destination node. Each node needs to keep track of which way to deliver various packages of data, and for this it uses a routing table. A routing table is a database that keeps track of paths, like a map, and uses these to determine which way to forward traffic. A routing table is a data file in RAM that is used to store route information about directly connected and remote networks. Nodes can also share the contents of their routing table with other nodes.
The primary function of a router is to forward a packet toward its destination network, which is the destination IP address of the packet. To do this, a router needs to search the routing information stored in its routing table. The routing table contains network/next hop associations. These associations tell a router that a particular destination can be optimally reached by sending the packet to a specific router that represents the next hop on the way to the final destination. The next hop association can also be the outgoing or exit interface to the final destination.
With hop-by-hop routing, each routing table lists, for all reachable destinations, the address of the next device along the path to that destination: the next hop. Assuming that the routing tables are consistent, the simple algorithm of relaying packets to their destination's next hop thus suffices to deliver data anywhere in a network. Hop-by-hop is the fundamental characteristic of the IP Internet layer and the OSI Network Layer.
When a router interface is configured with an IP address and subnet mask, the interface becomes a host on that attached network. A directly connected network is a network that is directly attached to one of the router interfaces. The network address and subnet mask of the interface, along with the interface type and number, are entered into the routing table as a directly connected network.
A remote network is a network that can only be reached by sending the packet to another router. Routing table entries to remote networks may be either dynamic or static. Dynamic routes are routes to remote networks that were learned automatically by the router through a dynamic routing protocol. Static routes are routes that a network administrator manually configured.
Routing tables are also a key aspect of certain security operations, such as unicast reverse path forwarding (uRPF). In this technique, which has several variants, the router also looks up, in the routing table, the source address of the packet. If there exists no route back to the source address, the packet is assumed to be malformed or involved in a network attack and is dropped.
Difficulties
The need to record routes to large numbers of devices using limited storage space represents a major challenge in routing table construction. In the Internet, the currently dominant address aggregation technology is a bitwise prefix matching scheme called Classless Inter-Domain Routing (CIDR). Supernetworks can also be used to help control routing table size.
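As an illustration of longest-prefix matching, the sketch below looks up a destination address against a small hand-written table. The routes, addresses and interface names are invented for the example, and production routers use optimized structures such as tries rather than a linear scan.

import ipaddress

# Invented example routes: (destination prefix, next hop, outgoing interface).
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.0.1", "eth0"),    # default route
    (ipaddress.ip_network("192.168.0.0/24"), "0.0.0.0", "eth0"),   # directly connected
    (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.254", "eth1"),
]

def lookup(destination):
    """Return (next hop, interface) for the most specific matching prefix."""
    address = ipaddress.ip_address(destination)
    matches = [route for route in ROUTES if address in route[0]]
    if not matches:
        return None                                   # no route: the packet is dropped
    best = max(matches, key=lambda route: route[0].prefixlen)
    return best[1], best[2]

print(lookup("10.1.2.3"))      # ('192.168.1.254', 'eth1')
print(lookup("192.168.0.42"))  # ('0.0.0.0', 'eth0'), delivered on the local network
print(lookup("8.8.8.8"))       # ('192.168.0.1', 'eth0'), via the default route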
Contents
The routing table consists of at least three information fields:
network identifier: The destination subnet and netmask
metric: The routing metric of the path through which the packet is to be sent. The route will go in the direction of the gateway with the lowest metric.
next hop: The next hop, or gateway, is the address of the next station to which the packet is to be sent on the way to its final destination
Depending on the application and implementation, it can also contain additional values that refine path selection:
quality of service associated with the route. For example, the U flag indicates that an IP route is up.
filtering criteria: Access-control lists associated with the route
interface: Such as eth0 for the first Ethernet card, eth1 for the second Ethernet card, etc.
Shown below is an example of what such a routing table could look like on a computer connected to the internet via a home router:
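(The entries below are an illustrative reconstruction using the addresses discussed in the following paragraphs; the metric values are assumptions chosen only for this example.)

Network destination   Netmask           Gateway        Interface       Metric
0.0.0.0               0.0.0.0           192.168.0.1    192.168.0.100   10
127.0.0.0             255.0.0.0         0.0.0.0        127.0.0.1       1
192.168.0.0           255.255.255.0     0.0.0.0        192.168.0.100   10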
The columns Network destination and Netmask together describe the Network identifier as mentioned earlier. For example, destination 192.168.0.0 and netmask 255.255.255.0 can be written as 192.168.0.0/24.
The Gateway column contains the same information as the Next hop, i.e. it points to the gateway through which the network can be reached.
The Interface indicates what locally available interface is responsible for reaching the gateway. In this example, gateway 192.168.0.1 (the internet router) can be reached through the local network card with address 192.168.0.100.
Finally, the Metric indicates the associated cost of using the indicated route. This is useful for determining the efficiency of a certain route from two points in a network. In this example, it is more efficient to communicate with the computer itself through the use of address 127.0.0.1 (called localhost) than it would be through 192.168.0.100 (the IP address of the local network card).
Forwarding table
Routing tables are generally not used directly for packet forwarding in modern router architectures; instead, they are used to generate the information for a simpler forwarding table. This forwarding table contains only the routes which are chosen by the routing algorithm as preferred routes for packet forwarding. It is often in a compressed or pre-compiled format that is optimized for hardware storage and lookup.
This router architecture separates the control plane function of the routing table from the forwarding plane function of the forwarding table. This separation of control and forwarding provides uninterrupted high-performance forwarding.
See also
Luleå algorithm
Internet protocol suite
References
External links
IP Routing from the Linux Network Administrators Guide
Internet architecture
Routing
Data structures | Routing table | [
"Technology"
] | 1,311 | [
"Internet architecture",
"IT infrastructure"
] |
48,049 | https://en.wikipedia.org/wiki/Autonomous%20robot | An autonomous robot is a robot that acts without recourse to human control. Historic examples include space probes. Modern examples include self-driving vacuums and cars.
Industrial robot arms that work on assembly lines inside factories may also be considered autonomous robots, though their autonomy is restricted due to a highly structured environment and their inability to locomote.
Components and criteria of robotic autonomy
Self-maintenance
The first requirement for complete physical autonomy is the ability for a robot to take care of itself. Many of the battery-powered robots on the market today can find and connect to a charging station, and some toys like Sony's Aibo are capable of self-docking to charge their batteries.
Self-maintenance is based on "proprioception", or sensing one's own internal status. In the battery charging example, the robot can tell proprioceptively that its batteries are low, and it then seeks the charger. Another common proprioceptive sensor is for heat monitoring. Increased proprioception will be required for robots to work autonomously near people and in harsh environments. Common proprioceptive sensors include thermal, optical, and haptic sensing, as well as the Hall effect (electric).
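As a minimal sketch of proprioception driving self-maintenance, the toy policy below maps internal sensor readings to a high-level behavior; the thresholds and behavior names are invented for the example.

def choose_behavior(battery_fraction, motor_temperature_c):
    """Pick a behavior using internal (proprioceptive) readings only."""
    if battery_fraction < 0.15:
        return "seek_charging_station"    # self-maintenance takes priority
    if motor_temperature_c > 80.0:
        return "idle_until_cool"
    return "continue_current_task"

print(choose_behavior(0.10, 45.0))        # seek_charging_station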
Sensing the environment
Exteroception is sensing things about the environment. Autonomous robots must have a range of environmental sensors to perform their task and stay out of trouble. The autonomous robot can recognize sensor failures and minimize the impact on the performance caused by failures.
Common exteroceptive sensors include the electromagnetic spectrum, sound, touch, chemical (smell, odor), temperature, range to various objects, and altitude.
Some robotic lawn mowers will adapt their programming by detecting the speed in which grass grows as needed to maintain a perfectly cut lawn, and some vacuum cleaning robots have dirt detectors that sense how much dirt is being picked up and use this information to tell them to stay in one area longer.
Task performance
The next step in autonomous behavior is to actually perform a physical task. A new area showing commercial promise is domestic robots, with a flood of small vacuuming robots beginning with iRobot and Electrolux in 2002. While the level of intelligence is not high in these systems, they navigate over wide areas and pilot in tight situations around homes using contact and non-contact sensors. Both of these robots use proprietary algorithms to increase coverage over simple random bounce.
The next level of autonomous task performance requires a robot to perform conditional tasks. For instance, security robots can be programmed to detect intruders and respond in a particular way depending upon where the intruder is. For example, Amazon launched its Astro for home monitoring, security and eldercare in September 2021.
Autonomous navigation
Indoor navigation
For a robot to associate behaviors with a place (localization) requires it to know where it is and to be able to navigate point-to-point. Such navigation began with wire-guidance in the 1970s and progressed in the early 2000s to beacon-based triangulation. Current commercial robots autonomously navigate based on sensing natural features. The first commercial robots to achieve this were Pyxus' HelpMate hospital robot and the CyberMotion guard robot, both designed by robotics pioneers in the 1980s. These robots originally used manually created CAD floor plans, sonar sensing and wall-following variations to navigate buildings. The next generation, such as MobileRobots' PatrolBot and autonomous wheelchair, both introduced in 2004, have the ability to create their own laser-based maps of a building and to navigate open areas as well as corridors. Their control system changes its path on the fly if something blocks the way.
At first, autonomous navigation was based on planar sensors, such as laser range-finders, that can only sense at one level. The most advanced systems now fuse information from various sensors for both localization (position) and navigation. Systems such as Motivity can rely on different sensors in different areas, depending upon which provides the most reliable data at the time, and can re-map a building autonomously.
Rather than climb stairs, which requires highly specialized hardware, most indoor robots navigate handicapped-accessible areas, controlling elevators, and electronic doors. With such electronic access-control interfaces, robots can now freely navigate indoors. Autonomously climbing stairs and opening doors manually are topics of research at the current time.
As these indoor techniques continue to develop, vacuuming robots will gain the ability to clean a specific user-specified room or a whole floor. Security robots will be able to cooperatively surround intruders and cut off exits. These advances also bring concomitant protections: robots' internal maps typically permit "forbidden areas" to be defined to prevent robots from autonomously entering certain regions.
Outdoor navigation
Outdoor autonomy is most easily achieved in the air, since obstacles are rare. Cruise missiles are rather dangerous highly autonomous robots. Pilotless drone aircraft are increasingly used for reconnaissance. Some of these unmanned aerial vehicles (UAVs) are capable of flying their entire mission without any human interaction at all except possibly for the landing where a person intervenes using radio remote control. Some drones are capable of safe, automatic landings, however. SpaceX operates a number of Autonomous spaceport drone ships, used to safely land and recover Falcon 9 rockets at sea.
Outdoor autonomy is the most difficult for ground vehicles, due to:
Three-dimensional terrain
Great disparities in surface density
Weather exigencies
Instability of the sensed environment
Open problems in autonomous robotics
There are several open problems in autonomous robotics which are special to the field rather than being a part of the general pursuit of AI. According to George A. Bekey's Autonomous Robots: From Biological Inspiration to Implementation and Control, problems include things such as making sure the robot is able to function correctly and not run into obstacles autonomously. Reinforcement learning has been used to control and plan the navigation of autonomous robots, specifically when a group of them operate in collaboration with each other.
Energy autonomy and foraging
Researchers concerned with creating true artificial life are concerned not only with intelligent control, but further with the capacity of the robot to find its own resources through foraging (looking for food, which includes both energy and spare parts).
This is related to autonomous foraging, a concern within the sciences of behavioral ecology, social anthropology, and human behavioral ecology; as well as robotics, artificial intelligence, and artificial life.
Societal impact and issues
As autonomous robots have grown in ability and technical levels, there has been increasing societal awareness and news coverage of the latest advances, and also some of the philosophical issues, economic effects, and societal impacts that arise from the roles and activities of autonomous robots.
Elon Musk, a prominent business executive and billionaire has warned for years of the possible hazards and pitfalls of autonomous robots; however, his own company is one of the most prominent companies that is trying to devise new advanced technologies in this area.
In 2021, a United Nations group of government experts, known as the Convention on Certain Conventional Weapons – Group of Governmental Experts on Lethal Autonomous Weapons Systems, held a conference to highlight the ethical concerns which arise from the increasingly advanced technology for autonomous robots to wield weapons and to play a military role.
Technical development
Early robots
The first autonomous robots were known as Elmer and Elsie, constructed in the late 1940s by W. Grey Walter. They were the first robots programmed to "think" the way biological brains do and were meant to have free will. Elmer and Elsie were often labeled as tortoises because of how they were shaped and the manner in which they moved. They were capable of phototaxis, the movement that occurs in response to light stimulus.
Space probes
The Mars rovers MER-A and MER-B (now known as Spirit rover and Opportunity rover) found the position of the Sun and navigated their own routes to destinations, on the fly, by:
Mapping the surface with 3D vision
Computing safe and unsafe areas on the surface within that field of vision
Computing optimal paths across the safe area towards the desired destination
Driving along the calculated route
Repeating this cycle until either the destination is reached, or there is no known path to the destination
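A toy version of that sense-plan-drive cycle is sketched below on a hand-made grid map. The grid, start and goal are invented for the example; real rovers build their maps from stereo vision and use far more sophisticated planners than this breadth-first search.

from collections import deque

# Toy terrain map standing in for a sensed 3D model: 0 = safe, 1 = unsafe.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
]

def plan_path(grid, start, goal):
    """Breadth-first search over safe cells; returns a list of cells or None."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        row, col = cell
        for nxt in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
            r, c = nxt
            if (0 <= r < len(grid) and 0 <= c < len(grid[0])
                    and grid[r][c] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None

def drive_to(goal, position=(0, 0)):
    """Sense (reread the map), plan, drive one step, and repeat until done."""
    while position != goal:
        path = plan_path(GRID, position, goal)
        if path is None:
            return "no known path to the destination"
        position = path[1]        # drive the first segment of the plan, then re-plan
    return "destination reached"

print(drive_to((4, 4)))           # destination reached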
The planned ESA Rover, Rosalind Franklin rover, is capable of vision based relative localisation and absolute localisation to autonomously navigate safe and efficient trajectories to targets by:
Reconstructing 3D models of the terrain surrounding the Rover using a pair of stereo cameras
Determining safe and unsafe areas of the terrain and the general "difficulty" for the Rover to navigate the terrain
Computing efficient paths across the safe area towards the desired destination
Driving the Rover along the planned path
Building up a navigation map of all previous navigation data
During the final NASA Sample Return Robot Centennial Challenge in 2016, a rover, named Cataglyphis, successfully demonstrated fully autonomous navigation, decision-making, and sample detection, retrieval, and return capabilities. The rover relied on a fusion of measurements from inertial sensors, wheel encoders, Lidar, and camera for navigation and mapping, instead of using GPS or magnetometers. During the 2-hour challenge, Cataglyphis traversed over 2.6 km and returned five different samples to its starting position.
General-use autonomous robots
The Seekur robot was the first commercially available robot to demonstrate MDARS-like capabilities for general use by airports, utility plants, corrections facilities and Homeland Security.
The DARPA Grand Challenge and DARPA Urban Challenge have encouraged development of even more autonomous capabilities for ground vehicles, while this has been the demonstrated goal for aerial robots since 1990 as part of the AUVSI International Aerial Robotics Competition.
AMR transfer carts developed by Seyiton are used to transfer loads of up to 1500 kilograms inside factories.
Between 2013 and 2017, TotalEnergies has held the ARGOS Challenge to develop the first autonomous robot for oil and gas production sites. The robots had to face adverse outdoor conditions such as rain, wind and extreme temperatures.
Some significant current robots include:
Sophia is an autonomous robot that is known for its human-like appearance and behavior compared to previous robotic variants. As of 2018, Sophia's architecture includes scripting software, a chat system, and OpenCog, an AI system designed for general reasoning. Sophia imitates human gestures and facial expressions and is able to answer certain questions and to make simple conversations on predefined topics (e.g. on the weather). The AI program analyses conversations and extracts data that allows it to improve responses in the future.
Sophia has nine other humanoid robot "siblings" who were also created by Hanson Robotics. Fellow Hanson robots are Alice, Albert Einstein Hubo, BINA48, Han, Jules, Professor Einstein, Philip K. Dick Android, Zeno, and Joey Chaos. Around 2019–20, Hanson released "Little Sophia" as a companion that could teach children how to code, including support for Python, Blockly, and Raspberry Pi.
Military autonomous robots
Lethal autonomous weapons (LAWs) are a type of autonomous robot military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons, killer robots or slaughterbots. LAWs may operate in the air, on land, on water, under water, or in space. The autonomy of current systems is restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.
UGV Interoperability Profile (UGV IOP), Robotics and Autonomous Systems – Ground IOP (RAS-G IOP), was originally a research program started by the United States Department of Defense (DoD) to organize and maintain open architecture interoperability standards for Unmanned Ground Vehicles (UGV). The IOP was initially created by the U.S. Army Robotic Systems Joint Project Office (RS JPO).
In October 2019, Textron and Howe & Howe unveiled their Ripsaw M5 vehicle, and on 9 January 2020, the U.S. Army awarded them a contract for the Robotic Combat Vehicle-Medium (RCV-M) program. Four Ripsaw M5 prototypes are to be delivered and used at company level to determine the feasibility of integrating unmanned vehicles into ground combat operations in late 2021. It has a high top speed and a combat weight of 10.5 tons. The RCV-M is armed with a 30 mm autocannon and a pair of anti-tank missiles. The standard armor package can withstand 12.7×108mm rounds, with optional add-on armor increasing weight to up to 20 tons. If disabled, it will retain the ability to shoot, with its sensors and radio uplink prioritized to continue transmitting as its primary function.
Crusher is an autonomous off-road Unmanned Ground Combat Vehicle developed by researchers at the Carnegie Mellon University's National Robotics Engineering Center for DARPA. It is a follow-up on the previous Spinner vehicle. DARPA's technical name for the Crusher is Unmanned Ground Combat Vehicle and Perceptor Integration System, and the whole project is known by the acronym UPI, which stands for Unmanned Ground Combat Vehicle PerceptOR Integration.
CATS Warrior will be an autonomous wingman drone capable of taking off and landing both on land and at sea from an aircraft carrier. It will team up with existing IAF fighter platforms such as the Tejas, Su-30 MKI and Jaguar, which will act as its mothership.
The Warrior is primarily envisioned for Indian Air Force use, and a similar, smaller version will be designed for the Indian Navy. It would be controlled by the mothership and accomplish tasks such as scouting, absorbing enemy fire, attacking targets with the weapons on its internal and external pylons or, if necessary, sacrificing itself by crashing into the target.
The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified".
Types of robots
Humanoid
Tesla Robot and NVIDIA GR00T are humanoid robots.
Delivery robot
A delivery robot is an autonomous robot used for delivering goods.
Charging Robot
An automatic charging robot, unveiled on July 27, 2022, is an arm-shaped robot that charges electric vehicles. It has been running in a pilot operation at Hyundai Motor Group's headquarters since 2021. A vision AI system based on deep learning technology is applied: when an electric vehicle is parked in front of the charger, the robot arm recognizes the vehicle's charging inlet, derives its coordinates, automatically inserts the connector and starts fast charging. The robot arm is configured in a vertical multi-joint structure so that it can reach charging inlets at different locations on each vehicle. In addition, waterproof and dustproof functions are applied.
Construction robots
Construction robots are used directly on job sites and perform work such as building, material handling, earthmoving, and surveillance.
Research and education mobile robots
Research and education mobile robots are mainly used during a prototyping phase in the process of building full scale robots. They are a scaled down version of bigger robots with the same types of sensors, kinematics and software stack (e.g. ROS). They are often extendable and provide comfortable programming interface and development tools. Next to full scale robot prototyping they are also used for education, especially at university level, where more and more labs about programming autonomous vehicles are being introduced.
Legislation
In March 2016, a bill was introduced in Washington, D.C., allowing pilot ground robotic deliveries. The program was to take place from September 15 through the end of December 2017. The robots were limited to a weight of 50 pounds unloaded and a maximum speed of 10 miles per hour. If a robot stopped moving because of a malfunction, the company was required to remove it from the streets within 24 hours. Only five robots per company were allowed to be tested at a time. A 2017 version of the Personal Delivery Device Act bill was under review as of March 2017.
In February 2017, a bill was passed in the US state of Virginia via the House bill, HB2016, and the Senate bill, SB1207, that will allow autonomous delivery robots to travel on sidewalks and use crosswalks statewide beginning on July 1, 2017. The robots will be limited to a maximum speed of 10 mph and a maximum weight of 50 pounds. In the states of Idaho and Florida there have also been talks about passing similar legislation.
It has been discussed that robots with similar characteristics to invalid carriages (e.g. 10 mph maximum, limited battery life) might be a workaround for certain classes of applications. If the robot was sufficiently intelligent and able to recharge itself using the existing electric vehicle (EV) charging infrastructure it would only need minimal supervision and a single arm with low dexterity might be enough to enable this function if its visual systems had enough resolution.
In November 2017, the San Francisco Board of Supervisors announced that companies would need to get a city permit in order to test these robots. In addition, the Board banned sidewalk delivery robots from making non-research deliveries.
See also
Scientific concepts
Artificial intelligence
Cognitive robotics
Developmental robotics
Evolutionary robotics
Simultaneous localization and mapping
Teleoperation
von Neumann machine
Wake-up robot problem
William Grey Walter
Types of robots
Autonomous car
Autonomous research robot
Autonomous spaceport drone ship
Domestic robot
Humanoid robot
Specific robot models
AIBO
Amazon Scout
Microbotics
PatrolBot
RoboBee
Robomow
Others
Remote-control vehicle
Robot control
References
External links
Uncrewed vehicles
Self-replication | Autonomous robot | [
"Physics",
"Technology",
"Biology"
] | 3,626 | [
"Machines",
"Behavior",
"Reproduction",
"Robots",
"Self-replication",
"Physical systems"
] |
48,063 | https://en.wikipedia.org/wiki/Fibonacci%20coding | In mathematics and computing, Fibonacci coding is a universal code which encodes positive integers into binary code words. It is one example of representations of integers based on Fibonacci numbers. Each code word ends with "11" and contains no other instances of "11" before the end.
The Fibonacci code is closely related to the Zeckendorf representation, a positional numeral system that uses Zeckendorf's theorem and has the property that no number has a representation with consecutive 1s. The Fibonacci code word for a particular integer is exactly the integer's Zeckendorf representation with the order of its digits reversed and an additional "1" appended to the end.
Definition
For a number N, if d(0), d(1), …, d(k−1), d(k) represent the digits of the code word representing N, then we have:
N = Σ_{i=0}^{k−1} d(i)·F(i+2), and d(k−1) = d(k) = 1,
where F(i) is the ith Fibonacci number, and so F(i+2) is the ith distinct Fibonacci number starting with 1, 2, 3, 5, 8, 13, …. The last bit d(k) is always an appended bit of 1 and does not carry place value.
It can be shown that such a coding is unique, and the only occurrence of "11" in any code word is at the end (that is, d(k−1) and d(k)). The penultimate bit is the most significant bit and the first bit is the least significant bit. Also, leading zeros cannot be omitted as they can be in, for example, decimal numbers.
The first few Fibonacci codes are shown below, and also their so-called implied probability, the value for each number that has a minimum-size code in Fibonacci coding.
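(The values below follow directly from the definition above; the implied probability of a code word of length L is 2^(-L).)

Number   Code word   Implied probability
1        11          1/4
2        011         1/8
3        0011        1/16
4        1011        1/16
5        00011       1/32
6        10011       1/32
7        01011       1/32
8        000011      1/64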
To encode an integer :
Find the largest Fibonacci number equal to or less than ; subtract this number from , keeping track of the remainder.
If the number subtracted was the th Fibonacci number , put a 1 in place in the code word (counting the left most digit as place 0).
Repeat the previous steps, substituting the remainder for , until a remainder of 0 is reached.
Place an additional 1 after the rightmost digit in the code word.
To decode a code word, remove the final "1", assign the remaining the values 1,2,3,5,8,13... (the Fibonacci numbers) to the bits in the code word, and sum the values of the "1" bits.
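A direct transcription of the encoding steps and the decoding rule into Python, offered as an illustrative sketch rather than a production codec, might look like this:

def fib_encode(n):
    """Fibonacci-code a positive integer: Zeckendorf digits, least significant
    digit first, followed by the extra terminating '1'."""
    if n < 1:
        raise ValueError("only positive integers can be encoded")
    fibs = [1, 2]                              # F(2), F(3), ...
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    while fibs[-1] > n:                        # only needed when n = 1
        fibs.pop()
    bits, remainder = ["0"] * len(fibs), n
    for i in range(len(fibs) - 1, -1, -1):     # largest Fibonacci number first
        if fibs[i] <= remainder:
            bits[i] = "1"
            remainder -= fibs[i]
    return "".join(bits) + "1"                 # append the final 1

def fib_decode(code):
    """Drop the final '1', weight the remaining bits by 1, 2, 3, 5, 8, ... and sum."""
    bits = code[:-1]
    fibs = [1, 2]
    while len(fibs) < len(bits):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for bit, f in zip(bits, fibs) if bit == "1")

assert fib_encode(65) == "0100100011"          # 65 = 2 + 8 + 55
assert all(fib_decode(fib_encode(n)) == n for n in range(1, 1000))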
Comparison with other universal codes
Fibonacci coding has a useful property that sometimes makes it attractive in comparison to other universal codes: it is an example of a self-synchronizing code, making it easier to recover data from a damaged stream. With most other universal codes, if a single bit is altered, then none of the data that comes after it will be correctly read. With Fibonacci coding, on the other hand, a changed bit may cause one token to be read as two, or cause two tokens to be read incorrectly as one, but reading a "0" from the stream will stop the errors from propagating further. Since the only stream that has no "0" in it is a stream of "11" tokens, the total edit distance between a stream damaged by a single bit error and the original stream is at most three.
This approach, encoding using sequence of symbols, in which some patterns (like "11") are forbidden, can be freely generalized.
Example
The number 65 is represented in Fibonacci coding as 0100100011, since 65 = 2 + 8 + 55, which puts 1s in the positions weighted 2, 8 and 55. The first two Fibonacci numbers (0 and 1) are not used, and an additional 1 is always appended.
Generalizations
The Fibonacci encodings for the positive integers are binary strings that end with "11" and contain no other instances of "11". This can be generalized to binary strings that end with N consecutive 1s and contain no other instances of N consecutive 1s. For instance, for N = 3 the positive integers are encoded as 111, 0111, 00111, 10111, 000111, 100111, 010111, 110111, 0000111, 1000111, 0100111, …. In this case, the number of encodings as a function of string length is given by the sequence of tribonacci numbers.
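The count claim can be checked by brute force; the short script below (an illustrative sketch, not an efficient construction) enumerates the valid strings of each length for N = 3 and counts them.

from itertools import product

def generalized_codewords(length, n=3):
    """Binary strings of the given length that end with n consecutive 1s and
    contain no other run of n consecutive 1s."""
    valid = []
    for bits in product("01", repeat=length):
        s = "".join(bits)
        if s.endswith("1" * n) and ("1" * n) not in s[:-1]:
            valid.append(s)
    return valid

print(generalized_codewords(5))                               # ['00111', '10111']
print([len(generalized_codewords(L)) for L in range(3, 10)])  # [1, 1, 2, 4, 7, 13, 24]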
For general constraints defining which symbols are allowed after a given symbol, the maximal information rate can be obtained by first finding the optimal transition probabilities using a maximal entropy random walk, then using an entropy coder (with switched encoder and decoder) to encode a message as a sequence of symbols fulfilling the found optimal transition probabilities.
See also
Golden ratio base
NegaFibonacci coding
Ostrowski numeration
Universal code
Varicode, a practical application
Zeckendorf's theorem
Maximal entropy random walk
References
Further reading
Non-standard positional numeral systems
Lossless compression algorithms
Fibonacci numbers
Data compression | Fibonacci coding | [
"Mathematics"
] | 1,015 | [
"Fibonacci numbers",
"Mathematical relations",
"Golden ratio",
"Recurrence relations"
] |
48,065 | https://en.wikipedia.org/wiki/Polder | A polder () is a low-lying tract of land that forms an artificial hydrological entity, enclosed by embankments known as dikes. The three types of polder are:
Land reclaimed from a body of water, such as a lake or the seabed
Flood plains separated from the sea or river by a dike
Marshes separated from the surrounding water by a dike and subsequently drained; these are also known as koogs, especially in Germany
The ground level in drained marshes subsides over time, so all polders will eventually lie below the surrounding water level some or all of the time. Water enters the low-lying polder through groundwater infiltration under pressure, through rainfall, and through transport by rivers and canals. This usually means that the polder has an excess of water, which is pumped out or drained by opening sluices at low tide. Care must be taken not to set the internal water level too low. Polder land made up of peat (former marshland) will sink in relation to its previous level, because peat decomposes when exposed to oxygen from the air.
Polders are at risk of flooding at all times, and care must be taken to protect the surrounding dikes. Dikes are typically built with locally available materials, and each material has its own risks: sand is prone to collapse owing to saturation by water; dry peat is lighter than water and potentially unable to retain water in very dry seasons. Some animals dig tunnels in the barrier, allowing water to infiltrate the structure; the muskrat is known for this activity and hunted in certain European countries because of it. Polders are most commonly, though not exclusively, found in river deltas, former fenlands, and coastal areas.
Flooding of polders has also been used as a military tactic in the past. One example is the flooding of the polders along the Yser River during World War I. Opening the sluices at high tide and closing them at low tide turned the polders into an inaccessible swamp, which allowed the Allied armies to stop the German army.
The Netherlands has a large area of polders: as much as 20% of the land area has at some point in the past been reclaimed from the sea, thus contributing to the development of the country. IJsselmeer is the most famous polder project of the Netherlands. Some other countries which have polders are Bangladesh, Belgium, Canada and China. Some examples of Dutch polder projects are Beemster, Schermer, Flevopolder and Noordoostpolder.
Etymology
The Dutch word derives successively from Middle Dutch and Old Dutch forms, and ultimately from pol-, a piece of land elevated above its surroundings, with the augmentative suffix -er and an epenthetic -d-. The word has been adopted in thirty-six languages.
Netherlands
The Netherlands is frequently associated with polders, as its engineers became noted for developing techniques to drain wetlands and make them usable for agriculture and other development. This is illustrated by the saying "God created the world, but the Dutch created the Netherlands".
The Dutch have a long history of reclamation of marshes and fenland, resulting in some 3,000 polders nationwide. By 1961, about half of the country's land had been reclaimed from the sea. About half the total surface area of polders in northwest Europe is in the Netherlands. The first embankments in Europe were constructed in Roman times, and the first polders in the 11th century. The oldest extant polder is the Achtermeer polder, from 1533.
As a result of flooding disasters, water boards called waterschap (when situated more inland) or hoogheemraadschap (near the sea, mainly used in the Holland region) were set up to maintain the integrity of the water defences around polders, maintain the waterways inside a polder, and control the various water levels inside and outside the polder. Water boards hold separate elections, levy taxes, and function independently from other government bodies. Their function is basically unchanged even today. As such, they are the oldest democratic institutions in the country. The necessary cooperation among all ranks to maintain polder integrity gave its name to the Dutch version of third-way politics—the Polder Model.
The 1953 flood disaster prompted a new approach to the design of dikes and other water-retaining structures, based on an acceptable probability of overflowing. Risk is defined as the product of probability and consequences. The potential damage in lives, property, and rebuilding costs is compared with the potential cost of water defences. From these calculations follows an acceptable flood risk from the sea at one in 4,000–10,000 years, while it is one in 100–2,500 years for a river flood. The particular established policy guides the Dutch government to improve flood defences as new data on threat levels become available.
Major Dutch polders and the years they were laid dry include Beemster (1609–1612), Schermer (1633–1635), and Haarlemmermeerpolder (1852). Polders created as part of the Zuiderzee Works include Wieringermeerpolder (1930), Noordoostpolder (1942) and Flevopolder (1956–1968).
Examples of polders
Brazil
Several cities in the Paraíba Valley region (in the state of São Paulo) have polders on land reclaimed from the floodplains around the Paraíba do Sul river.
Bangladesh
Bangladesh has 139 polders, of which 49 are sea-facing, while the rest are along the numerous distributaries of the Ganges-Brahmaputra-Meghna River delta. These were constructed in the 1960s to protect the coast from tidal flooding and reduce salinity incursion. They reduce long-term flooding and waterlogging following storm surges from tropical cyclones. They are also cultivated for agriculture.
Belgium
De Moeren, near Veurne in West Flanders
Polders along the Yser river between Nieuwpoort and Diksmuide
Polders of Muisbroek and Ettenhoven, in Ekeren and Hoevenen
Polder of Stabroek, in Stabroek
Kabeljauwpolder, in Zandvliet
Scheldepolders on the left bank of the Scheldt
Uitkerkse polders, near Blankenberge in West Flanders
Prosperpolder, near Doel, Antwerp and Kieldrecht.
Canada
Tantramar Marshes
Holland Marsh
Pitt Polder Ecological Reserve
Grand Pré, Nova Scotia
Minas Basin
China
The city of Kunshan has over 100 polders.
History
The Jiangnan region, at the Yangtze River Delta, has a long history of constructing polders. Most of these projects were performed between the 10th and 13th centuries. The Chinese government also assisted local communities in constructing dikes for swampland water drainage. The Lijia (里甲) self-monitoring system of 110 households under a lizhang (里长) headman was used for the purposes of service administration and tax collection in the polder, with a liangzhang (粮长, grain chief) responsible for maintaining the water system and a tangzhang (塘长, dike chief) for polder maintenance.
Denmark
Filsø
Kolindsund
Lammefjorden
Finland
Söderfjärden
Munsmo
Two polders near Vassor in Korsholm
France
Marais Poitevin
Les Moëres, adjacent to the Flemish polder De Moeren in Belgium.
Polders de Couesnon near Mont-Saint-Michel in Normandy
Germany
In Germany, land reclaimed by diking is called a koog. The German Deichgraf system was similar to the Dutch and is widely known from Theodor Storm's novella The Rider on the White Horse.
Altes Land near Hamburg
Blockland and Hollerland near Bremen
Nordstrand, Germany
Bormerkoog and Meggerkoog near Friedrichstadt
36 koogs in the district of Nordfriesland
12 koogs in the district of Dithmarschen
In southern Germany, the term polder is used for retention basins recreated by opening dikes during river floodplain restoration, a meaning somewhat opposite to that in coastal context.
Guyana
Black Bush Polder, Corentyne, Berbice.
India
Kuttanad Region, Kerala
Ireland
Lough Swilly, County Donegal. Near Inch Island and Newtowncunningham.
Italy
Delta of the river Po, such as Bonifica Valle del Mezzano
Japan
Around the Ariake Sea in Kyushu, mainly in Saga but also in Fukuoka and Kumamoto Prefectures
Lithuania
Rusnė Island
Netherlands
Achtermeer, the oldest polder, from 1533
Alblasserwaard, containing the windmills of Kinderdijk, a World Heritage Site
Alkmaar
Andijk
Anna Paulownapolder
Beemster, a World Heritage Site
Bijlmermeer
Flevopolder, the largest artificial island in the world, last part drained in 1968
's-Gravesloot
Haarlemmermeer, containing Schiphol airport
Krimpenerwaard
Lauwersmeer
Mastenbroek, one of the oldest medieval polders, drained around 1363–1364.
Noordoostpolder
Prins Alexanderpolder
Purmer
Schermer
Watergraafsmeer
Wieringermeer
Wieringerwaard
Wijdewormer
Zestienhoven, home of the Rotterdam The Hague Airport (Overschie), in the city of Rotterdam.
Zuidplaspolder, along with Lammefjorden in Denmark, the lowest point of the European Union
Poland
Vistula delta near Elbląg and Nowy Dwór Gdański
Warta delta near Kostrzyn nad Odrą
Romania
Danube Delta
Singapore
Parts of Pulau Tekong
Slovenia
The Ankaran/Ancarano Polder, Semedela Polder, and Škocjan Polder, in reclaimed land around Koper/Capodistria.
South Korea
Parts of the coast of Ganghwa Island, adjacent to the river Han in Incheon
Delta of the river Nakdong in Busan
Saemangeum in North Jeolla Province
Spain
Parts of Málaga were built on reclaimed land
United Kingdom
Traeth Mawr
Sunk Island, on the north shore of the Humber east of Hull
Caldicot and Wentloog Levels along the Severn Estuary in South Wales
Parts of The Fens
Branston Island, by the River Witham outside the conventional area of the fens but connected to them.
Parts of the coast of Essex
Some land along the River Plym in Plymouth
Some land around Meathop east of Grange-over-Sands, reclaimed as a side-effect of building a railway embankment
The Somerset Levels and North Somerset Levels
Romney Marsh
Sealand, Flintshire
Humberhead Levels
United States
New Orleans
Sacramento – San Joaquin River Delta
See also
Afsluitdijk
Flood control in the Netherlands
Land reclamation
Windpump
References
Further reading
Derex, Jean-Michel, Franco Cazzola (eds.) 2004. 2nd ed. 2013. Eau et développement dans l'Europe moderne. Paris, Maison des Sciences De L'Homme
Farjon, J.M.J., J. Dirkx, A. Koomen, J. Vervloet & W. Lammers. 2001. Neder-landschap Internationaal: bouwstenen voor een selectie van gebieden landschapsbehoud. Alterra, Wageningen. Rapport 358.
Stenak, Morten. 2005. De inddæmmede Landskaber – En historisk geografi. Landbohistorik Selskab.
Polders of the World. Keynotes International Symposium. 1982. Lelystad, The Netherlands
Ven, G.P. van de (ed.) 1993, 4th ed. 2004. Man-made Lowlands. History of Water Management and Land Reclamation in the Netherlands, Matrijs, Utrecht.
Wagret, Paul. 1972. Polderlands. London: Methuen.
External links
Polder landscapes in the Netherlands – in a northwest European and a landmark context.
How to make a polder – online film
Artificial landforms
Land reclamation
Environmental soil science
Riparian zone
Coastal construction
Freshwater ecology | Polder | [
"Engineering",
"Environmental_science"
] | 2,556 | [
"Hydrology",
"Environmental soil science",
"Construction",
"Coastal construction",
"Riparian zone"
] |
48,082 | https://en.wikipedia.org/wiki/Great%20circle | In mathematics, a great circle or orthodrome is the circular intersection of a sphere and a plane passing through the sphere's center point.
Discussion
Any arc of a great circle is a geodesic of the sphere, so that great circles in spherical geometry are the natural analog of straight lines in Euclidean space. For any pair of distinct non-antipodal points on the sphere, there is a unique great circle passing through both. (Every great circle through any point also passes through its antipodal point, so there are infinitely many great circles through two antipodal points.) The shorter of the two great-circle arcs between two distinct points on the sphere is called the minor arc, and is the shortest surface-path between them. Its arc length is the great-circle distance between the points (the intrinsic distance on a sphere), and is proportional to the measure of the central angle formed by the two points and the center of the sphere.
A great circle is the largest circle that can be drawn on any given sphere. Any diameter of any great circle coincides with a diameter of the sphere, and therefore every great circle is concentric with the sphere and shares the same radius. Any other circle of the sphere is called a small circle, and is the intersection of the sphere with a plane not passing through its center. Small circles are the spherical-geometry analog of circles in Euclidean space.
Every circle in Euclidean 3-space is a great circle of exactly one sphere.
The disk bounded by a great circle is called a great disk: it is the intersection of a ball and a plane passing through its center.
In higher dimensions, the great circles on the n-sphere are the intersection of the n-sphere with 2-planes that pass through the origin in the Euclidean space $\mathbb{R}^{n+1}$.
Half of a great circle may be called a great semicircle (e.g., as in parts of a meridian in astronomy).
Derivation of shortest paths
To prove that the minor arc of a great circle is the shortest path connecting two points on the surface of a sphere, one can apply calculus of variations to it.
Consider the class of all regular paths from a point $p$ to another point $q$, and introduce spherical coordinates so that $p$ coincides with the north pole, with $q$ at coordinates $(\theta_0, \phi_0)$. Any curve on the sphere that does not intersect either pole, except possibly at the endpoints, can be parametrized by
$$\theta = \theta(t), \quad \phi = \phi(t), \quad a \le t \le b,$$
provided $\phi$ is allowed to take on arbitrary real values. The infinitesimal arc length in these coordinates is
$$ds = r\sqrt{\theta'^2 + \phi'^2 \sin^2\theta}\,dt.$$
So the length of a curve $\gamma$ from $p$ to $q$ is a functional of the curve given by
$$S[\gamma] = r\int_a^b \sqrt{\theta'^2 + \phi'^2 \sin^2\theta}\,dt.$$
According to the Euler–Lagrange equation, $S[\gamma]$ is minimized if and only if
$$\frac{\sin^2\theta\,\phi'}{\sqrt{\theta'^2 + \phi'^2 \sin^2\theta}} = C,$$
where $C$ is a $t$-independent constant, and
$$\frac{\sin\theta\cos\theta\,\phi'^2}{\sqrt{\theta'^2 + \phi'^2 \sin^2\theta}} = \frac{d}{dt}\,\frac{\theta'}{\sqrt{\theta'^2 + \phi'^2 \sin^2\theta}}.$$
From the first equation of these two, it can be obtained that
$$\phi' = \frac{C\,\theta'}{\sin\theta\,\sqrt{\sin^2\theta - C^2}}.$$
Integrating both sides and considering the boundary condition, the real solution of $C$ is zero. Thus, $\phi' = 0$ and $\theta$ can be any value between $0$ and $\theta_0$, indicating that the curve must lie on a meridian of the sphere. In a Cartesian coordinate system, this is
$$x\sin\phi_0 - y\cos\phi_0 = 0,$$
which is a plane through the origin, i.e., the center of the sphere.
Applications
Some examples of great circles on the celestial sphere include the celestial horizon, the celestial equator, and the ecliptic. Great circles are also used as rather accurate approximations of geodesics on the Earth's surface for air or sea navigation (although it is not a perfect sphere), as well as on spheroidal celestial bodies.
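For instance, the great-circle distance between two points given by latitude and longitude is commonly computed with the haversine formula. The following Python sketch assumes a spherical Earth of mean radius 6,371 km, which, as noted above, is only an approximation:

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle (haversine) distance in kilometres between two
    points given in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * radius * asin(sqrt(a))

# London to New York: roughly 5,570 km along the great circle.
print(great_circle_distance(51.51, -0.13, 40.71, -74.01))
```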
The equator of the idealized earth is a great circle and any meridian and its opposite meridian form a great circle. Another great circle is the one that divides the land and water hemispheres. A great circle divides the earth into two hemispheres and if a great circle passes through a point it must pass through its antipodal point.
The Funk transform integrates a function along all great circles of the sphere.
See also
Great ellipse
Rhumb line
References
External links
Great Circle – from MathWorld Great Circle description, figures, and equations. Mathworld, Wolfram Research, Inc. c1999
Great Circles on Mercator's Chart by John Snyder with additional contributions by Jeff Bryant, Pratik Desai, and Carl Woll, Wolfram Demonstrations Project.
Elementary geometry
Spherical trigonometry
Riemannian geometry
Circles
Spherical curves | Great circle | [
"Mathematics"
] | 871 | [
"Elementary mathematics",
"Elementary geometry",
"Circles",
"Pi"
] |
48,085 | https://en.wikipedia.org/wiki/Virtual%20management | Virtual management is the supervision, leadership, and maintenance of virtual teams—dispersed work groups that rarely meet face to face. As the number of virtual teams has grown, facilitated by the Internet, globalization, outsourcing, and remote work, the need to manage them has also grown. The challenging task of managing these teams has been made much easier by the availability of online collaboration tools, adaptive project management software, efficient time-tracking programs, and other related systems and tools. This article provides information concerning some of the important management factors involved with virtual teams, and the life cycle of managing a virtual team.
Due to developments in information technology within the workplace, along with a need to compete globally and address competitive demands, organizations have embraced virtual management structures. As in face-to-face teams, management of virtual teams is also a crucial component in the effectiveness of the team. However, when compared to leaders of face-to-face teams, virtual team leaders face the following difficulties: (a) logistical problems, including coordination of work across different time zones and physical distances; (b) interpersonal issues, including an ability to establish effective working relationships in the absence of frequent face-to-face communication; and (c) technological difficulties, including finding and learning to use appropriate technology. In global virtual teams, there is an additional dimension of cultural differences which impact on a virtual team's functioning.
Management factors
For the team to reap the benefits mentioned above, the manager considers the following factors.
Trust and Leader Effectiveness
A virtual team leader must ensure a feeling of trust among all team members—something all team members have an influence on and must be aware of. However, the team leader is responsible for this in the first place. Team leaders must ensure a sense of psychological safety within a team by allowing all members to speak honestly and directly, but respectfully, to each other.
For a team to succeed, the manager must schedule meetings to ensure participation. This carries over to the realm of virtual teams, but in this case these meetings are also virtual. Due to the difficulties of communicating in a virtual team, it is imperative that team members attend meetings. The first team meeting is crucial and establishes lasting precedents for the team. Furthermore, there are numerous features of a virtual team environment that may impact on the development of follower trust. The team members have to trust that the leader is allocating work fairly and evaluating team members equally.
An extensive study conducted over 8 years examined what factors increase leader effectiveness in virtual teams. One such factor is that virtual team leaders need to spend more time than conventional team counterparts being explicit about expectations. This is due to the patterns of behavior and dynamics of interaction which are unfamiliar. Moreover, even in information rich virtual teams using video conferencing, it is hard to replicate the rapid exchange of information and cues available in face-to-face discussions. To develop role clarity within virtual teams, leaders should focus on developing: (a) clear objectives and goals for tasks; (b) comprehensive milestones for deliverables; and (c) communication channels for seeking feedback on unclear role guidance.
When determining an effective approach to leadership for a culturally diverse team, several styles are available: directive (ranging from directive to participatory), transactional (reward-based), or transformational influence. Leadership must ensure effective communication and understanding, clear and shared plans and task assignments, and a collective sense of belonging in the team. Further, the role of a team leader is to coordinate tasks and activities, motivate team members, facilitate collaboration, and resolve conflicts when needed. This shows that the team leader's role is crucial in effective virtual team management and in creating a knowledge-sharing environment.
Presence and Instruction
Virtual team leaders must become virtually present so they can closely monitor team members and note changes that might affect their ability to undertake their tasks. Due to the distributed nature of virtual teams, team members have less awareness of the wider situation of the team or dynamics of the overall team environment. Consequently, as situations change in a virtual team environment, such as adjustments to task requirements, modification of milestones, or changes to the goals of the team, it is important that leaders monitor followers to ensure they are aware of these changes and make amendments as required. The leaders of virtual teams do not possess the same powers of physical observation, and have to be creative in setting up structures and processes so that variations from expectations can be observed well virtually (for instance, virtual team leaders have to sense when "electronic" silence means acquiescence rather than inattention). At the same time, leaders of virtual teams cannot assume that members are prepared for virtual meetings and also have to ensure that the unique knowledge of each distributed person on the virtual team is fully utilized. Virtual team leaders should be aware that information overload may result in situations when a leader has provided too much information to a team member.
Virtuality
Finally, when examining virtual teams, it is crucial to consider that they differ in terms of their virtuality. Virtuality refers to a continuum of how "virtual" a team is. There are three predominant factors that contribute to virtuality, namely: (a) the richness of communication media; (b) distance between team members, both in time zones and geographical dispersion; and (c) organizational and cultural diversity.
Detriments
In the field of managing virtual research and development (R&D) teams, certain detriments have arisen in the management decisions made when leading a team. The first of these detriments is the lack of potential for radical innovation; this is brought about by the lack of affinity with certain technologies or processes, which causes a decrease in certainty about the feasibility of the execution. As a result, virtual R&D teams focus on incremental innovations. The second detriment is that the nature of the project may need to change. Depending on how interdependent each step is, the ability of a virtual team to successfully complete the project varies at each step. Thirdly, the sharing of knowledge, which was identified above as an important ingredient in managing a virtual team, becomes even more important, albeit difficult. There is some knowledge and information that is simple and easy to explain and share, but there is other knowledge that may be more content- or domain-specific and not so easy to explain. In a face-to-face group this can be done by walking a team member through the topic slowly during a lunch break, but in a virtual team this is no longer possible and the information is at risk of being misunderstood, leading to setbacks in the project. Finally, the distribution and bundling of resources is also very much altered by the move from collocation to virtual space. Where once the team was all in one place and the resources could be split there as needed, now the team can be anywhere, and the same resources still need to get to the correct people. This takes time, effort, and coordination to avoid potential setbacks or conflicts.
Life Cycle
To effectively use the management factors described above, it is important to know when in the life cycle of a virtual team they would be most useful. The life cycle of virtual team management includes five stages:
Preparations
Launch
Performance management
Team development
Disbanding
Preparations
The initial task during the implementation of a team is the definition of the general purpose of the team together with the determination of the level of virtuality that might be appropriate to achieve these goals. These decisions are usually determined by strategic factors such as mergers, increase of the market span, cost reductions, flexibility and reactivity to the market, etc. Management-related activities that should take place during the preparation phase include mission statement formulation, personnel selection, task design, rewards system design, choice of appropriate technology, and organizational integration.
In regards to personnel selection, virtual teams have an advantage. To maximize outcomes, management wants the best team it can have. Before virtual teams, they did this by gathering the "best available" workers and forming a team. These teams did not contain the best workers of the field, because they were busy with their own projects, or were too far away to meet the group. With virtual teams, managers can select personnel from anywhere in the world, and so from a wider pool.
Launch
It is highly recommended that, at the beginning of virtual teamwork, all members meet each other face to face. Crucial elements of such a “kick-off” workshop are getting acquainted with the other team members, clarifying the team goals, clarifying the roles and functions of the team members, information about and training in how communication technologies can be used efficiently, and developing general rules for the teamwork. As a consequence, “kick-off” workshops are expected to promote clarification of team processes, trust building, building of a shared interpretive context, and high identification with the team.
Getting acquainted, goal clarification and development of intra-team rules should also be accomplished during this phase. Initial field data that compare virtual teams with and without such “kick-off” meetings confirm a generally positive effect on team effectiveness, although more differentiated research is necessary. Experimental studies demonstrate that getting acquainted before the start of computer-mediated work facilitates cooperation and trust.
One of the manager's roles during launch is to create activities or events that allow for team building. These kickoff events should serve three major goals: everyone on the team is well versed in the technology involved, everyone knows what is expected of them and when it is expected, and finally have everyone get to know one another. By meeting all three goals the virtual team may be far more successful, and it lightens everyone's load.
Performance management
After the launch of a virtual team, work effectiveness and a constructive team climate have to be maintained using performance management strategies. These comprehensive management strategies arise from the agreed-upon difficulty of working in virtual teams. Research shows that constructs and expectations of team membership, leadership, goal setting, social loafing and conflict differ across cultural groups and therefore strongly affect team performance. Early in the team formation process, one thing to agree on within a team is the meaning of leadership and role differentiation for the team leader and other team members. To apply this, the leader must show active leadership to create a shared conceptualization of the team's meaning, focus and function.
The following discussion is again restricted to issues on which empirical results are already available. These issues are leadership, communication within virtual teams, team members' motivation, and knowledge management.
Leadership is a central challenge in virtual teams. Particularly, all kinds of direct control are difficult when team managers are not at the same location as the team members. As a consequence, delegative management principles are considered that shift parts of classic managerial functions to the team members. However, team members only accept and fulfill such managerial functions when they are motivated and identify with the team and its goals, which is again more difficult to achieve in virtual teams. Next, empirical results on three leadership approaches are summarized that differ in the degree of autonomy of the team members: Electronic monitoring as an attempt to realize directive leadership over distance, management by objectives (MBO) as an example for delegative leadership principles, and self-managing teams as an example for rather autonomous teamwork.
One way to maintain control over a virtual team is through motivators and incentives. Both are common techniques implemented by managers for collocated teams, but with slight adjustments they can be used effectively for virtual teams as well. A commonly held belief is that work done online is not particularly important or impactful. This belief can be changed by notifying employees that their work is being sent to the managers. This attaches the importance of career prospects to the work, and makes it more meaningful for the workers.
Communication processes are perhaps the most frequently investigated variables relevant to the regulation of virtual teamwork. By definition, communication in virtual teams is predominantly based on electronic media such as e-mail, telephone, video-conference, etc. The main concern here is that electronic media reduce the richness of information exchange compared to face-to-face communication. This difference in richness of information is an idea shared by multiple researchers, and there are some methods to compensate for the reduction created by working in a virtual environment. One such method is to use the anonymity provided by working digitally. It lets people share concerns without worrying about being identified. This serves to overcome the lack of richness by providing a safe method to honestly provide feedback and information. Predominant research issues have been conflict escalation and disinhibited communication (“flaming”), the fit between communication media and communication contents, and the role of non-job-related communication. These research issues revolve around the idea that people become more hostile over a virtual medium, making the working environment unhealthy. These findings do not carry over directly to virtual teams, because virtual team members expect to work together for longer, and the level of anonymity differs from that of a one-off online interaction. One of the important needs for successful communication is the ability to bring every member of the group together repeatedly over time. Effective dispersed groups show spikes in presence during communication over time, while ineffective groups do not have such dramatic spikes.
For the management of motivational and emotional processes, three groups of such processes have been addressed in empirical investigations so far: motivation and trust, team identification and cohesion, and satisfaction of the team members. Since most of these variables originate within the person, they can vary considerably among the members of a team, requiring appropriate aggregation procedures for multilevel analyses (e.g., motivation may be mediated by interpersonal trust).
Systematic research is needed on the management of knowledge and the development of shared understanding within the teams, particularly since theoretical analyses sometimes lead to conflicting expectations. The development of such “common ground” might be particularly difficult in virtual teams because sharing of information and the development of a “transactive memory” (i.e., who knows what in the team) is harder due to the reduced amount of face-to-face communication and the reduced information about individual work contexts.
Team development
Virtual teams can be supported by personnel and team development interventions. The development of such training concepts should be based on an empirical assessment of the needs and/or deficits of the team and its members, and the effectiveness of the trainings should be evaluated empirically. The steps of team developments include assessment of needs/deficits, individual and team training, and evaluation of training effects.
One such development intervention is to have the virtual team self-facilitate. Normally, a team brings in an outside facilitator to ensure that the team is correctly using the technology. This is a costly method of developing the team, but virtual teams can self-facilitate. This lessens the need for an outside facilitator, and saves the team time, effort, and resources.
Disbanding and reintegration
Finally, the disbanding of virtual teams and the re-integration of the team members is an important issue that has been neglected not only in empirical but also in most of the conceptual work on virtual teams. However, particularly when virtual project teams have only a short life-time and reform again quickly, careful and constructive disbanding is mandatory to maintain high motivation and satisfaction among the employees. Members of transient project teams anticipate the end of the teamwork in the foreseeable future, which in turn overshadows the interaction and shared outcomes. The final stage of group development should be a gradual emotional disengagement that includes both sadness about separation and (at least in successful groups) joy and pride in the achievements of the team.
Pandemic factor
The COVID-19 pandemic further popularized the virtual team concept, although many organizations were already actively shifting toward remote work before it. According to market sources, around 80% of global corporate remote work policies shifted to virtual and mixed forms of team collaboration during the pandemic. With worldwide lockdowns and challenging time management, remote work became a necessity for the majority, and virtual management has become a way of life for business owners and leaders.
See also
Distributed development
Fractional executive
Gig economy
Interim Management
Outline of management
Virtual business
Virtual community of practice
Virtual team
Virtual volunteering
References
External links
Managing the virtual realm, by Denise Dubie, Network World
Dr Alister Jury's research into Leadership Effectiveness within Virtual Teams (University of Queensland)
Information technology management
Human resource management
Management by type | Virtual management | [
"Technology"
] | 3,347 | [
"Information technology",
"Information technology management"
] |
48,113 | https://en.wikipedia.org/wiki/SWIFT | The Society for Worldwide Interbank Financial Telecommunication (Swift), legally S.W.I.F.T. SC, is a cooperative established in 1973 in Belgium and owned by the banks and other member firms that use its service. SWIFT provides the main messaging network through which international payments are initiated. It also sells software and services to financial institutions, mostly for use on its proprietary "SWIFTNet", and assigns ISO 9362 Business Identifier Codes (BICs), popularly known as "Swift codes".
As of 2018, around half of all high-value cross-border payments worldwide used the Swift network, and in 2015, Swift linked more than 11,000 financial institutions in over 200 countries and territories, who were exchanging an average of over 32 million messages per day (compared to an average of 2.4 million daily messages in 1995).
Swift is headquartered in La Hulpe near Brussels. It hosts an annual conference, called Sibos, specifically aimed at the financial services industry.
History
Before SWIFT's establishment, international financial transactions were communicated over Telex, a public system involving manual writing and reading of messages. SWIFT was set up out of fear of what might happen if a single private and fully American entity controlled global financial flows; before SWIFT, that entity was the First National City Bank (FNCB) of New York, later Citibank. In response to FNCB's protocol, FNCB's competitors in the US and Europe pushed an alternative "messaging system that could replace the public providers and speed up the payment process".
SWIFT was founded in Brussels on 3 May 1973. Individuals who played a key role in its creation included bankers Jan Kraa (at AMRO Bank) and François Dentz (at the Banque de l'Union Parisienne) as well as Carl Reuterskiöld and Bessel Kok, who became respectively its first two chairmen and chief executives. It was initially supported by 239 banks in 15 countries. It soon started to establish common standards for financial transactions and a shared data processing system and worldwide communications network designed by Logica and developed by the Burroughs Corporation. Fundamental operating procedures and rules for liability were established in 1975, and the first message was ceremonially sent by Prince Albert of Belgium in 1977.
SWIFT's first non-European operations centre was inaugurated by Governor John N. Dalton of Virginia in 1979. In 1989 SWIFT completed a monumental new head office building in La Hulpe, designed by Ricardo Bofill Taller de Arquitectura.
Ownership and governance
SWIFT's shareholding structure is adjusted every three years in proportion to volumes of activity incurred by the members, ensuring that the most active members get the most voice irrespective of geography; additional rules are aimed at ensuring some geographical diversity within the board of directors. The 25 directors are elected by the shareholders, on three-year terms with the renewal of one-third of the board every year; all directors are member representatives.
As of May 2024, the members directly represented on the board of directors were JPMorgan Chase (chair), Lloyds Bank (deputy chair), Bank of China, BNP Paribas, BPCE, Citi, Clearstream, Commerzbank, Commonwealth Bank of Australia, Deutsche Bank, Euroclear, FirstRand, HSBC, ING, Intesa Sanpaolo, KBC, MUFG, NatWest, Nordea, Royal Bank of Canada, Santander, SEB, UBS (2 representatives following the acquisition of Credit Suisse), as well as the Association of Banks in Singapore.
Operations
Swift acts as a carrier of the "messages containing the payment instructions between financial institutions involved in a transaction". However, the organisation does not manage accounts on behalf of individuals or financial institutions, and it does not hold funds from third parties. It also does not perform clearing or settlement functions. After payment has been initiated, it must be settled through a payment system, such as T2 in Europe. In the context of cross-border transactions, this step often takes place through correspondent banking accounts that financial institutions have with each other.
SWIFT means several things in the financial world:
a secure network for transmitting messages between financial institutions;
a set of syntax standards for financial messages (for transmission over SWIFTNet or any other network)
a set of connection software and services allowing financial institutions to transmit messages over SWIFT network.
Under 3 above, SWIFT provides turn-key solutions for members, consisting of linkage clients to facilitate connectivity to the SWIFT network and CBTs or "computer-based terminals" which members use to manage the delivery and receipt of their messages. Some of the more well-known interfaces and CBTs provided to their members are:
SWIFTNet Link (SNL) software which is installed on the SWIFT customer's site and opens a connection to SWIFTNet. Other applications can only communicate with SWIFTNet through the SNL.
Alliance Gateway (SAG) software with interfaces (e.g., RAHA = Remote Access Host Adapter), allowing other software products to use the SNL to connect to SWIFTNet
Alliance WebStation (SAB) desktop interface for SWIFT Alliance Gateway with several usage options:
administrative access to the SAG
direct connection SWIFTNet by the SAG, to administrate SWIFT Certificates
so-called Browse connection to SWIFTNet (also by SAG) to use additional services, for example, the Eurosystem's T2
Alliance Access (SAA) and Alliance Messaging Hub (AMH) are the main messaging software applications by SWIFT, which allow message creation for FIN messages, routing and monitoring for FIN and MX messages. The main interfaces are FTA (files transfer automated, not FTP) and MQSA, a WebSphere MQ interface.
The Alliance Workstation (SAW) is the desktop software for administration, monitoring and FIN message creation. Since Alliance Access is not yet capable of creating MX messages, Alliance Messenger (SAM) has to be used for this purpose.
Alliance Web Platform (SWP) as new thin-client desktop interface provided as an alternative to the existing Alliance WebStation, Alliance Workstation (soon) and Alliance Messenger.
Alliance Integrator is built on Oracle's Java Caps which enables customer's back-office applications to connect to Alliance Access or Alliance Entry.
Alliance Lite2 is a secure and reliable, cloud-based way to connect to the SWIFT network which is a light version of Alliance Access specifically targeting customers with low volume of traffic.
Services
There are four key areas that SWIFT services fall under in the financial marketplace: securities, treasury & derivatives, trade services, and payments & cash management.
Securities
SWIFTNet FIX (obsolete)
SWIFTNet Data Distribution
SWIFTNet Funds
SWIFTNet Accord for Securities (end of life October 2017)
Treasury and derivatives
SWIFTNet Accord for Treasury (end of life October 2017)
SWIFTNet Affirmations
SWIFTNet CLS Third Party Service
Cash management
SWIFTNet Bulk Payments
SWIFTNet Cash Reporting
SWIFTNet Exceptions and Investigations
Trade services
SWIFTNet Trade Services Utility
SWIFTRef, the global payment reference data utility, is SWIFT's unique reference data service. SWIFTRef sources data directly from data originators, including central banks, code issuers and banks, making it easy for issuers and originators to maintain data regularly and thoroughly. SWIFTRef constantly validates and cross-checks data across the different data sets.
Operations centres
The SWIFT secure messaging network is run from three data centres, located in the United States, the Netherlands, and Switzerland. These centres share information in near real-time. In case of a failure in one of the data centres, another is able to handle the traffic of the complete network. SWIFT uses submarine communications cables to transmit its data.
Shortly after opening its third data centre in Switzerland in 2009, SWIFT introduced a new distributed architecture with two messaging zones, European and Trans-Atlantic, so data from European SWIFT members no longer mirrored the U.S. data centre. European zone messages are stored in the Netherlands and in part of the Swiss operating centre; Trans-Atlantic zone messages are stored in the United States and in another part of the Swiss operating centre that is segregated from the European zone messages. Countries outside of Europe were by default allocated to the Trans-Atlantic zone but could choose to have their messages stored in the European zone.
SWIFTNet network
SWIFT moved to its current IP network infrastructure, known as SWIFTNet, from 2001 to 2005, providing a total replacement of the previous X.25 infrastructure. The process involved the development of new protocols that facilitate efficient messaging, using existing and new message standards. The adopted technology chosen to develop the protocols was XML, which now provides a wrapper around all messages legacy or contemporary. The communication protocols can be broken down into:
InterAct
SWIFTNet InterAct Realtime
SWIFTNet InterAct Store and Forward
FileAct
SWIFTNet FileAct Realtime
SWIFTNet FileAct Store and Forward
Browse
SWIFTNet Browse
SWIFT provides a centralized store-and-forward mechanism, with some transaction management. For bank A to send a message to bank B with a copy or authorization involving institution C, it formats the message according to standards and securely sends it to SWIFT. SWIFT guarantees its secure and reliable delivery to B after the appropriate action by C. SWIFT guarantees are based primarily on high redundancy of hardware, software, and people.
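As a purely conceptual illustration of the store-and-forward pattern described above (a toy model, not SWIFT's actual implementation or API):

```python
from collections import defaultdict, deque

class StoreAndForwardHub:
    """Toy store-and-forward hub: messages are queued per recipient and
    delivered when the recipient next polls, so sender and recipient
    never need to be online at the same time."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def send(self, sender, recipient, payload):
        # The hub stores the message durably on behalf of the recipient.
        self.queues[recipient].append((sender, payload))

    def receive(self, recipient):
        # The recipient drains its queue in arrival order.
        while self.queues[recipient]:
            yield self.queues[recipient].popleft()

hub = StoreAndForwardHub()
hub.send("BANKA", "BANKB", "payment instruction")
for sender, message in hub.receive("BANKB"):
    print(sender, "->", message)
```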
During 2007 and 2008, the entire SWIFT network migrated its infrastructure to a new protocol called SWIFTNet Phase 2. The main difference between Phase 2 and the former arrangement is that Phase 2 requires banks connecting to the network to use a Relationship Management Application (RMA) instead of the former bilateral key exchange (BKE) system. According to SWIFT's public information database on the subject, RMA software should eventually prove more secure and easier to keep up-to-date; however, converting to the RMA system meant that thousands of banks around the world had to update their international payment systems to comply with the new standards. RMA completely replaced BKE on 1 January 2009.
Standards
SWIFT has become the industry standard for syntax in financial messages. Messages formatted to SWIFT standards can be read and processed by many well-known financial processing systems, whether or not the message travelled over the SWIFT network. SWIFT cooperates with international organizations to define standards for message format and content. SWIFT is also a registration authority (RA) for the following ISO standards:
ISO 9362: 1994 Banking – Banking telecommunication messages – Bank identifier codes
ISO 10383: 2003 Securities and related financial instruments – Codes for exchanges and market identification (MIC)
ISO 13616: 2003 IBAN Registry
ISO 15022: 1999 Securities – Scheme for messages (Data Field Dictionary) (replaces ISO 7775)
ISO 20022-1: 2004 and ISO 20022-2: 2007 Financial services – Universal Financial Industry message scheme
In RFC 3615 urn:swift: was defined as Uniform Resource Names (URNs) for SWIFT FIN.
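As an illustration of the ISO 9362 structure (a 4-letter institution code, a 2-letter ISO country code, a 2-character location code, and an optional 3-character branch code), the following Python sketch checks whether a string is structurally a BIC. It is a simplified format check only; it does not verify that the code is actually registered, and real BICs carry further restrictions on individual characters:

```python
import re

# ISO 9362 shape: 4 letters (institution) + 2 letters (country)
# + 2 alphanumerics (location) + optional 3 alphanumerics (branch).
BIC_PATTERN = re.compile(r'^[A-Z]{4}[A-Z]{2}[A-Z0-9]{2}([A-Z0-9]{3})?$')

def is_well_formed_bic(code):
    """Return True if `code` is structurally an 8- or 11-character BIC."""
    return bool(BIC_PATTERN.match(code.upper()))

print(is_well_formed_bic("DEUTDEFF"))     # True: 8-character BIC
print(is_well_formed_bic("DEUTDEFF500"))  # True: 11-character BIC
print(is_well_formed_bic("DEUT-DEFF"))    # False: invalid character
```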
Supervision
SWIFT is not a payment system and thus neither regulated nor supervised as such, but is nevertheless deemed to be systemically important and thus under the "oversight" of public authorities. In 1998, the so-called Group of Ten central banks (those of Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Sweden, Switzerland, the United Kingdom, the Federal Reserve Board and the Federal Reserve Bank of New York for the U.S. and the European Central Bank) started acting as joint overseers, with the National Bank of Belgium (NBB) in a lead role. The oversight focuses primarily on systemic risk, confidentiality, infrastructure security, and business continuity. It is formalized in bilateral documents between the NBB and SWIFT on the one hand, and between the NBB and each of the other G10 central banks on the other hand. In 2018, the International Monetary Fund recommended that "the National Bank of Belgium should consider enhancing oversight with additional regulatory and supervisory powers."
In 2012, this framework was complemented by a "SWIFT Oversight Forum" including additional central banks. As of 2024, in addition to the G10 central banks, the SWIFT Oversight Forum included the national central banks of Argentina, Australia, Brazil, China, Hong Kong, India, Indonesia, Korea, Mexico, Russia, Saudi Arabia, Singapore, South Africa, Spain, and Turkey. According to SWIFT, the Oversight Forum "provides a forum for the G-10 central banks to share information on Swift oversight activities with a wider group of central banks."
Alternatives
Purported alternatives to the SWIFT system include:
CIPS: sponsored by China, for RMB-related deals. 1,467 financial institutions in 111 countries and regions have connected to the system, and its business reach covers more than 4,200 banking institutions in 182 countries and regions around the world.
SFMS: sponsored by India
SPFS: developed by the Russian Federation
Former: INSTEX, sponsored by the European Union and limited to non-USD transactions for trade with Iran; largely unused and ineffective, it was liquidated in March 2023.
Leadership
Chair
Johannes Kraa (AMRO Bank, Dutch), 1973–1974
François Dentz (Crédit du Nord, French), 1974–1976
Helmer Hasselblad (Swedish), 1976–1984
W. Robert Moore (Chemical Bank, American), 1984–1989
Richard Fröhlich (Austrian), 1989–1992
Eric C. Chilton (Barclays, British), 1992–1996
Jean-Marie Weydert (Société Générale, French), 1996–2000
Jaap Kamp (ABN AMRO, Dutch), 2000–2006
Yawar Shah (JPMorgan then Citi, American), 2006–2022
Mark Buitenhek (ING, Dutch), acting 2022–2023
Graeme Munro (JPMorgan, British), since 2023
Chief Executive Officer
Carl Reuterskiöld, 1973–1983
Bessel Kok, 1983–1991
Jacques Cerveau (interim CEO), 1991
Leonard (Lennie) Schrank, 1992–2007
Lazaro Campos, 2007–2012
Gottfried Leibbrandt, 2012–2019
Javier Pérez-Tasso, since 2019
Controversy
Inefficiency
Swift has been criticised for inefficiency. In 2018, the London-based Financial Times noted that transfers frequently "pass through multiple banks before reaching their final destination, making them time-consuming, costly and lacking transparency on how much money will arrive at the other end". Swift has since introduced an improved service called "Global Payments Innovation" (GPI), claiming it was adopted by 165 banks and was completing half its payments within 30 minutes. The new standard, which included Swift Go, was supposed to be used for receiving and transferring low-value international payments. One of the significant changes was that the transaction amount would not differ between start and end. However, uptake was mixed. For instance, Alisherov Eraj, Alif Bank Treasury Department Swift Transfers & Banking Relationship Expert in the Republic of Tajikistan, describes the leading cause of the late Swift Go adoption in Tajikistan as the core banking system itself. To connect to Swift Go, he adds, banking system interfaces needed to be upgraded and integrated with their software to be fully compatible; this hindered many banks from adopting the technology earlier.
U.S. government surveillance
A series of articles published on 23 June 2006 in The New York Times, The Wall Street Journal, and the Los Angeles Times revealed a program, named the Terrorist Finance Tracking Program, which the US Treasury Department, Central Intelligence Agency (CIA), and other United States governmental agencies initiated after the 11 September attacks to gain access to the SWIFT transaction database.
After the publication of these articles, SWIFT quickly came under pressure for compromising the data privacy of its customers by allowing governments to gain access to sensitive personal information. In September 2006, the Belgian government declared that these SWIFT dealings with American governmental authorities were a breach of Belgian and European privacy laws.
In response, and to satisfy members' concerns about privacy, SWIFT began a process of improving its architecture by implementing a distributed architecture with a two-zone model for storing messages.
Concurrently, the European Union negotiated an agreement with the United States government to permit the transfer of intra-EU SWIFT transaction information to the United States under certain circumstances. Because of concerns about its potential contents, the European Parliament adopted a position statement in September 2009, demanding to see the full text of the agreement and asking that it be fully compliant with EU privacy legislation, with oversight mechanisms emplaced to ensure that all data requests were handled appropriately. An interim agreement was signed without European Parliamentary approval by the European Council on 30 November 2009, the day before the Lisbon Treaty—which would have prohibited such an agreement from being signed under the terms of the codecision procedure—formally came into effect. While the interim agreement was scheduled to come into effect on 1 January 2010, the text of the agreement was classified as "EU Restricted" until translations could be provided in all EU languages and published on 25 January 2010.
On 11 February 2010, the European Parliament decided to reject the interim agreement between the EU and the US by 378 to 196 votes. One week earlier, the parliament's civil liberties committee had already rejected the deal, citing legal reservations.
In March 2011, it was reported that two mechanisms of data protection had failed: EUROPOL released a report complaining that requests for information from the US had been too vague (making it impossible to make judgments on validity) and that the guaranteed right for European citizens to know whether their information had been accessed by US authorities had not been put into practice.
Der Spiegel reported in September 2013 that the National Security Agency (NSA) widely monitors banking transactions via SWIFT, as well as credit card transactions. The NSA intercepted and retained data from the SWIFT network used by thousands of banks to securely send transaction information. SWIFT was named as a "target", according to documents leaked by Edward Snowden. The documents revealed that the NSA spied on SWIFT using a variety of methods, including reading "SWIFT printer traffic from numerous banks". In April 2017, a group known as the Shadow Brokers released files allegedly from the NSA which indicate that the agency monitored financial transactions made through SWIFT.
SWIFT and sanctions
Iran
In January 2012, the advocacy group United Against Nuclear Iran (UANI) implemented a campaign calling on SWIFT to end all relations with Iran's banking system, including the Central Bank of Iran. UANI asserted that Iran's membership in SWIFT violated US and EU financial sanctions against Iran as well as SWIFT's own corporate rules.
Consequently, in February 2012, the U.S. Senate Banking Committee unanimously approved sanctions against SWIFT aimed at pressuring it to terminate its ties with blacklisted Iranian banks. Expelling Iranian banks from SWIFT would potentially deny Iran access to billions of dollars in revenue transacted via SWIFT, though not revenue moved through informal value transfer systems (IVTS). Mark Wallace, president of UANI, praised the Senate Banking Committee.
Initially SWIFT denied that it was acting illegally, but later said that "it is working with U.S. and European governments to address their concerns that its financial services are being used by Iran to avoid sanctions and conduct illicit business". Targeted banks would be—amongst others—Saderat Bank of Iran, Bank Mellat, Post Bank of Iran and Sepah Bank. On 17 March 2012, following an agreement two days earlier between all 27 member states of the Council of the European Union and the council's subsequent ruling, SWIFT disconnected all Iranian banks that had been identified as institutions in breach of current EU sanctions from its international network and warned that even more Iranian financial institutions could be disconnected from the network.
In February 2016, most Iranian banks reconnected to the network following the lift of sanctions due to the Joint Comprehensive Plan of Action.
Israel
In 2014, SWIFT rejected calls from pro-Palestinian activists to revoke Israeli banks' access to its network owing to the Israeli occupation of Palestinian territory.
Russia and Belarus
Similarly, in August 2014 the UK planned to press the EU to block Russian use of SWIFT as a sanction due to Russian military intervention in Ukraine. However, SWIFT refused to do so. SPFS, a Russian alternative to SWIFT, was developed by the Central Bank of Russia as a backup measure.
During the prelude to the 2022 Russian invasion of Ukraine, the United States developed preliminary possible sanctions against Russia, but excluded banning Russia from SWIFT. Following the 2022 Russian invasion of Ukraine, the foreign ministers of the Baltic states Lithuania, Latvia, and Estonia called for Russia to be cut off from SWIFT. However, other EU member states were reluctant, both because European lenders held most of the nearly $30 billion in foreign banks' exposure to Russia and because Russia had developed the SPFS alternative. The European Union, United Kingdom, Canada, and the United States finally agreed to remove a few Russian banks from the SWIFT messaging system in response to the 2022 Russian invasion of Ukraine; the governments of France, Germany, Italy and Japan individually released statements alongside the EU.
On 20 March 2023, Russia was banned from SWIFT.
The European Union issued its first set of sanctions against Belarus on 27 February 2022, banning certain categories of Belarusian items in the EU, including timber, steel, mineral fuels and tobacco. After the Lithuanian prime minister proposed disconnecting Belarus from SWIFT, the European Union, which does not recognise Lukashenko as the legitimate President of Belarus, started to plan an extension of the sanctions already issued against Russian entities and top officials to its ally.
Security
In 2016 an $81 million theft from the Bangladesh central bank via its account at the New York Federal Reserve Bank was traced to hacker penetration of SWIFT's Alliance Access software, according to a New York Times report. It was not the first such attempt, the society acknowledged, and the security of the transfer system was undergoing new examination accordingly. Soon after the reports of the theft from the Bangladesh central bank, a second, apparently related, attack was reported to have occurred at a commercial bank in Vietnam.
Both attacks involved malware written to both issue unauthorized SWIFT messages and to conceal that the messages had been sent. After the malware sent the SWIFT messages that stole the funds, it deleted the database record of the transfers and then took further steps to prevent confirmation messages from revealing the theft. In the Bangladeshi case, the confirmation messages would have appeared on a paper report; the malware altered the paper reports when they were sent to the printer. In the second case, the bank used a PDF report; the malware altered the PDF viewer to hide the transfers.
In May 2016, Banco del Austro (BDA) in Ecuador sued Wells Fargo after Wells Fargo honoured $12 million in fund transfer requests that had been placed by thieves. In this case, the thieves sent SWIFT messages that resembled recently cancelled transfer requests from BDA, with slightly altered amounts; the reports do not detail how the thieves gained access to send the SWIFT messages. BDA asserts that Wells Fargo should have detected the suspicious SWIFT messages, which were placed outside of normal BDA working hours and were of an unusual size. Wells Fargo claims that BDA is responsible for the loss, as the thieves gained access to the legitimate SWIFT credentials of a BDA employee and sent fully authenticated SWIFT messages.
In the first half of 2016, an anonymous Ukrainian bank and others—even "dozens" that are not being made public—were variously reported to have been "compromised" through the SWIFT network and to have lost money.
In March 2022, Swiss newspaper Neue Zürcher Zeitung reported about the increased security precautions by the State Police of Thurgau at the SWIFT data centre in Diessenhofen. After most of the Russian banks had been excluded from the private payment system, the risk of sabotage was considered higher. Inhabitants of the town described the large complex as a "fortress" or "prison" where frequent security checks of the fenced property are conducted.
See also
Bilateral key exchange and the new Relationship Management Application (RMA)
BRICS PAY
Cross-Border Interbank Payment System (CIPS)
Electronic money
Indian Financial System Code (IFSC)
ISO 9362, the SWIFT/BIC code standard
ISO 15022
ISO 20022
Single Euro Payments Area (SEPA)
Sibos conference
TIPANET
Value transfer system
References
Further reading
Farrell, Henry; Newman, Abraham (2019). Of Privacy and Power: The Transatlantic Struggle over Freedom and Security. Princeton University Press.
External links
1973 establishments in Belgium
Financial markets software
Financial metadata
Financial services companies established in 1973
La Hulpe
Market data
Network architecture
Ricardo Bofill buildings
Cooperatives
Cooperatives in Europe
Cooperatives in Belgium | SWIFT | [
"Technology",
"Engineering"
] | 4,988 | [
"Network architecture",
"Market data",
"Data",
"Computer networks engineering"
] |
48,134 | https://en.wikipedia.org/wiki/Temple | A temple (from the Latin ) is a place of worship, a building used for spiritual rituals and activities such as prayer and sacrifice. By convention, the specially built places of worship of some religions are commonly called "temple" in English, while those of other religions are not, even though they fulfill very similar functions.
The religions for which the terms are used include the great majority of ancient religions that are now extinct, such as the Ancient Egyptian religion and the Ancient Greek religion. Among religions still active: Hinduism (whose temples are called Mandir or Kovil), Buddhism (whose temples are called Vihar), Sikhism (whose temples are called gurudwara), Jainism (whose temples are sometimes called derasar), Zoroastrianism (whose temples are sometimes called Agiary), the Baháʼí Faith (which are often simply referred to as Baháʼí House of Worship), Taoism (which are sometimes called Daoguan), Shinto (which are often called Jinja), Confucianism (which are sometimes called the Temple of Confucius).
Religions whose places of worship are generally not called "temples" in English include Christianity, which has churches, Islam with mosques, and Judaism with synagogues (although some of these use "temple" as a name).
The form and function of temples are thus very variable, though they are often considered by believers to be, in some sense, the "house" of one or more deities. Typically, offerings of some sort are made to the deity, and other rituals are enacted, and a special group of clergy maintain and operate the temple. The degree to which the whole population of believers can access the building varies significantly; often parts, or even the whole main building, can only be accessed by the clergy. Temples typically have a main building and a larger precinct, which may contain many other buildings or may be a dome-shaped structure, much like an igloo.
The word comes from Ancient Rome, where a constituted a sacred precinct as defined by a priest, or augur. It has the same root as the word "template", a plan in preparation for the building that was marked out on the ground by the augur.
Indian temples
Hindu temple
Hindu temples are known by many different names, varying on region and language, including Alayam, Mandir, Mandira, Ambalam, Gudi, Kavu, Koil, Kovil, Déul, Raul, Devasthana, Devalaya, Devayatan, Devakula, Devagiriha, Degul, Deva Mandiraya, and Devalayam. Hindu temple architecture is mainly divided into the Dravidian style of the south and the Nagara style of the north, with other regional styles.
The basic elements of the Hindu temple remain the same across all periods and styles. The most essential feature is the inner sanctuary, the garbhagriha or womb-chamber, where the primary murti or cult image of a deity is housed in a simple bare cell. Around this chamber there are often other structures and buildings, in the largest cases covering several acres. On the exterior, the garbhagriha is crowned by a tower-like shikhara, also called the vimana in the south. The shrine building may include an ambulatory for parikrama (circumambulation), one or more mandapas or congregation halls, and sometimes an antarala antechamber and porch between garbhagriha and mandapa.
A Hindu temple is a symbolic house, the seat and dwelling of Hindu gods. It is a structure designed to bring human beings and gods together according to Hindu faith. Inside its garbhagriha innermost sanctum, a Hindu temple contains a murti or Hindu god's image. Hindu temples are large and magnificent with a rich history. There is evidence of the use of sacred ground as far back as the Bronze Age and later during the Indus Valley civilization.
Outside of the Indian subcontinent (India, Bangladesh and Nepal), Hindu temples have been built in various countries around the world. Either following the historic diffusion of Hinduism across Asia (e.g. ancient stone temples of Cambodia and Indonesia), or following the migration of the Indian Hindus' diaspora, to Western Europe (esp. Great Britain), North America (the United States and Canada), as well as Australia, Malaysia and Singapore, Mauritius and South Africa.
Buddhist temples
Buddhist temples include the structures called stupa, wat and pagoda in different regions and languages. A Buddhist temple might contain a meditation hall hosting Buddharupa, or the image of Buddha, as the object of concentration and veneration during a meditation. The stupa domed structures are also used in a circumambulation ritual called Pradakshina.
Temples in Buddhism represent the pure land or pure environment of a Buddha. Traditional Buddhist temples are designed to inspire inner and outer peace.
Three types of structures are associated with the religious architecture of early Buddhism: monasteries (viharas), places to venerate relics (stupas), and shrines or prayer halls (chaityas, also called chaitya grihas), which later came to be called temples in some places. The pagoda is an evolution of the Indian stupas.
The initial function of a stupa was the veneration and safe-guarding of the relics of Gautama Buddha. The earliest archaeologically known example of a stupa is the relic stupa located in Vaishali, Bihar in India.
In accordance with changes in religious practice, stupas were gradually incorporated into chaitya-grihas (prayer halls). These are exemplified by the complexes of the Ajanta Caves and the Ellora Caves (Maharashtra). The Mahabodhi Temple at Bodh Gaya in Bihar is another well-known example.
As Buddhism spread, Buddhist architecture diverged in style, reflecting the similar trends in Buddhist art. Building form was also influenced to some extent by the different forms of Buddhism in the northern countries, practising Mahayana Buddhism in the main and in the south where Theravada Buddhism prevailed.
Jain temples
A Jain temple, called a Derasar, is the place of worship for Jains, the followers of Jainism. Some famous Jain temples are Shikharji, Palitana temples, Ranakpur Jain temple, Shravan Belgola, Dilwara Temples and Lal Mandir. Jain temples are built with various architectural designs. Jain temples in North India are completely different from the Jain temples in South India, which in turn are quite different from Jain temples in West India. Additionally, a manastambha (literally 'column of honor') is a pillar that is often constructed in front of Jain temples.
Sikh temples
A Sikh temple is called a gurdwara, literally the "doorway to the Guru". Its most essential element is the presence of the Guru, Guru Granth Sahib. The gurdwara has an entrance from all sides, signifying that they are open to all without any distinction whatsoever. The gurdwara has a Darbar Sahib where the Guru Granth Sahib is seen and a Langar where people can eat free food. A gurdwara may also have a library, nursery, and classroom.
Mesopotamian temples
The temple-building tradition of Mesopotamia derived from the cults of gods and deities in the Mesopotamian religion. It spanned several civilizations; from Sumerian, Akkadian, Assyrian, and Babylonian. The most common temple architecture of Mesopotamia is the structure of sun-baked bricks called a ziggurat, having the form of a terraced step pyramid with a flat upper terrace where the shrine or temple stood.
Egyptian temples
Ancient Egyptian temples were meant as places for the deities to reside on earth. Indeed, the term the Egyptians most commonly used to describe the temple building, ḥwt-nṯr, means 'mansion (or enclosure) of a god'.
A god's presence in the temple linked the human and divine realms and allowed humans to interact with the god through ritual. These rituals, it was believed, sustained the god and allowed it to continue to play its proper role in nature. They were, therefore, a key part of the maintenance of maat, the ideal order of nature and of human society in Egyptian belief. Maintaining maat was the entire purpose of Egyptian religion, and thus it was the purpose of a temple as well.
Ancient Egyptian temples were also of economic significance to Egyptian society. The temples stored and redistributed grain and came to own large portions of the nation's arable land (some estimate as much as 33% by the New Kingdom period). In addition, many of these Egyptian temples utilized the Tripartite Floor Plan in order to draw visitors to the center room.
In The Temple in Man, a work by R. A. Schwaller de Lubicz, the author explores the idea that Egyptian temples, particularly the Temple of Luxor, are metaphysical representations of the human body. Schwaller de Lubicz suggests that these temples reflect the cosmic and spiritual order through their proportions and design. The author argues that the ancient Egyptians embedded knowledge of sacred geometry and spiritual awakening into their architecture, and that the human body itself is a temple that mirrors the harmony of the universe. The work connects the metaphysical symbolism of the temples to esoteric concepts, showing how the architecture reflects human anatomy and cosmic laws.
Greco-Roman temples
Greek and Roman temples were originally built out of wood and mud bricks, but as the empires expanded, the temples grew to monumental size, made out of materials such as stone and marble on raised platforms. While the color has long since faded, the columns would have been painted in white, blue, red, and black. Above the columns would have been a sculpted or painted depiction of a myth or battle, with freestanding sculptures in the pediment triangles. The roofs were tiled and had sculptures of mythical animals or deities on the tops or corners. Greek temples also had several standard floor plans with very distinct column placement.
Located in the front of the temple were altars intended for sacrifices or offerings. Ouranic altars were usually square, lined with a metal pan for burnt offerings, and a flat top which was necessary for the ouranic gods to receive offerings. Chthonic altars, called bothros, were pits dug into the earth for liquid libations of animal sacrifices, milk, honey, and wine. The building which housed the cult statue or agalma in its cella was located in the center of the temple in Greek architecture, while in Rome, the cella was in the back. Greek temple architecture had a profound influence on ancient architectural traditions.
Greco-Roman temples were built facing eastward, utilizing the rising sun in morning rituals. The location where each temple was built depended on many factors, such as environment, myth, function, and divine experience. Most were built on sites associated with myths, or places where a god was believed to have performed a feat or to have founded a town or city. Many Roman temples had close associations with important events in Roman history, such as military victories. Temples in cities were often dedicated to the founding deity of the city, but also served as civic and social centers. The Temple of Saturn even held the state treasury and treasury offices in its basement.
European polytheistic temples
The Romans usually referred to a holy place of a pagan religion as fanum; in some cases this referred to a sacred grove, in others to a temple. Medieval Latin writers also sometimes used the word templum, previously reserved for temples of the ancient Roman religion. In some cases it is hard to determine whether a temple was a building or an outdoor shrine. For temple buildings of the Germanic peoples, the Old Norse term hof is often used.
Zoroastrian temples
A Zoroastrian temple may also be called a Dar-e-mehr or an Atashkadeh. A fire temple in Zoroastrianism is the place of worship for Zoroastrians. Zoroastrians revere fire in any form, and their temples contain an eternal flame, with the Atash Behram (Fire of Victory) as the highest grade of all, as it combines 16 different types of fire gathered in elaborate rituals.
In the Zoroastrian religion, fire (Atar), together with clean water (Aban), are agents of ritual purity. Clean, white ash for the purification ceremonies is "regarded as the basis of ritual life"; the ceremonies "are essentially the rites proper to the tending of a domestic fire, for the temple fire is that of the hearth fire raised to a new solemnity".
Chinese temples
Chinese temples refer to temples in accordance with Chinese culture, which serve as a house of worship for Chinese faiths, namely Confucianism, Taoism, Buddhism and Chinese folk religion. Chinese temples were born from the age-old religion and tradition of Chinese people since the ancient era of imperial China, thus they are usually built in typical classical Chinese architecture.
Other than the base constructed from an elevated platform of earth and stones, most parts of Chinese temples are made of timber carpentry, with parts of brick masonry and glazed ceramics for roofs and tile decorations. Typical Chinese temples have curved overhanging eaves and complicated carpentry of stacked roof construction. Chinese temples are known for their vivid colour and rich decorations. Their roofs are often decorated with mythical beasts, such as Chinese dragons and qilins, and sometimes also Chinese deities. Chinese temples can be found throughout Mainland China and Taiwan, and also where Chinese expatriate communities have settled abroad; thus Chinese temples can be found in Chinatowns worldwide.
Indonesian temples
Candi is an Indonesian term to refer to ancient temples. Before the rise of Islam, between the 5th and 15th centuries, Dharmic faiths (Hinduism and Buddhism) were the majority in the Indonesian archipelago, especially in Java and Sumatra. As a result, numerous Hindu temples, locally known as candi, were constructed and dominated the landscape of Java. The architecture follows the typical Indonesian architectural traditions based on Vastu Shastra. The temple layout, especially in the Central Java period, incorporated mandala temple plan arrangements and also the typical high towering spires of Hindu temples. The candi was designed to mimic Meru, the holy mountain and the abode of the gods. In contemporary Indonesian Buddhist perspective, candi also refers to a shrine, either ancient or new. Several contemporary viharas in Indonesia, for example, contain an actual-size replica or reconstruction of famous Buddhist temples, such as the replica of Pawon and Plaosan's (small) temples.
According to local beliefs, the Java valley had thousands of Hindu temples that co-existed with Buddhist temples, most of which were buried in the massive eruption of Mount Merapi in 1006 CE.
Mesoamerican temples
Temples of the Mesoamerican civilization usually took the shape of stepped pyramids with temples or shrines on top of the massive structure. They are more akin to the ziggurats of Mesopotamia than to Egyptian ones. A single or several flight(s) of steep steps from the base lead to the temple that stood on the plateau on top of the pyramid. The stone temple might be a square or a rounded structure with a door opening leading to a cella or inner sanctum. The plateau on top of the pyramid in front of the temple is where the ritualistic sacrifice took place.
Some classic Mesoamerican pyramids are adorned with stories about the feathered serpent Quetzalcoatl or Mesoamerican creation myths, written in the form of hieroglyphs on the rises of the steps of the pyramids, on the walls, and on the sculptures contained within. Notable examples include the Aztec Acatitlan and the Mayan Chichen Itza, Uxmal and Tikal.
Jewish synagogues and temples
In Judaism, the ancient Hebrew texts refer to a "sanctuary", "palace" or "hall" for each of the two ancient temples in Jerusalem, called in the Tanakh Beit YHWH, which translates literally as 'YHWH's House'. In English, "temple" is the normal term for them.
The Temple Mount in Jerusalem is the site where the First Temple of Solomon and the Second Temple were built. At the center of the structure was the Holy of Holies, where only the High Priest could enter. The Temple Mount is now the site of the Islamic edifice, the Dome of the Rock.
The Greek word synagogue came into use to describe Jewish (and Samaritan) places of worship during Hellenistic times and it, along with the Yiddish term shul, and the original Hebrew term Beit Knesset ('House of meeting') are the terms in most universal usage.
Since the 18th century, Jews in Western and Central Europe began to apply the name temple, borrowed from the French where it was used to denote all non-Catholic prayer houses, to synagogues. The term became strongly associated with Reform institutions, in some of which both congregants and outsiders associated it with the elimination of the prayers for the restoration of the Jerusalem Temple, though this was not the original meaning—traditional synagogues named themselves "temple" over a century before the advent of Reform, and many continued to do so after. In American parlance, temple is often synonymous with synagogue, especially for non-Orthodox congregations.
The term kenesa, from the Aramaic for 'assembly', is used to describe the places of worship of Karaite Jews.
An example of such a temple is the Sofia Synagogue in Bulgaria, the largest synagogue in Southeastern Europe and the third-largest in Europe.
Christian temples
Orthodox Christianity
The word temple is used frequently in the tradition of Eastern Christianity, particularly the Eastern Orthodox Church, where the principal words used for houses of worship are temple and church. The use of the word temple comes from the need to distinguish the building of the church from the church seen as the Body of Christ. In the Russian language (similar to other Slavic languages), while the general-purpose word for 'church' is tserkov, the term khram, 'temple', is used to refer to the church building as a temple of God (khram Bozhiy). The words church and temple are in this case interchangeable; however, the term church (tserkov) is far more common. The term temple (khram) is also commonly applied to larger churches. Some famous churches which are referred to as temples include the Hagia Sophia, Saint Basil's Cathedral, Alexander Nevsky Cathedral, Sofia, the Cathedral of Christ the Saviour and the Temple of Saint Sava in Belgrade, Serbia.
Catholicism
The word temple has traditionally been rarely used in the English-speaking Western Christian tradition. In Irish, some pre-schism churches use the word teampall. The usual word for church in the Hungarian language is templom, also deriving from the same Latin root. Spanish distinguishes between the temple being the physical building for religious activity, and the church being both the physical building for religious activity and also the congregation of religious followers.
The principal words typically used to distinguish houses of worship in Western Christian architecture are abbey, basilica, cathedral, chapel and church. The Catholic Church has used the word temple in reference to a place of worship on rare occasions. An example is the Roman Catholic Sagrada Familia Temple in Barcelona, Spain, and the Roman Catholic Basilique du Sacré-Cœur Temple in Paris, France. Another example is the Temple of Our Lady of the Pillar, a church in Guadalajara, Mexico.
Protestantism
Some Protestant churches use this term; above the main entrance of the Lutheran Gustav Vasa Church in Stockholm, Sweden, is a cartouche in Latin which reads "this temple (...) was constructed by king Oscar II."
Beginning in the late eighteenth century, following the Enlightenment, some Protestant denominations in France and elsewhere began to use the word temple to distinguish these spaces from Catholic churches. Evangelical and other Protestant churches make use of a wide variety of terms to designate their worship spaces, such as church, tabernacle or temple. Additionally some breakaway Catholic churches such as the Mariavite Church in Poland have chosen to also designate their central church building as a temple, as in the case of the Temple of Mercy and Charity in Płock.
Latter Day Saint movement
According to Latter Day Saints, in 1832, Joseph Smith received a revelation to restore the practice of temple worship, in a "house of the Lord". The Kirtland Temple was the first temple of the Latter-day Saint movement and the only one completed in Smith's lifetime, although the Nauvoo Temple was partially complete at the time of his death. The schisms stemming from a succession crisis have led to differing views about the role and use of temples between various groups with competing succession claims.
The Book of Mormon, which Latter Day Saints believe is a companion book of scripture with the Bible, refers to temple building in the ancient Americas by a group of people called the Nephites. Though Book of Mormon authors are not explicit about the practices in these Nephite temples, they were patterned "after the manner of the temple of Solomon" and served as gathering places for significant religious and political events (e.g. Mosiah 1–6; 3rd Nephi 11–26).
The Church of Jesus Christ of Latter-day Saints
The Church of Jesus Christ of Latter-day Saints is a prolific builder of temples. Latter-day Saint temples are reserved for performing and undertaking only the most holy and sacred covenants and special ordinances. They are distinct from meeting houses and chapels where weekly worship services are held. The temples are built and kept under strict sacredness and are not to be defiled. Thus, strict rules apply for entrance, including church membership and regular attendance. During the open-house period after its construction and before its dedication, a temple is open to the public for tours.
Other Latter Day Saint denominations
Various sects in the Latter Day Saint movement founded by Joseph Smith have temples.
The Church of Christ (Wightite), a Latter Day Saint denomination formed by Lyman Wight following the death of Joseph Smith, built the first Mormon temple west of the Mississippi in Zodiac, Texas, about three miles from Fredericksburg.
In 1990 or earlier, a temple was built in Ozumba, Mexico, by the Apostolic United Brethren.
On April 17, 1994, the Independence Temple in Independence, Missouri, was opened by the Community of Christ under then-church Prophet-President Wallace B. Smith. The Community of Christ also owned the original Kirtland Temple, dedicated in 1836 by the Church of the Latter Day Saints (later renamed the Church of Jesus Christ of Latter Day Saints), in Kirtland, Ohio. On March 5, 2024, the Church of Jesus Christ of Latter-day Saints announced it had purchased the temple.
In 2005, construction began on the YFZ Ranch Temple by the Fundamentalist Church of Jesus Christ of Latter-Day Saints. It is located just outside Eldorado in Schleicher County, Texas. However, as of April 2014, the State of Texas took physical and legal possession of the property, as it was used to "commit or facilitate certain criminal conduct."
A pyramid-shaped temple near Modena, Utah, was built by the Righteous Branch of the Church of Jesus Christ of Latter-day Saints.
Esoteric Christianity
Mount Ecclesia Esoteric Christian Temple of the Rosicrucian Fellowship with its round 12-sided building architecture set on top of a mesa and facing east, the rising Sun. This modern-day temple is ornamented with alchemical and astrological symbols.
Masonic temples
Freemasonry is a fraternal organization with its origins in the eighteenth century whose membership is held together by a shared set of moral and metaphysical ideals based on short role play narratives concerning the construction of King Solomon's Temple. Freemasons meet as a Lodge. Lodges meet in a Masonic Temple (in reference to King Solomon's Temple), Masonic Center or a Masonic Hall, such as Freemasons' Hall, London. Some confusion exists as Masons usually refer to a Lodge meeting as being in Lodge.
Others
Göbekli Tepe, located in southern Turkey, was built between the 8th and 10th millennium BCE. Its circular compounds on top of a tell are composed by massive T-shaped stone pillars decorated with abstract, enigmatic pictograms and animal reliefs.
Temples of Sheikh, ancient temples in Sheikh, Somalia
Temple of Yeha, the oldest standing structure in Yeha, Ethiopia; built around 700 BCE
In the Star Wars films, the Jedi Temple is located on Coruscant.
Wolmyeongdong Natural Temple, located in South Korea, was developed beginning in 1990 and continues to this day.
Pashupatinath is one of the most famous temples of Hindu religion, which is located at Kathmandu, Nepal.
Convention sometimes allows the use of temple in some of the following cases:
Baháʼí Faith temple (Mashriqu'l-Adhkárs or 'Houses of Worship').
Shrines of the traditional Chinese Ethnic Shenism are called miao, or ancestral hall in English. Joss house is an obsolete American term for such places of worship.
Confucian temple or Temple of Confucius.
Mankhim, the temple of the ethnic group the Rai, located at Aritar, Sikkim.
Shintoist jinja are normally called shrines in English in order to distinguish them from Buddhist temples (-tera, -dera).
Taoist temples and monasteries are called guan or daoguan (literally 'place of contemplation of the Tao') in Chinese, guan being the shortened form of daoguan.
See also
Balinese temple
Candi of Indonesia
Chinese pagoda
Chinese temple
Dravidian architecture
Jangam
List of temples of Tamil Nadu
Mandi (Mandaeism)
Mosque
National Temple of Divine Providence
Place of worship
References
Further reading
Hani, Jean, Le symbolisme du temple chrétien, G. Trédaniel (editor); [2. éd.] edition (1978), 207 pp.,
External links
Definition of 'temple' at the Online Etymology Dictionary
Comparison between Egyptian and Greek temples
Building types
Types of monuments and memorials
Sacral architecture
Religious buildings and structures | Temple | [
"Engineering"
] | 5,363 | [
"Sacral architecture",
"Architecture"
] |
48,144 | https://en.wikipedia.org/wiki/Microcomputer | A microcomputer is a small, relatively inexpensive computer having a central processing unit (CPU) made out of a microprocessor. The computer also includes memory and input/output (I/O) circuitry together mounted on a printed circuit board (PCB). Microcomputers became popular in the 1970s and 1980s with the advent of increasingly powerful microprocessors. The predecessors to these computers, mainframes and minicomputers, were comparatively much larger and more expensive (though indeed present-day mainframes such as the IBM System z machines use one or more custom microprocessors as their CPUs). Many microcomputers (when equipped with a keyboard and screen for input and output) are also personal computers (in the generic sense). An early use of the term "personal computer" in 1962 predates microprocessor-based designs. (See "Personal Computer: Computers at Companies" reference below). A "microcomputer" used as an embedded control system may have no human-readable input and output devices. "Personal computer" may be used generically or may denote an IBM PC compatible machine.
The abbreviation "micro" was common during the 1970s and 1980s, but has since fallen out of common usage.
Origins
The term microcomputer came into popular use after the introduction of the minicomputer, although Isaac Asimov used the term in his short story "The Dying Night" as early as 1956 (published in The Magazine of Fantasy and Science Fiction in July that year). Most notably, the microcomputer replaced the many separate components that made up the minicomputer's CPU with one integrated microprocessor chip.
In 1973, the French Institut National de la Recherche Agronomique (INRA) was looking for a computer able to measure agricultural hygrometry. To answer this request, a team of French engineers of the computer technology company R2E, led by its Head of Development, François Gernelle, created the first available microprocessor-based microcomputer, the Micral N. The same year the company filed their patents with the term "Micro-ordinateur", a literal equivalent of "Microcomputer", to designate a solid state machine designed with a microprocessor.
In the US the earliest models such as the Altair 8800 were often sold as kits to be assembled by the user, and came with as little as 256 bytes of RAM, and no input/output devices other than indicator lights and switches, useful as a proof of concept to demonstrate what such a simple device could do.
As microprocessors and semiconductor memory became less expensive, microcomputers grew cheaper and easier to use.
Increasingly inexpensive logic chips such as the 7400 series allowed cheap dedicated circuitry for improved user interfaces such as keyboard input, instead of simply a row of switches to toggle bits one at a time.
Use of audio cassettes for inexpensive data storage replaced manual re-entry of a program every time the device was powered on.
Large cheap arrays of silicon logic gates in the form of read-only memory and EPROMs allowed utility programs and self-booting kernels to be stored within microcomputers. These stored programs could automatically load further more complex software from external storage devices without user intervention, to form an inexpensive turnkey system that does not require a computer expert to understand or to use the device.
Random-access memory became cheap enough to afford dedicating approximately 1–2 kilobytes of memory to a video display controller frame buffer, for a 40x25 or 80x25 text display or blocky color graphics on a common household television; a rough check of these sizes appears after this list. This replaced the slow, complex, and expensive teletypewriter that was previously common as an interface to minicomputers and mainframes.
All these improvements in cost and usability resulted in an explosion in their popularity during the late 1970s and early 1980s.
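As a back-of-the-envelope illustration of the frame-buffer sizes mentioned above (a sketch added here, not from the original text, assuming one byte per character cell; systems that stored a separate attribute byte per cell would need roughly twice as much):

```python
# Rough size check for character-mapped text displays
# (one byte per character cell; attribute bytes would double this).
def text_framebuffer_bytes(columns: int, rows: int) -> int:
    """Bytes of RAM needed to hold one screenful of characters."""
    return columns * rows

for cols, rows in [(40, 25), (80, 25)]:
    size = text_framebuffer_bytes(cols, rows)
    print(f"{cols}x{rows} text mode: {size} bytes (~{size / 1024:.1f} KB)")
# 40x25 text mode: 1000 bytes (~1.0 KB)
# 80x25 text mode: 2000 bytes (~2.0 KB)
```

Both common text modes thus fit comfortably within the 1–2 kilobytes cited above.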
A large number of computer makers packaged microcomputers for use in small business applications. By 1979, many companies such as Cromemco, Processor Technology, IMSAI, North Star Computers, Southwest Technical Products Corporation, Ohio Scientific, Altos Computer Systems, Morrow Designs and others produced systems designed for resourceful end users or consulting firms to deliver business systems such as accounting, database management and word processing to small businesses. This allowed businesses unable to afford leasing of a minicomputer or time-sharing service the opportunity to automate business functions, without (usually) hiring a full-time staff to operate the computers. A representative system of this era would have used an S100 bus, an 8-bit processor such as an Intel 8080 or Zilog Z80, and either CP/M or MP/M operating system.
The increasing availability and power of desktop computers for personal use attracted the attention of more software developers. As the industry matured, the market for personal computers standardized around IBM PC compatibles running DOS, and later Windows. Modern desktop computers, video game consoles, laptops, tablet PCs, and many types of handheld devices, including mobile phones, pocket calculators, and industrial embedded systems, may all be considered examples of microcomputers according to the definition given above.
Colloquial use of the term
By the early 2000s, everyday use of the expression "microcomputer" (and in particular "micro") declined significantly from its peak in the mid-1980s. The term is most commonly associated with the most popular 8-bit home computers (such as the Apple II, ZX Spectrum, Commodore 64, BBC Micro, and TRS-80) and small-business CP/M-based microcomputers.
In colloquial usage, "microcomputer" has been largely supplanted by the term "personal computer" or "PC", which specifies a computer that has been designed to be used by one individual at a time, a term first coined in 1959. IBM first promoted the term "personal computer" to differentiate the IBM PC from CP/M-based microcomputers likewise targeted at the small-business market, and also IBM's own mainframes and minicomputers. However, following its release, the IBM PC itself was widely imitated, as well as the term. The component parts were commonly available to producers and the BIOS was reverse engineered through cleanroom design techniques. IBM PC compatible "clones" became commonplace, and the terms "personal computer", and especially "PC", stuck with the general public, often specifically for a computer compatible with DOS (or nowadays Windows).
Description
Monitors, keyboards and other devices for input and output may be integrated or separate. Computer memory in the form of RAM, and at least one other less volatile, memory storage device are usually combined with the CPU on a system bus in one unit. Other devices that make up a complete microcomputer system include batteries, a power supply unit, a keyboard and various input/output devices used to convey information to and from a human operator (printers, monitors, human interface devices). Microcomputers are designed to serve only one user at a time, although they can often be modified with software or hardware to concurrently serve more than one user. Microcomputers fit well on or under desks or tables, so that they are within easy access of users. Bigger computers like minicomputers, mainframes, and supercomputers take up large cabinets or even dedicated rooms.
A microcomputer comes equipped with at least one type of data storage, usually RAM. Although some microcomputers (particularly early 8-bit home micros) perform tasks using RAM alone, some form of secondary storage is normally desirable. In the early days of home micros, this was often a data cassette deck (in many cases as an external unit). Later, secondary storage (particularly in the form of floppy disk and hard disk drives) were built into the microcomputer case.
History
TTL precursors
Although they did not contain any microprocessors, being built instead around transistor-transistor logic (TTL), Hewlett-Packard calculators as far back as 1968 had various levels of programmability comparable to microcomputers. The HP 9100B (1968) had rudimentary conditional (if) statements, statement line numbers, jump statements (go to), registers that could be used as variables, and primitive subroutines. The programming language resembled assembly language in many ways. Later models incrementally added more features, including the BASIC programming language (HP 9830A in 1971). Some models had tape storage and small printers. However, displays were limited to one line at a time. The HP 9100A was referred to as a personal computer in an advertisement in a 1968 Science magazine, but that advertisement was quickly dropped. HP was reluctant to sell them as "computers" because the perception at that time was that a computer had to be big in size to be powerful, and thus decided to market them as calculators. Additionally, at that time, people were more likely to buy calculators than computers, and purchasing agents also preferred the term "calculator" because purchasing a "computer" required additional layers of purchasing authority approvals.
The Datapoint 2200, made by CTC in 1970, was also comparable to microcomputers. While it contains no microprocessor, the instruction set of its custom TTL processor was the basis of the instruction set for the Intel 8008, and for practical purposes the system behaves approximately as if it contains an 8008. This is because Intel was the contractor in charge of developing the Datapoint's CPU, but ultimately CTC rejected the 8008 design because it needed 20 support chips.
Another early system, the Kenbak-1, was released in 1971. Like the Datapoint 2200, it used small-scale integrated transistor–transistor logic instead of a microprocessor. It was marketed as an educational and hobbyist tool, but it was not a commercial success; production ceased shortly after introduction.
Early microcomputers
In late 1972, a French team headed by François Gernelle within a small company, Réalisations & Etudes Electroniques (R2E), developed and patented a computer based on a microprocessor – the Intel 8008 8-bit microprocessor. This Micral-N was marketed in early 1973 as a "Micro-ordinateur" or microcomputer, mainly for scientific and process-control applications. About a hundred Micral-N were installed in the next two years, followed by a new version based on the Intel 8080. Meanwhile, another French team developed the Alvan, a small computer for office automation which found clients in banks and other sectors. The first version was based on LSI chips with an Intel 8008 as peripheral controller (keyboard, monitor and printer), before adopting the Zilog Z80 as main processor.
In late 1972, a Sacramento State University team led by Bill Pentz built the Sac State 8008 computer, able to handle thousands of patients' medical records. The Sac State 8008 was designed with the Intel 8008. It had a full set of hardware and software components: a disk operating system included in a series of programmable read-only memory chips (PROMs); 8 Kilobytes of RAM; IBM's Basic Assembly Language (BAL); a hard drive; a color display; a printer output; a 150 bit/s serial interface for connecting to a mainframe; and even the world's first microcomputer front panel.
In early 1973, Sord Computer Corporation (now Toshiba Personal Computer System Corporation) completed the SMP80/08, which used the Intel 8008 microprocessor. The SMP80/08, however, did not have a commercial release. After the first general-purpose microprocessor, the Intel 8080, was announced in April 1974, Sord announced the SMP80/x, the first microcomputer to use the 8080, in May 1974.
Virtually all early microcomputers were essentially boxes with lights and switches; one had to read and understand binary numbers and machine language to program and use them (the Datapoint 2200 was a striking exception, bearing a modern design based on a monitor, keyboard, and tape and disk drives). Of the early "box of switches"-type microcomputers, the MITS Altair 8800 (1975) was arguably the most famous. Most of these simple, early microcomputers were sold as electronic kits—bags full of loose components which the buyer had to solder together before the system could be used.
The period from about 1971 to 1976 is sometimes called the first generation of microcomputers. Many companies such as DEC, National Semiconductor, Texas Instruments offered their microcomputers for use in terminal control, peripheral device interface control and industrial machine control. There were also machines for engineering development and hobbyist personal use. In 1975, the Processor Technology SOL-20 was designed, which consisted of one board which included all the parts of the computer system. The SOL-20 had built-in EPROM software which eliminated the need for rows of switches and lights. The MITS Altair just mentioned played an instrumental role in sparking significant hobbyist interest, which itself eventually led to the founding and success of many well-known personal computer hardware and software companies, such as Microsoft and Apple Computer. Although the Altair itself was only a mild commercial success, it helped spark a huge industry.
Home computers
By 1977, the introduction of the second microcomputer generation as consumer goods, known as home computers, made them considerably easier to use than their predecessors, whose operation often demanded thorough familiarity with practical electronics. The ability to connect to a monitor (screen) or TV set allowed visual manipulation of text and numbers. The BASIC language, which was easier to learn and use than raw machine language, became a standard feature. These features were already common in minicomputers, with which many hobbyists and early producers were familiar.
In 1979, the launch of the VisiCalc spreadsheet (initially for the Apple II) first turned the microcomputer from a hobby for computer enthusiasts into a business tool. After the 1981 release by IBM of its IBM PC, the term personal computer became generally used for microcomputers compatible with the IBM PC architecture (IBM PC–compatible).
See also
History of computing hardware (1960s–present)
Lists of microcomputers
Mainframe computer
Market share of personal computer vendors
Minicomputer
Personal computer
Keyboard computer
SFF computer
Supercomputer
Notes and references
Microcomputer
Computers | Microcomputer | [
"Technology"
] | 3,067 | [
"Computers"
] |
48,164 | https://en.wikipedia.org/wiki/Maximal%20ideal | In mathematics, more specifically in ring theory, a maximal ideal is an ideal that is maximal (with respect to set inclusion) amongst all proper ideals. In other words, I is a maximal ideal of a ring R if there are no other ideals contained between I and R.
Maximal ideals are important because the quotients of rings by maximal ideals are simple rings, and in the special case of unital commutative rings they are also fields.
In noncommutative ring theory, a maximal right ideal is defined analogously as being a maximal element in the poset of proper right ideals, and similarly, a maximal left ideal is defined to be a maximal element of the poset of proper left ideals. Since a one-sided maximal ideal A is not necessarily two-sided, the quotient R/A is not necessarily a ring, but it is a simple module over R. If R has a unique maximal right ideal, then R is known as a local ring, and the maximal right ideal is also the unique maximal left and unique maximal two-sided ideal of the ring, and is in fact the Jacobson radical J(R).
It is possible for a ring to have a unique maximal two-sided ideal and yet lack unique maximal one-sided ideals: for example, in the ring of 2 by 2 square matrices over a field, the zero ideal is a maximal two-sided ideal, but there are many maximal right ideals.
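To make this concrete, here is one standard maximal right ideal of the 2 by 2 matrix ring (an illustrative sketch added here, not taken from the source text):

```latex
% Sketch: a maximal right ideal of R = M_2(k), k a field.
\[
  I \;=\; \left\{ \begin{pmatrix} 0 & 0 \\ c & d \end{pmatrix} : c, d \in k \right\}
  \;\subseteq\; M_2(k)
\]
% I is a right ideal: for A in I and any B in R, the first row of AB equals
% (first row of A)B = 0, so AB stays in I.
% I is maximal: R/I is isomorphic to the row space k^2, a simple right
% R-module. The analogous set with the second row zero is a second,
% distinct maximal right ideal, while the zero ideal is the unique
% maximal two-sided ideal.
```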
Definition
There are other equivalent ways of expressing the definition of maximal one-sided and maximal two-sided ideals. Given a ring R and a proper ideal I of R (that is I ≠ R), I is a maximal ideal of R if any of the following equivalent conditions hold:
There exists no other proper ideal J of R so that I ⊊ J.
For any ideal J with I ⊆ J, either J = I or J = R.
The quotient ring R/I is a simple ring.
There is an analogous list for one-sided ideals, for which only the right-hand versions will be given. For a right ideal A of a ring R, the following conditions are equivalent to A being a maximal right ideal of R:
There exists no other proper right ideal B of R so that A ⊊ B.
For any right ideal B with A ⊆ B, either B = A or B = R.
The quotient module R/A is a simple right R-module.
Maximal right/left/two-sided ideals are the dual notion to that of minimal ideals.
Examples
If F is a field, then the only maximal ideal is {0}.
In the ring Z of integers, the maximal ideals are the principal ideals generated by a prime number.
More generally, all nonzero prime ideals are maximal in a principal ideal domain.
The ideal (2, x) is a maximal ideal in the ring Z[x]; a short verification appears after this list. Generally, the maximal ideals of Z[x] are of the form (p, f(x)) where p is a prime number and f(x) is a polynomial in Z[x] which is irreducible modulo p.
Every prime ideal is a maximal ideal in a Boolean ring, i.e., a ring consisting of only idempotent elements. In fact, every prime ideal is maximal in a commutative ring R whenever there exists an integer n > 1 such that x^n = x for any x in R.
The maximal ideals of the polynomial ring C[x] over the complex numbers are the principal ideals generated by x − c for some complex number c.
More generally, the maximal ideals of the polynomial ring K[x_1, ..., x_n] over an algebraically closed field K are the ideals of the form (x_1 − a_1, ..., x_n − a_n). This result is known as the weak Nullstellensatz.
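Here is the promised verification for the ideal (2, x) in Z[x] (an added sketch, not part of the original text):

```latex
% Sketch: Z[x]/(2, x) is a field, so (2, x) is maximal.
\[
  \mathbb{Z}[x]/(2,\,x) \;\cong\; \bigl(\mathbb{Z}[x]/(x)\bigr)/(2)
  \;\cong\; \mathbb{Z}/2\mathbb{Z} \;=\; \mathbb{F}_2 .
\]
% Killing x leaves Z; then killing 2 leaves the field with two elements.
% By contrast, Z[x]/(x) is isomorphic to Z, which is not a field, so the
% prime ideal (x) alone is not maximal.
```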
Properties
An important ideal of the ring called the Jacobson radical can be defined using maximal right (or maximal left) ideals.
If R is a unital commutative ring with an ideal m, then k = R/m is a field if and only if m is a maximal ideal. In that case, R/m is known as the residue field. This fact can fail in non-unital rings. For example, 4Z is a maximal ideal in 2Z, but 2Z/4Z is not a field; a worked check appears at the end of this section.
If L is a maximal left ideal, then R/L is a simple left R-module. Conversely in rings with unity, any simple left R-module arises this way. Incidentally this shows that a collection of representatives of simple left R-modules is actually a set since it can be put into correspondence with part of the set of maximal left ideals of R.
Krull's theorem (1929): Every nonzero unital ring has a maximal ideal. The result is also true if "ideal" is replaced with "right ideal" or "left ideal". More generally, it is true that every nonzero finitely generated module has a maximal submodule. Suppose I is an ideal which is not R (respectively, A is a right ideal which is not R). Then R/I is a ring with unity (respectively, R/A is a finitely generated module), and so the above theorems can be applied to the quotient to conclude that there is a maximal ideal (respectively, maximal right ideal) of R containing I (respectively, A).
Krull's theorem can fail for rings without unity. A radical ring, i.e. a ring in which the Jacobson radical is the entire ring, has no simple modules and hence has no maximal right or left ideals. See regular ideals for possible ways to circumvent this problem.
In a commutative ring with unity, every maximal ideal is a prime ideal. The converse is not always true: for example, in any nonfield integral domain the zero ideal is a prime ideal which is not maximal. Commutative rings in which prime ideals are maximal are known as zero-dimensional rings, where the dimension used is the Krull dimension.
A maximal ideal of a noncommutative ring might not be prime in the commutative sense. For example, let R = M₂(Z) be the ring of all 2 by 2 matrices over Z. This ring has a maximal ideal M₂(pZ) for any prime p, but this is not a prime ideal since (in the case p = 2) the matrices A = diag(1, 0) and B = diag(0, 1) are not in M₂(2Z), but AB = 0 is. However, maximal ideals of noncommutative rings are prime in the generalized sense below.
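And the worked check for the non-unital counterexample mentioned earlier in this section (an added sketch):

```latex
% Sketch: 4Z is maximal in the non-unital ring 2Z, yet 2Z/4Z is not a field.
\[
  2\mathbb{Z}/4\mathbb{Z} \;=\; \{\,\bar{0},\ \bar{2}\,\}, \qquad
  \bar{2}\cdot\bar{2} \;=\; \bar{4} \;=\; \bar{0}.
\]
% The quotient has only two elements, so there is no ideal strictly between
% 4Z and 2Z (hence 4Z is maximal); but every product in the quotient is
% zero, so it has no multiplicative identity and cannot be a field.
```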
Generalization
For an R-module A, a maximal submodule M of A is a proper submodule satisfying the property that for any other submodule N, M ⊆ N ⊆ A implies N = M or N = A. Equivalently, M is a maximal submodule if and only if the quotient module A/M is a simple module. The maximal right ideals of a ring R are exactly the maximal submodules of R regarded as a right module over itself.
Unlike rings with unity, a nonzero module does not necessarily have maximal submodules. However, as noted above, finitely generated nonzero modules have maximal submodules, and also projective modules have maximal submodules.
As with rings, one can define the radical of a module using maximal submodules. Furthermore, maximal ideals can be generalized by defining a maximal sub-bimodule M of a bimodule B to be a proper sub-bimodule of B which is contained in no other proper sub-bimodule of B. The maximal ideals of R are then exactly the maximal sub-bimodules of R regarded as a bimodule over itself.
See also
Prime ideal
References
Ideals (ring theory)
Ring theory
Prime ideals | Maximal ideal | [
"Mathematics"
] | 1,505 | [
"Fields of abstract algebra",
"Ring theory"
] |
48,167 | https://en.wikipedia.org/wiki/Congruence%20relation | In abstract algebra, a congruence relation (or simply congruence) is an equivalence relation on an algebraic structure (such as a group, ring, or vector space) that is compatible with the structure in the sense that algebraic operations done with equivalent elements will yield equivalent elements. Every congruence relation has a corresponding quotient structure, whose elements are the equivalence classes (or congruence classes) for the relation.
Definition
The definition of a congruence depends on the type of algebraic structure under consideration. Particular definitions of congruence can be made for groups, rings, vector spaces, modules, semigroups, lattices, and so forth. The common theme is that a congruence is an equivalence relation on an algebraic object that is compatible with the algebraic structure, in the sense that the operations are well-defined on the equivalence classes.
General
The general notion of a congruence relation can be formally defined in the context of universal algebra, a field which studies ideas common to all algebraic structures. In this setting, a relation on a given algebraic structure is called compatible if
for each n and each n-ary operation f defined on the structure: whenever a_1 ~ a_1′ and ... and a_n ~ a_n′, then f(a_1, ..., a_n) ~ f(a_1′, ..., a_n′).
A congruence relation on the structure is then defined as an equivalence relation that is also compatible.
Examples
Basic example
The prototypical example of a congruence relation is congruence modulo n on the set of integers. For a given positive integer n, two integers a and b are called congruent modulo n, written
a ≡ b (mod n)
if a − b is divisible by n (or equivalently if a and b have the same remainder when divided by n).
For example, 37 and 57 are congruent modulo 10,
since 37 − 57 = −20 is a multiple of 10, or equivalently since both 37 and 57 have a remainder of 7 when divided by 10.
Congruence modulo n (for a fixed n) is compatible with both addition and multiplication on the integers. That is,
if
a_1 ≡ a_2 (mod n) and b_1 ≡ b_2 (mod n)
then
a_1 + b_1 ≡ a_2 + b_2 (mod n) and a_1 b_1 ≡ a_2 b_2 (mod n).
The corresponding addition and multiplication of equivalence classes is known as modular arithmetic. From the point of view of abstract algebra, congruence modulo is a congruence relation on the ring of integers, and arithmetic modulo occurs on the corresponding quotient ring.
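A quick computational spot-check of this compatibility (an illustrative sketch; the particular congruent pairs are arbitrary choices):

```python
# Check that congruence mod n is compatible with addition and multiplication
# for one concrete instance.
n = 10
a1, a2 = 37, 57      # a1 ≡ a2 (mod 10)
b1, b2 = 4, -16      # b1 ≡ b2 (mod 10)

assert (a1 - a2) % n == 0 and (b1 - b2) % n == 0   # the hypotheses hold
assert (a1 + b1 - (a2 + b2)) % n == 0              # addition respects ~
assert (a1 * b1 - a2 * b2) % n == 0                # multiplication respects ~
print("compatibility holds for this instance")
```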
Example: Groups
For example, a group is an algebraic object consisting of a set together with a single binary operation, satisfying certain axioms. If G is a group with operation ∗, a congruence relation on G is an equivalence relation ~ on the elements of G satisfying
g_1 ~ g_2 and h_1 ~ h_2 implies g_1 ∗ h_1 ~ g_2 ∗ h_2
for all g_1, g_2, h_1, h_2 in G. For a congruence on a group, the equivalence class containing the identity element is always a normal subgroup, and the other equivalence classes are the other cosets of this subgroup. Together, these equivalence classes are the elements of a quotient group.
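As an added illustration (not from the source text), take the additive group of integers with congruence modulo n:

```latex
% Sketch: congruence mod n on the additive group of integers.
% a ~ b iff a - b is a multiple of n.
\[
  [0] \;=\; n\mathbb{Z} \;\trianglelefteq\; \mathbb{Z}, \qquad
  \mathbb{Z}/{\sim} \;=\; \{\, n\mathbb{Z},\; 1 + n\mathbb{Z},\; \dots,\; (n-1) + n\mathbb{Z} \,\}
  \;=\; \mathbb{Z}/n\mathbb{Z}.
\]
% The class of the identity 0 is the (normal) subgroup nZ; the remaining
% classes are its cosets, and together they form the quotient group Z/nZ.
```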
Example: Rings
When an algebraic structure includes more than one operation, congruence relations are required to be compatible with each operation. For example, a ring possesses both addition and multiplication, and a congruence relation on a ring must satisfy
r_1 + s_1 ~ r_2 + s_2 and r_1 s_1 ~ r_2 s_2
whenever r_1 ~ r_2 and s_1 ~ s_2. For a congruence on a ring, the equivalence class containing 0 is always a two-sided ideal, and the two operations on the set of equivalence classes define the corresponding quotient ring.
Relation with homomorphisms
If f : A → B is a homomorphism between two algebraic structures (such as a homomorphism of groups, or a linear map between vector spaces), then the relation ~ defined by
a_1 ~ a_2 if and only if f(a_1) = f(a_2)
is a congruence relation on A. By the first isomorphism theorem, the image of A under f is a substructure of B isomorphic to the quotient of A by this congruence.
On the other hand, the congruence relation ~ induces a unique homomorphism f̄ : A/~ → B given by
f̄([a]) = f(a), where [a] = {x : x ~ a} is the equivalence class of a.
Thus, there is a natural correspondence between the congruences and the homomorphisms of any given algebraic structure.
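As an added illustration, the reduction homomorphism f : Z → Z/nZ recovers congruence modulo n:

```latex
% Sketch: the congruence induced by the reduction homomorphism.
\[
  a_1 \sim a_2 \;\iff\; f(a_1) = f(a_2) \;\iff\; a_1 \equiv a_2 \pmod{n},
\]
% so the congruence attached to f is exactly congruence mod n, and the
% induced map Z/~ -> Z/nZ sending [a] to f(a) is an isomorphism, as the
% first isomorphism theorem predicts.
```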
Congruences of groups, and normal subgroups and ideals
In the particular case of groups, congruence relations can be described in elementary terms as follows:
If G is a group (with identity element e and operation *) and ~ is a binary relation on G, then ~ is a congruence whenever:
Given any element a of G, a ~ a (reflexivity);
Given any elements a and b of G, if a ~ b, then b ~ a (symmetry);
Given any elements a, b, and c of G, if a ~ b and b ~ c, then a ~ c (transitivity);
Given any elements a, a′, b, and b′ of G, if a ~ a′ and b ~ b′, then a ∗ b ~ a′ ∗ b′;
Given any elements a and a′ of G, if a ~ a′, then a⁻¹ ~ a′⁻¹ (this is implied by the other four, so is strictly redundant).
Conditions 1, 2, and 3 say that ~ is an equivalence relation.
A congruence ~ is determined entirely by the set of those elements of G that are congruent to the identity element, and this set is a normal subgroup.
Specifically, a ~ b if and only if a ∗ b⁻¹ ~ e.
So instead of talking about congruences on groups, people usually speak in terms of normal subgroups of them; in fact, every congruence corresponds uniquely to some normal subgroup of G.
Ideals of rings and the general case
A similar trick allows one to speak of kernels in ring theory as ideals instead of congruence relations, and in module theory as submodules instead of congruence relations.
A more general situation where this trick is possible is with Omega-groups (in the general sense allowing operators with multiple arity). But this cannot be done with, for example, monoids, so the study of congruence relations plays a more central role in monoid theory.
Universal algebra
The general notion of a congruence is particularly useful in universal algebra. An equivalent formulation in this context is the following:
A congruence relation on an algebra A is a subset of the direct product A × A that is both an equivalence relation on A and a subalgebra of A × A.
The kernel of a homomorphism is always a congruence. Indeed, every congruence arises as a kernel.
For a given congruence ~ on A, the set of equivalence classes can be given the structure of an algebra in a natural fashion, the quotient algebra.
The function that maps every element of A to its equivalence class is a homomorphism, and the kernel of this homomorphism is ~.
The lattice Con(A) of all congruence relations on an algebra A is algebraic.
John M. Howie described how semigroup theory illustrates congruence relations in universal algebra:
In a group a congruence is determined if we know a single congruence class, in particular if we know the normal subgroup which is the class containing the identity. Similarly, in a ring a congruence is determined if we know the ideal which is the congruence class containing the zero. In semigroups there is no such fortunate occurrence, and we are therefore faced with the necessity of studying congruences as such. More than anything else, it is this necessity that gives semigroup theory its characteristic flavour. Semigroups are in fact the first and simplest type of algebra to which the methods of universal algebra must be applied ...
Category theory
In category theory, a congruence relation R on a category C is given by: for each pair of objects X, Y in C, an equivalence relation R_{X,Y} on Hom(X,Y), such that the equivalence relations respect composition of morphisms.
See also
Chinese remainder theorem
Congruence lattice problem
Table of congruences
Explanatory notes
Notes
References
Modular arithmetic
Abstract algebra
Binary relations
Equivalence (mathematics)
Universal algebra | Congruence relation | [
"Mathematics"
] | 1,555 | [
"Universal algebra",
"Binary relations",
"Number theory",
"Fields of abstract algebra",
"Arithmetic",
"Mathematical relations",
"Abstract algebra",
"Modular arithmetic",
"Algebra"
] |
48,172 | https://en.wikipedia.org/wiki/List%20of%20mental%20disorders | The following is a list of mental disorders as defined at any point by the Diagnostic and Statistical Manual of Mental Disorders (DSM) or the International Classification of Diseases (ICD). A mental disorder, also known as a mental illness, mental health condition, or psychiatric disorder, is characterized by a pattern of behavior or mental function that significantly impairs personal functioning or causes considerable distress.
The DSM, a classification and diagnostic guide published by the American Psychiatric Association, includes over 450 distinct definitions of mental disorders. Meanwhile, the ICD, published by the World Health Organization, stands as the international standard for categorizing all medical conditions, including sections on mental and behavioral disorders.
Revisions and updates are periodically made to the diagnostic criteria and descriptions in the DSM and ICD to reflect current understanding and consensus within the mental health field. The list includes conditions currently recognized as mental disorders according to these systems. There is ongoing debate among mental health professionals, including psychiatrists, about the definitions and criteria used to delineate mental disorders. There is particular concern over whether certain conditions should be classified as "mental illnesses" or might more accurately be described as neurological disorders or in other terms.
Anxiety disorders
Agoraphobia
Generalized anxiety disorder
Panic disorder
Selective mutism
Separation anxiety disorder
Specific phobias
Social anxiety disorder
Dissociative disorders
Dissociative identity disorder
Dissociative amnesia (formerly psychogenic amnesia)
Depersonalization-derealization disorder
Dissociative amnesia with dissociative fugue
Dissociative neurological symptom disorder (including psychogenic non-epileptic seizures)
Other specified dissociative disorder (OSDD)
Unspecified dissociative disorder
Ganser syndrome
Mood disorders
Depressive disorders
Disruptive mood dysregulation disorder
Major depressive disorder
Dysthymia
Premenstrual dysphoric disorder
Psychotic depression
Seasonal affective disorder (SAD)
Atypical depression
Catatonic depression
Postpartum depression
Melancholic depression
Pervasive refusal syndrome
Unspecified depressive disorder
Bipolar disorders
Bipolar I disorder
Bipolar II disorder
Bipolar disorder not otherwise specified
Cyclothymia
Hypomania
Trauma and stressor related disorders
Reactive attachment disorder
Disinhibited social engagement disorder
Post-traumatic stress disorder (PTSD)
Post-traumatic embitterment disorder (PTED)
Acute stress disorder
Adjustment disorder
Complex post-traumatic stress disorder (C-PTSD)
Prolonged grief disorder
Neuro-developmental disorders
Intellectual disability
Language disorder
Sensory processing disorder
Speech sound disorder
Stuttering
Aphasia
Social communication disorder
Pervasive developmental disorder
Auditory processing disorder
Communication disorder
Autism spectrum disorder (formerly a category that included Asperger syndrome, classic autism and Rett syndrome)
Attention deficit hyperactivity disorder (ADHD)
Developmental coordination disorder
Tourette syndrome
Down syndrome
Tic disorder
Dyslexia
Dyscalculia
Dysgraphia
Nonverbal learning disorder (NVLD, NLD)
Sleep-wake disorders
Insomnia (including chronic insomnia and short-term insomnia)
Hypersomnia
Idiopathic hypersomnia
Kleine–Levin syndrome
Insufficient sleep syndrome
Narcolepsy
Restless legs syndrome
Sleep apnea
Night terrors (sleep terrors)
Exploding head syndrome
Parasomnias
Nightmare disorder
Rapid eye movement sleep behavior disorder
Confusional arousals
Sleepwalking
Hypnagogic hallucinations
Hypnopompic hallucinations
Circadian rhythm sleep disorder
Delayed sleep phase disorder
Advanced sleep phase disorder
Irregular sleep–wake rhythm
Non-24-hour sleep–wake disorder
Circadian rhythm sleep-wake disorder caused by irregular work shifts
Jet lag
Neuro-cognitive disorders
Delirium
Dementia
Traumatic brain injury
HIV-associated neurocognitive disorder (HAND)
Amnesia
Chronic traumatic encephalopathy
Agnosia
Substance-related and addictive disorders
Substance related disorders
Substance-induced disorder (Substance-induced psychosis, Substance-induced delirium, Substance-induced mood disorder)
Substance intoxication
Substance withdrawal
Substance dependence
Disorders due to use of alcohol
Alcohol use disorder
Alcoholic hallucinosis
Alcohol withdrawal
Harmful pattern of use of alcohol
Disorders due to use of cannabis
Cannabis use disorder
Cannabis dependence
Cannabis intoxication
Harmful pattern of use of cannabis
Cannabis withdrawal
Cannabis-induced delirium
Cannabis-induced psychosis
Cannabis-induced mood disorder
Cannabis-induced anxiety
Disorders due to use of synthetic cannabinoids
Episode of harmful use of synthetic cannabinoids
Harmful pattern of use of synthetic cannabinoids
Synthetic cannabinoid dependence
Synthetic cannabinoid intoxication
Synthetic cannabinoids withdrawal
Synthetic cannabinoids induced delirium
Synthetic cannabinoids induced psychotic disorder
Synthetic cannabinoids induced mood disorder
Synthetic cannabinoids induced anxiety
Disorders due to use of opioids
Episode of harmful use of opioids
Harmful pattern of use of opioids
Opioid dependence
Opioid intoxication
Opioids withdrawal
Opioids induced delirium
Opioids induced psychotic disorder
Opioids induced mood disorder
Opioids induced anxiety
Disorders due to use of sedative, hypnotic or anxiolytic
Episode of harmful use of sedative, hypnotic or anxiolytic
Harmful pattern of use of sedative, hypnotic or anxiolytic
Sedative, hypnotic or anxiolytic dependence
Sedative, hypnotic or anxiolytic intoxication
Sedative, hypnotic or anxiolytic withdrawal
Sedative, hypnotic or anxiolytic induced delirium
Sedative, hypnotic or anxiolytic induced psychotic disorder
Sedative, hypnotic or anxiolytic induced mood disorder
Sedative, hypnotic or anxiolytic induced anxiety
Amnestic disorder due to use of sedatives, hypnotics or anxiolytics
Dementia due to use of sedatives, hypnotics or anxiolytics
Disorders due to use of cocaine
Episode of harmful use of cocaine
Harmful pattern of use of cocaine
Cocaine dependence
Cocaine intoxication
Cocaine withdrawal
Cocaine induced delirium
Cocaine induced psychotic disorder
Cocaine induced mood disorder
Cocaine induced anxiety
Cocaine induced OCD
Cocaine induced impulse control disorder
Disorders due to use of amphetamines
Episode of harmful use of amphetamines
Harmful pattern of use of amphetamines
Amphetamines dependence
Amphetamines intoxication
Amphetamines withdrawal
Amphetamines induced delirium
Amphetamines induced psychotic disorder
Amphetamines induced mood disorder
Amphetamines induced anxiety
Amphetamines induced OCD
Amphetamines induced impulse control disorder
Disorders due to use of synthetic cathinone
Episode of harmful use of synthetic cathinone
Harmful pattern of use of synthetic cathinone
Synthetic cathinone dependence
Synthetic cathinone intoxication
Synthetic cathinone withdrawal
Synthetic cathinone induced delirium
Synthetic cathinone induced psychotic disorder
Synthetic cathinone induced mood disorder
Synthetic cathinone induced anxiety
Synthetic cathinone induced OCD
Synthetic cathinone induced impulse control disorder
Disorders due to use of caffeine
Episode of harmful use of caffeine
Harmful pattern of use of caffeine
Caffeine intoxication
Caffeine withdrawal
Caffeine induced anxiety disorder
Caffeine-induced sleep disorder
Disorders due to use of hallucinogens
Episode of harmful use of hallucinogens
Harmful pattern of use of hallucinogens
Hallucinogens dependence
Hallucinogen induced delirium
Hallucinogens induced psychotic disorder
Hallucinogens induced anxiety disorder
Hallucinogens induced mood disorder
Hallucinogen persisting perception disorder
Disorders due to use of nicotine
Episode of harmful use of nicotine
Harmful pattern of use of nicotine
Nicotine intoxication
Nicotine withdrawal
Nicotine dependence
Disorders due to use of volatile inhalants
Episode of harmful use of volatile inhalants
Harmful pattern of use of volatile inhalants
Volatile inhalants dependence
Volatile inhalants intoxication
Volatile inhalants withdrawal
Volatile inhalants induced delirium
Volatile inhalants induced psychotic disorder
Volatile inhalants induced mood disorder
Volatile inhalants induced anxiety
Disorders due to use of dissociative drugs including ketamine and phencyclidine (PCP)
Episode of harmful use of dissociative drugs including ketamine and phencyclidine (PCP)
Harmful pattern of use of dissociative drugs including ketamine and phencyclidine (PCP)
Dissociative drugs including ketamine and phencyclidine [PCP] dependence
Dissociative drugs including ketamine and phencyclidine [PCP] intoxication
Dissociative drugs including ketamine and phencyclidine [PCP] withdrawal
Dissociative drugs including ketamine and phencyclidine [PCP] induced delirium
Dissociative drugs including ketamine and phencyclidine [PCP] induced psychotic disorder
Dissociative drugs including ketamine and phencyclidine [PCP] induced mood disorder
Dissociative drugs including ketamine and phencyclidine [PCP] induced anxiety
Non-substance related disorder
Addictive personality
Gambling disorder
Video game addiction
Internet addiction disorder
Sexual addiction
Food addiction
Exercise addiction
Addiction to social media
Pornography addiction
Shopping addiction
Paraphilias
Voyeuristic disorder
Exhibitionistic disorder
Frotteuristic disorder
Pedophilia
Compulsive sexual behaviour disorder
Erotic target location error
Sexual masochism disorder
Sexual sadism disorder
Fetishistic disorder
Transvestic disorder
Other specified paraphilic disorder
Somatic symptom related disorders
Hypochondriasis
Cyberchondria
Somatization disorder
Conversion disorder (Functional Neurological Symptom Disorder)
Factitious disorder imposed on self (Munchausen syndrome)
Factitious disorder imposed on another (Munchausen by proxy)
Pain disorder
Medically unexplained physical symptoms (MUPS)
Sexual dysfunctions
Delayed ejaculation
Erectile dysfunction
Anorgasmia
Vaginismus
Male hypoactive sexual desire disorder
Female sexual arousal disorder
Persistent genital arousal disorder
Hypoactive sexual desire disorder
Sexual arousal disorder
Premature ejaculation
Dyspareunia
Sexual dysfunction
Elimination disorders
Enuresis (involuntary urination)
Nocturnal enuresis
Encopresis (involuntary defecation)
Feeding and eating disorders
Pica (disorder)
Rumination syndrome
Avoidant/restrictive food intake disorder
Anorexia nervosa
Binge eating disorder
Bulimia nervosa
Purging disorder
Diabulimia
Night eating syndrome
Orthorexia nervosa
Atypical anorexia nervosa
Other specified feeding or eating disorder (OSFED)
Disruptive, impulse-control, and conduct disorders
Intermittent explosive disorder
Oppositional defiant disorder
Conduct disorder
Antisocial personality disorder
Pyromania
Kleptomania
Mythomania
Disruptive mood dysregulation disorder
Obsessive-compulsive and related disorders
Obsessive–compulsive disorder (OCD)
Body dysmorphic disorder
Body integrity dysphoria
Compulsive hoarding
Trichotillomania
Excoriation disorder (skin picking disorder)
Body-focused repetitive behavior disorder
Olfactory reference syndrome
Phantom limb syndrome
Primarily obsessional obsessive-compulsive disorder
Hoarding disorder
Schizophrenia spectrum and other psychotic disorders
Brief psychotic disorder
Delusional disorder
Delusional misidentification syndrome
Paraphrenia
Psychosis
Schizophrenia
Schizoaffective disorder
Schizophreniform disorder
Schizotypal personality disorder
Shared delusional disorder
Personality disorders
Cluster A (Odd, Eccentric)
Paranoid personality disorder
Schizoid personality disorder
Schizotypal personality disorder
Cluster B (Dramatic, Erratic)
Antisocial personality disorder
Borderline personality disorder
Histrionic personality disorder
Narcissistic personality disorder
Cluster C (Fearful, Anxious)
Avoidant personality disorder
Dependent personality disorder
Obsessive–compulsive personality disorder
Not otherwise specified (PD-NOS)
Depressive personality disorder
Passive–aggressive personality disorder
Sadistic personality disorder
Self-defeating personality disorder
Other
Gender dysphoria (also known as gender identity disorder or gender incongruence; there are different categorizations for children and non-children in the ICD-11)
Medication-induced movement disorders and other adverse effects of medication
Catatonia
Culture-bound syndrome
See also
List of neurological conditions and disorders
International Classification of Diseases by the World Health Organization (WHO)
References
Lists of diseases
Disability-related lists
List
Mental disorders | List of mental disorders | [
"Biology"
] | 2,569 | [
"Mental disorders",
"Behavior",
"Human behavior"
] |
48,179 | https://en.wikipedia.org/wiki/PINO | The Open PINO Platform (or just PINO) is an open humanoid robot platform, with its mechanical and software design covered by the GNU Free Documentation License and GNU General Public License respectively.
The external housing design of the PINO is a proprietary registered design, and the term PINO is trademarked.
The intention of PINO's designers appears to be to create a Linux-like open platform for robotics.
A commercial version of PINO is being sold by ZMP Inc., a Tokyo-based robotics company. The latest version is Version 3 (released in August 2006).
External links
http://www.zmp.co.jp/
Bipedal humanoid robots
Social robots
Robots of Japan
2000s robots | PINO | [
"Technology"
] | 146 | [
"Social robots",
"Computing and society"
] |
48,187 | https://en.wikipedia.org/wiki/William%20Ramsay | Sir William Ramsay (; 2 October 1852 – 23 July 1916) was a Scottish chemist who discovered the noble gases and received the Nobel Prize in Chemistry in 1904 "in recognition of his services in the discovery of the inert gaseous elements in air" along with his collaborator, John William Strutt, 3rd Baron Rayleigh, who received the Nobel Prize in Physics that same year for their discovery of argon. After the two men identified argon, Ramsay investigated other atmospheric gases. His work in isolating argon, helium, neon, krypton, and xenon led to the development of a new section of the periodic table.
Early years
Ramsay was born at 2 Clifton Street in Glasgow on 2 October 1852, the son of civil engineer and surveyor, William C. Ramsay, and his wife, Catherine Robertson. The family lived at 2 Clifton Street in the city centre, a three-storey and basement Georgian townhouse. The family moved to 1 Oakvale Place in the Hillhead district in his youth. He was a nephew of the geologist Sir Andrew Ramsay.
He was educated at Glasgow Academy and then apprenticed to Robert Napier, a shipbuilder in Govan. However, he instead decided to study Chemistry at the University of Glasgow, matriculating in 1866 and graduating in 1869. He then undertook practical training with the chemist Thomas Anderson and then went to study in Germany at the University of Tübingen with Wilhelm Rudolph Fittig where his doctoral thesis was entitled Investigations in the Toluic and Nitrotoluic Acids.
Ramsay went back to Glasgow as Anderson's assistant at Anderson College. He was appointed as Professor of Chemistry at the University College of Bristol in 1879 and married Margaret Buchanan in 1881. In the same year he became the Principal of University College, Bristol, combining that office with active research both in organic chemistry and on gases.
Career
William Ramsay formed pyridine in 1876 from acetylene and hydrogen cyanide in an iron-tube furnace in what was the first synthesis of a heteroaromatic compound.
In 1887, he succeeded Alexander Williamson as the chair of Chemistry at University College London (UCL). It was here at UCL that his most celebrated discoveries were made. As early as 1885–1890, he published several notable papers on the oxides of nitrogen, developing the skills that he needed for his subsequent work.
On the evening of 19 April 1894, Ramsay attended a lecture given by Lord Rayleigh. Rayleigh had noticed a discrepancy between the density of nitrogen made by chemical synthesis and nitrogen isolated from the air by removal of the other known components. After a short conversation, he and Ramsay decided to investigate this. In August Ramsay told Rayleigh he had isolated a new, heavy component of air, which did not appear to have any chemical reactivity. He named this inert gas "argon", from the Greek word meaning "lazy". In the following years, working with Morris Travers, he discovered neon, krypton, and xenon. He also isolated helium, which had only been observed in the spectrum of the sun, and had not previously been found on earth. In 1910 he isolated and characterised radon.
During 1893–1902, Ramsay collaborated with Emily Aston, a British chemist, in experiments on mineral analysis and atomic weight determination. Their work included publications on the molecular surface energies of mixtures of non-associating liquids.
Ramsay was elected an International Member of the American Philosophical Society in 1899.
He was appointed a Knight Commander of the Order of the Bath (KCB) in the 1902 Coronation Honours list published on 26 June 1902, and invested as such by King Edward VII at Buckingham Palace on 24 October 1902.
In 1904, Ramsay received the Nobel Prize in Chemistry.
That same year, he was elected an International Member of the United States National Academy of Sciences. Ramsay's standing among scientists led him to become an adviser to the Indian Institute of Science. He suggested Bangalore as the location for the institute.
Ramsay endorsed the Industrial and Engineering Trust Ltd., a company that claimed it could extract gold from seawater, in 1905. It bought property on the English coast to begin its secret process. The company never produced any gold.
Ramsay was the president of the British Association in 1911–1912.
Personal life
In 1881, Ramsay was married to Margaret Johnstone Marshall (née Buchanan), daughter of George Stevenson Buchanan. They had a daughter, Catherine Elizabeth (Elska) and a son, William George, who died at 40.
Ramsay lived in Hazlemere, Buckinghamshire, until his death. He died in High Wycombe, Buckinghamshire, on 23 July 1916 from nasal cancer at the age of 63 and was buried in Hazlemere parish church.
Legacy
A blue plaque at number 12 Arundel Gardens, Notting Hill, commemorates his life and work.
The Sir William Ramsay School in Hazlemere and Ramsay grease are named after him.
There is a memorial to him by Charles Hartwell in the north aisle of the choir at Westminster Abbey.
In 1923, University College London named its new Chemical Engineering department and seat after Ramsay, which had been funded by the Ramsay Memorial Fund. One of Ramsay's former graduates, H. E. Watson was the third Ramsay professor of chemical engineering.
On 2 October 2019, Google celebrated his 167th birthday with a Google Doodle.
See also
Clan Ramsay
References
Secondary sources
External links
including the Nobel Lecture 12 December 1904 The Rare Gases of the Atmosphere from Nobelprize.org website
Sir William Ramsay School
Eponymous school
Web genealogy article on Ramsay
Chemical genealogy
victorianweb biography
chemeducator biography
"This Photograph of Sir William Ramsay Was Taken in His Laboratory Specially for the Scientific American", 23 July 1904
1852 births
1916 deaths
19th-century Scottish chemists
19th-century Scottish people
20th-century Scottish chemists
20th-century Scottish people
People from Hillhead
People educated at the Glasgow Academy
Alumni of the University of Glasgow
University of Tübingen alumni
Academics of the University of Strathclyde
Academics of the University of Glasgow
Academics of the University of Bristol
Academics of University College London
Discoverers of chemical elements
Honorary Fellows of the Royal Society of Edinburgh
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Corresponding members of the Saint Petersburg Academy of Sciences
Honorary members of the Saint Petersburg Academy of Sciences
Knights Commander of the Order of the Bath
Nobel laureates in Chemistry
People from Notting Hill
Recipients of the Pour le Mérite (civil class)
Scottish knights
Scottish Nobel laureates
British Nobel laureates
Noble gases
Academics of University College Bristol
Industrial gases
Recipients of the Matteucci Medal
Alumni of the University of Strathclyde
Members of the American Philosophical Society | William Ramsay | [
"Chemistry",
"Materials_science"
] | 1,350 | [
"Chemical process engineering",
"Noble gases",
"Industrial gases",
"Nonmetals"
] |
48,193 | https://en.wikipedia.org/wiki/Camera%20obscura | A camera obscura (; ) is the natural phenomenon in which the rays of light passing through a small hole into a dark space form an image where they strike a surface, resulting in an inverted (upside down) and reversed (left to right) projection of the view outside.
Camera obscura can also refer to analogous constructions such as a darkened room, box or tent in which an exterior image is projected inside or onto a translucent screen viewed from outside. Camera obscuras with a lens in the opening have been used since the second half of the 16th century and became popular as aids for drawing and painting. The technology was developed further into the photographic camera in the first half of the 19th century, when camera obscura boxes were used to expose light-sensitive materials to the projected image.
The image (or the principle of its projection) of a lensless camera obscura is also referred to as a "pinhole image".
The camera obscura was used to study eclipses without the risk of damaging the eyes by looking directly into the Sun. As a drawing aid, it allowed tracing the projected image to produce a highly accurate representation, and was especially appreciated as an easy way to achieve proper graphical perspective.
Before the term camera obscura was first used in 1604, other terms were used to refer to the devices: cubiculum obscurum, cubiculum tenebricosum, conclave obscurum, and locus obscurus.
A camera obscura without a lens but with a very small hole is sometimes referred to as a "pinhole camera", although this more often refers to simple (homemade) lensless cameras where photographic film or photographic paper is used.
Physical explanation
Rays of light travel in straight lines and change when they are reflected and partly absorbed by an object, retaining information about the color and brightness of the surface of that object. Lighted objects reflect rays of light in all directions. A small enough opening in a barrier admits only the rays that travel directly from different points in the scene on the other side, and these rays form an image of that scene where they reach a surface opposite from the opening.
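Since the admitted rays travel in straight lines through the opening, the projected image follows simple similar-triangle geometry: its size scales with the ratio of the screen distance to the subject distance. The sketch below, with made-up numbers, is only an illustration of that relation.

```python
# Pinhole projection geometry (similar triangles through the aperture):
# image_size / screen_distance == object_size / object_distance

def image_size(object_size_m: float, object_distance_m: float,
               screen_distance_m: float) -> float:
    """Height of the inverted projection on the screen, in metres."""
    return object_size_m * screen_distance_m / object_distance_m

# A 1.8 m tall person 10 m from the pinhole, screen 0.3 m behind it
# (all values chosen purely for illustration):
print(f"{image_size(1.8, 10.0, 0.3) * 100:.1f} cm")  # -> 5.4 cm, upside down
```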
The human eye (and that of many other animals) works much like a camera obscura, with rays of light entering an opening (pupil), getting focused through a convex lens and passing a dark chamber before forming an inverted image on a smooth surface (retina). The analogy appeared early in the 16th century and would in the 17th century find common use to illustrate Western theological ideas about God creating the universe as a machine, with a predetermined purpose (just like humans create machines). This had a huge influence on behavioral science, especially on the study of perception and cognition. In this context, it is noteworthy that the projection of inverted images is actually a physical principle of optics that predates the emergence of life (rather than a biological or technological invention) and is not characteristic of all biological vision.
Technology
A camera obscura consists of a box, tent, or room with a small hole in one side or the top. Light from an external scene passes through the hole and strikes a surface inside, where the scene is reproduced, inverted (upside-down) and reversed (left to right), but with color and perspective preserved.
To produce a reasonably clear projected image, the aperture is typically smaller than 1/100 the distance to the screen.
As the pinhole is made smaller, the image gets sharper, but dimmer. With too small a pinhole, however, sharpness is lost because of diffraction. Optimum sharpness is attained with an aperture diameter approximately equal to the geometric mean of the wavelength of light and the distance to the screen.
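The two rules of thumb above can be put into numbers; the following sketch assumes green light and an arbitrary 25 cm box depth.

```python
import math

wavelength_m = 550e-9     # green light, a typical mid-spectrum choice
screen_distance_m = 0.25  # arbitrary box depth for the illustration

# Sharpest image near the geometric mean of wavelength and screen distance:
d_opt = math.sqrt(wavelength_m * screen_distance_m)
print(f"optimum pinhole diameter ~ {d_opt * 1e3:.2f} mm")  # ~0.37 mm

# Consistent with the "smaller than 1/100 of the distance" guideline:
assert d_opt < screen_distance_m / 100  # 0.37 mm << 2.5 mm
```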
In practice, camera obscuras use a lens rather than a pinhole because it allows a larger aperture, giving a usable brightness while maintaining focus.
If the image is caught on a translucent screen, it can be viewed from the back so that it is no longer reversed (but still upside-down). Using mirrors, it is possible to project a right-side-up image. The projection can also be displayed on a horizontal surface (e.g., a table). The 18th-century overhead version in tents used mirrors inside a kind of periscope on the top of the tent.
The box-type camera obscura often has an angled mirror projecting an upright image onto tracing paper placed on its glass top. Although the image is viewed from the back, it is reversed by the mirror.
History
Prehistory to 500 BC: Possible inspiration for prehistoric art and possible use in religious ceremonies, gnomons
There are theories that occurrences of camera obscura effects (through tiny holes in tents or in screens of animal hide) inspired paleolithic cave paintings. Distortions in the shapes of animals in many paleolithic cave artworks might be inspired by distortions seen when the surface on which an image was projected was not straight or not in the right angle.
It is also suggested that camera obscura projections could have played a role in Neolithic structures.
Perforated gnomons projecting a pinhole image of the sun were described in the Chinese Zhoubi Suanjing writings (compiled 1046 BC–256 BC, with later additions). The location of the bright circle can be measured to tell the time of day and year. In Middle Eastern and European cultures its invention was much later attributed to Egyptian astronomer and mathematician Ibn Yunus around 1000 AD.
500 BC to 500 AD: Earliest written observations
One of the earliest known written records of a pinhole image is found in the Chinese text called Mozi, dated to the 4th century BC, traditionally ascribed to and named for Mozi (circa 470 BC-circa 391 BC), a Chinese philosopher and the founder of Mohist School of Logic. These writings explain how the image in a "collecting-point" or "treasure house" is inverted by an intersecting point (pinhole) that collects the (rays of) light. Light coming from the foot of an illuminated person gets partly hidden below (i.e., strikes below the pinhole) and partly forms the top of the image. Rays from the head are partly hidden above (i.e., strike above the pinhole) and partly form the lower part of the image.
Another early account is provided by Greek philosopher Aristotle (384–322 BC), or possibly a follower of his ideas. Similar to the later 11th-century Middle Eastern scientist Alhazen, Aristotle is also thought to have used a camera obscura for observing solar eclipses. The formation of pinhole images is touched upon in the work Problems – Book XV, which asks why sunlight passing through a quadrilateral opening forms a circular spot rather than the shape of the opening, and why a partially eclipsed sun casts crescent-shaped spots of light through small holes.
In an attempt to explain the phenomenon, the author described how the light formed two cones: one between the Sun and the aperture and one between the aperture and the Earth. However, the roundness of the image was attributed to the idea that the parts of the rays of light (assumed to travel in straight lines) that are cut off at the angles of the aperture become so weak that they cannot be noticed.
Many philosophers and scientists of the Western world would ponder the contradiction between light travelling in straight lines and the formation of round spots of light behind differently shaped apertures, until it became generally accepted that the circular and crescent-shapes described in the "problem" were pinhole image projections of the sun.
In his book Optics (circa 300 BC, surviving in later manuscripts from around 1000 AD), Euclid proposed mathematical descriptions of vision with "lines drawn directly from the eye pass through a space of great extent" and "the form of the space included in our vision is a cone, with its apex in the eye and its base at the limits of our vision." Later versions of the text, like Ignazio Danti's 1573 annotated translation, would add a description of the camera obscura principle to demonstrate Euclid's ideas.
500 to 1000: Earliest experiments, study of light
In the 6th century, the Byzantine-Greek mathematician and architect Anthemius of Tralles (most famous as a co-architect of the Hagia Sophia) experimented with effects related to the camera obscura. Anthemius had a sophisticated understanding of the involved optics, as demonstrated by a light-ray diagram he constructed in 555 AD.
In his optical treatise De Aspectibus, Al-Kindi (c. 801–873) wrote about pinhole images to prove that light travels in straight lines.
In the 10th century Yu Chao-Lung supposedly projected images of pagoda models through a small hole onto a screen to study directions and divergence of rays of light.
1000 to 1400: Optical and astronomical tool, entertainment
Middle Eastern physicist Ibn al-Haytham (known in the West by the Latinised Alhazen) (965–1040) extensively studied the camera obscura phenomenon in the early 11th century.
In his treatise "On the shape of the eclipse" he provided the first experimental and mathematical analysis of the phenomenon.
He understood the relationship between the focal point and the pinhole.
In his Book of Optics (circa 1027), Ibn al-Haytham explained that rays of light travel in straight lines and are distinguished by the body that reflected them.
Latin translations of the Book of Optics from about 1200 onward seem to have been very influential in Europe. Among those Ibn al-Haytham is thought to have inspired are Witelo, John Peckham, Roger Bacon, Leonardo da Vinci, René Descartes and Johannes Kepler. However, On the Shape of the Eclipse remained available exclusively in Arabic until the 20th century, and no comparable explanation was found in Europe before Kepler addressed the problem. It was actually al-Kindi's work, and especially the widely circulated pseudo-Euclidean De Speculis, that was cited by the early scholars who were interested in pinhole images.
In his 1088 book, Dream Pool Essays, the Song dynasty Chinese scientist Shen Kuo (1031–1095) compared the focal point of a concave burning-mirror and the "collecting" hole of camera obscura phenomena to an oar in a rowlock to explain how the images were inverted.
Shen Kuo also responded to a statement of Duan Chengshi in Miscellaneous Morsels from Youyang written in about 840 that the inverted image of a Chinese pagoda tower beside a seashore, was inverted because it was reflected by the sea: "This is nonsense. It is a normal principle that the image is inverted after passing through the small hole."
English statesman and scholastic philosopher Robert Grosseteste (c. 1175 – 9 October 1253) was one of the earliest Europeans who commented on the camera obscura.
English philosopher and Franciscan friar Roger Bacon (c. 1219/20 – c. 1292) falsely stated in his De Multiplicatione Specierum (1267) that an image projected through a square aperture was round because light travels in spherical waves and therefore assumes its natural shape after passing through a hole. He is also credited with a manuscript that advised studying solar eclipses safely by observing the rays passing through some round hole and studying the spot of light they form on a surface.
A picture of a three-tiered camera obscura (see illustration) has been attributed to Bacon, but the source for this attribution is not given. A very similar picture is found in Athanasius Kircher's Ars Magna Lucis et Umbrae (1646).
Polish friar, theologian, physicist, mathematician and natural philosopher Vitello wrote about the camera obscura in his influential treatise Perspectiva (circa 1270–1278), which was largely based on Ibn al-Haytham's work.
English archbishop and scholar John Peckham (circa 1230 – 1292) wrote about the camera obscura in his Tractatus de Perspectiva (circa 1269–1277) and Perspectiva communis (circa 1277–79), falsely arguing that light gradually forms the circular shape after passing through the aperture. His writings were influenced by Bacon.
At the end of the 13th century, Arnaldus de Villa Nova is credited with using a camera obscura to project live performances for entertainment.
French astronomer Guillaume de Saint-Cloud suggested in his 1292 work Almanach Planetarum that the eccentricity of the Sun could be determined with the camera obscura from the inverse proportion between the distances and the apparent solar diameters at apogee and perigee.
Kamāl al-Dīn al-Fārisī (1267–1319) described in his 1309 work Kitab Tanqih al-Manazir (The Revision of the Optics) how he experimented with a glass sphere filled with water in a camera obscura with a controlled aperture and found that the colors of the rainbow are phenomena of the decomposition of light.
French Jewish philosopher, mathematician, physicist and astronomer/astrologer Levi ben Gershon (1288–1344) (also known as Gersonides or Leo de Balneolis) made several astronomical observations using a camera obscura with a Jacob's staff, describing methods to measure the angular diameters of the Sun, the Moon and the bright planets Venus and Jupiter. He determined the eccentricity of the Sun based on his observations of the summer and winter solstices in 1334. Levi also noted how the size of the aperture determined the size of the projected image. He wrote about his findings in Hebrew in his treatise Sefer Milhamot Ha-Shem (The Wars of the Lord) Book V Chapters 5 and 9.
1450 to 1600: Depiction, lenses, drawing aid, mirrors
Italian polymath Leonardo da Vinci (1452–1519), familiar with the work of Alhazen in Latin translation and having extensively studied the physics and physiological aspects of optics, wrote the oldest known clear description of the camera obscura in 1502, found in the Codex Atlanticus.
These descriptions, however, would remain unknown until Venturi deciphered and published them in 1797.
Da Vinci was clearly very interested in the camera obscura: over the years he drew approximately 270 diagrams of the camera obscura in his notebooks. He systematically experimented with various shapes and sizes of apertures and with multiple apertures (1, 2, 3, 4, 8, 16, 24, 28 and 32). He compared the working of the eye to that of the camera obscura and seemed especially interested in its capability of demonstrating basic principles of optics: the inversion of images through the pinhole or pupil, the non-interference of images and the fact that images are "all in all and all in every part".
The oldest known published drawing of a camera obscura is found in Dutch physician, mathematician and instrument maker Gemma Frisius' 1545 book De Radio Astronomica et Geometrica, in which he described and illustrated how he used the camera obscura to study the solar eclipse of 24 January 1544.
Italian polymath Gerolamo Cardano described using a glass disc – probably a biconvex lens – in a camera obscura in his 1550 book De subtilitate, vol. I, Libri IV. He suggested to use it to view "what takes place in the street when the sun shines" and advised to use a very white sheet of paper as a projection screen so the colours would not be dull.
Sicilian mathematician and astronomer Francesco Maurolico (1494–1575) answered Aristotle's problem of how sunlight shining through rectangular holes can form round spots of light, or crescent-shaped spots during an eclipse, in his treatise Photismi de lumine et umbra (1521–1554). However, this was not published until 1611, after Johannes Kepler had published similar findings of his own.
Italian polymath Giambattista della Porta described the camera obscura, which he called "obscurum cubiculum", in the 1558 first edition of his book series Magia Naturalis. He suggested using a convex lens to project the image onto paper and using this as a drawing aid. Della Porta compared the human eye to the camera obscura: "For the image is let into the eye through the eyeball just as here through the window". The popularity of della Porta's books helped spread knowledge of the camera obscura.
In his 1567 work La Pratica della Perspettiva, Venetian nobleman Daniele Barbaro (1513–1570) described using a camera obscura with a biconvex lens as a drawing aid and pointed out that the picture is more vivid if the lens is covered as much as to leave a circumference in the middle.
In his influential and meticulously annotated Latin edition of the works of Ibn al-Haytham and Witelo, (1572), German mathematician Friedrich Risner proposed a portable camera obscura drawing aid; a lightweight wooden hut with lenses in each of its four walls that would project images of the surroundings on a paper cube in the middle. The construction could be carried on two wooden poles. A very similar setup was illustrated in 1645 in Athanasius Kircher's influential book Ars Magna Lucis Et Umbrae.
Around 1575 Italian Dominican priest, mathematician, astronomer, and cosmographer Ignazio Danti designed a camera obscura gnomon and a meridian line for the Basilica of Santa Maria Novella, Florence, and he later had a massive gnomon built in the San Petronio Basilica in Bologna. The gnomon was used to study the movements of the Sun during the year and helped in determining the new Gregorian calendar, for which Danti took part in the commission appointed by Pope Gregorius XIII and instituted in 1582.
In his 1585 book Diversarum Speculationum Mathematicarum Venetian mathematician Giambattista Benedetti proposed to use a mirror in a 45-degree angle to project the image upright. This leaves the image reversed, but would become common practice in later camera obscura boxes.
Giambattista della Porta added a "lenticular crystal" or biconvex lens to the camera obscura description in the 1589 second edition of Magia Naturalis. He also described use of the camera obscura to project hunting scenes, banquets, battles, plays, or anything desired on white sheets. Trees, forests, rivers, mountains "that are really so, or made by Art, of Wood, or some other matter" could be arranged on a plain in the sunshine on the other side of the camera obscura wall. Little children and animals (for instance handmade deer, wild boars, rhinos, elephants, and lions) could perform in this set. "Then, by degrees, they must appear, as coming out of their dens, upon the Plain: The Hunter he must come with his hunting Pole, Nets, Arrows, and other necessaries, that may represent hunting: Let there be Horns, Cornets, Trumpets sounded: those that are in the Chamber shall see Trees, Animals, Hunters Faces, and all the rest so plainly, that they cannot tell whether they be true or delusions: Swords drawn will glister in at the hole, that they will make people almost afraid."
Della Porta claimed to have shown such spectacles often to his friends. They admired it very much and could hardly be convinced by della Porta's explanations that what they had seen was really an optical trick.
1600 to 1650: Name coined, camera obscura telescopy, portable drawing aid in tents and boxes
Detail of Scheiner's Oculus hoc est (1619) frontispiece with a camera obscura's projected image reverted by a lens
The earliest use of the term camera obscura is found in the 1604 book Ad Vitellionem Paralipomena by German mathematician, astronomer, and astrologer Johannes Kepler. Kepler discovered the working of the camera obscura by recreating its principle with a book replacing a shining body and sending threads from its edges through a many-cornered aperture in a table onto the floor where the threads recreated the shape of the book. He also realized that images are "painted" inverted and reversed on the retina of the eye and figured that this is somehow corrected by the brain.
In 1607, Kepler studied the Sun in his camera obscura and noticed a sunspot, but he thought it was Mercury transiting the Sun.
In his 1611 book Dioptrice, Kepler described how the projected image of the camera obscura can be improved and reverted with a lens. It is believed he later used a telescope with three lenses to revert the image in the camera obscura.
In 1611, Frisian/German astronomers David and Johannes Fabricius (father and son) studied sunspots with a camera obscura, after realizing that looking at the Sun directly with the telescope could damage their eyes. They are thought to have combined the telescope and the camera obscura into camera obscura telescopy (Surdin, V., and M. Kartashev, "Light in a dark room", Quantum 9.6 (1999): 40).
In 1612, Italian mathematician Benedetto Castelli wrote to his mentor, the Italian astronomer, physicist, engineer, philosopher, and mathematician Galileo Galilei about projecting images of the Sun through a telescope (invented in 1608) to study the recently discovered sunspots. Galilei wrote about Castelli's technique to the German Jesuit priest, physicist, and astronomer Christoph Scheiner.
From 1612 to at least 1630, Christoph Scheiner would keep on studying sunspots and constructing new telescopic solar-projection systems. He called these "Heliotropii Telioscopici", later contracted to helioscope. For his helioscope studies, Scheiner built a box around the viewing/projecting end of the telescope, which can be seen as the oldest known version of a box-type camera obscura. Scheiner also made a portable camera obscura.
In his 1613 book Opticorum Libri Sex Belgian Jesuit mathematician, physicist, and architect François d'Aguilon described how some charlatans cheated people out of their money by claiming they knew necromancy and would raise the specters of the devil from hell to show them to the audience inside a dark room. The image of an assistant with a devil's mask was projected through a lens into the dark room, scaring the uneducated spectators.
By 1620 Kepler used a portable camera obscura tent with a modified telescope to draw landscapes. It could be turned around to capture the surroundings in parts.
Dutch inventor Cornelis Drebbel is thought to have constructed a box-type camera obscura which corrected the inversion of the projected image. In 1622, he sold one to the Dutch poet, composer, and diplomat Constantijn Huygens, who used it to paint and recommended it to his artist friends. Huygens described the device in a letter to his parents, originally written in French.
German Orientalist, mathematician, inventor, poet, and librarian Daniel Schwenter wrote in his 1636 book Deliciae Physico-Mathematicae about an instrument that a man from Pappenheim had shown him, which enabled movement of a lens to project more from a scene through a camera obscura. It consisted of a ball as big as a fist, through which a hole (AB) was made with a lens attached on one side (B). This ball was placed inside two-halves of part of a hollow ball that were then glued together (CD), in which it could be turned around. This device was attached to a wall of the camera obscura (EF). This universal joint mechanism was later called a scioptic ball.
In his 1637 book Dioptrique French philosopher, mathematician and scientist René Descartes suggested placing an eye of a recently dead man (or if a dead man was unavailable, the eye of an ox) into an opening in a darkened room and scraping away the flesh at the back until one could see the inverted image formed on the retina.
Italian Jesuit philosopher, mathematician, and astronomer Mario Bettini wrote about making a camera obscura with twelve holes in his Apiaria universae philosophiae mathematicae (1642). When a foot soldier would stand in front of the camera, a twelve-person army of soldiers making the same movements would be projected.
French mathematician, Minim friar, and painter of anamorphic art Jean-François Nicéron (1613–1646) wrote about the camera obscura with convex lenses. He explained how the camera obscura could be used by painters to achieve perfect perspective in their work. He also complained how charlatans abused the camera obscura to fool witless spectators and make them believe that the projections were magic or occult science. These writings were published in a posthumous version of La Perspective Curieuse (1652).
1650 to 1800: Introduction of the magic lantern, popular portable box-type drawing aid, painting aid
The use of the camera obscura to project special shows to entertain an audience seems to have remained very rare. A description of what was most likely such a show in 1656 in France, was penned by the poet Jean Loret, who expressed how rare and novel it was. The Parisian society were presented with upside-down images of palaces, ballet dancing and battling with swords. Loret felt somewhat frustrated that he did not know the secret that made this spectacle possible. There are several clues that this may have been a camera obscura show, rather than a very early magic lantern show, especially in the upside-down image and Loret's surprise that the energetic movements made no sound.
German Jesuit scientist Gaspar Schott heard from a traveler about a small camera obscura device he had seen in Spain, which one could carry under one arm and could be hidden under a coat. He then constructed his own sliding box camera obscura, which could focus by sliding a wooden box part fitted inside another wooden box part. He wrote about this in his 1657 Magia universalis naturæ et artis (volume 1 – book 4 "Magia Optica" pages 199–201).
By 1659 the magic lantern was introduced and partly replaced the camera obscura as a projection device, while the camera obscura mostly remained popular as a drawing aid. The magic lantern can be regarded as a (box-type) camera obscura device that projects images rather than actual scenes. In 1668, Robert Hooke described the difference for an installation to project the delightful "various apparitions and disappearances, the motions, changes and actions" by means of a broad convex-glass in a camera obscura setup: "if the picture be transparent, reflect the rays of the sun so as that they may pass through it towards the place where it is to be represented; and let the picture be encompassed on every side with a board or cloth that no rays may pass beside it. If the object be a statue or some living creature, then it must be very much enlightened by casting the sun beams on it by refraction, reflexion, or both." For models that can't be inverted, like living animals or candles, he advised: "let two large glasses of convenient spheres be placed at appropriate distances".
The 17th century Dutch Masters, such as Johannes Vermeer, were known for their magnificent attention to detail. It has been widely speculated that they made use of the camera obscura, but the extent of their use by artists at this period remains a matter of fierce contention, recently revived by the Hockney–Falco thesis.
German philosopher Johann Sturm published an illustrated article about the construction of a portable camera obscura box with a 45° mirror and an oiled paper screen in the first volume of the proceedings of the Collegium Curiosum, Collegium Experimentale, sive Curiosum (1676).
Johann Zahn's Oculus Artificialis Teledioptricus Sive Telescopium, published in 1685, contains many descriptions, diagrams, illustrations and sketches of both the camera obscura and the magic lantern. A hand-held device with a mirror-reflex mechanism was first proposed by Johann Zahn in 1685, a design that would later be used in photographic cameras.
The scientist Robert Hooke presented a paper in 1694 to the Royal Society, in which he described a portable camera obscura. It was a cone-shaped box which fit onto the head and shoulders of its user.
From the beginning of the 18th century, craftsmen and opticians would make camera obscura devices in the shape of books, which were much appreciated by lovers of optical devices.
One chapter in the Conte Algarotti's Saggio sopra Pittura (1764) is dedicated to the use of a camera obscura ("optic chamber") in painting.
By the 18th century, following developments by Robert Boyle and Robert Hooke, more easily portable models in boxes became available. These were extensively used by amateur artists while on their travels, but they were also employed by professionals, including Paul Sandby and Joshua Reynolds, whose camera (disguised as a book) is now in the Science Museum in London. Such cameras were later adapted by Joseph Nicephore Niepce, Louis Daguerre and William Fox Talbot for creating the first photographs.
Role in the modern age
While the technical principles of the camera obscura have been known since antiquity, the broad use of the technical concept in producing images with a linear perspective (in paintings, maps, theatre setups, architecture and, later, photographs and movies) started in the Western Renaissance and the scientific revolution. Although Alhazen (Ibn al-Haytham) had already observed an optical effect and developed a pioneering theory of the refraction of light, he was less interested in producing images with it (compare Hans Belting 2005); the society he lived in was even hostile toward personal images (compare Aniconism in Islam).
Western artists and philosophers used the Middle Eastern findings in new frameworks of epistemic relevance. For example, Leonardo da Vinci used the camera obscura as a model of the eye, René Descartes for eye and mind, and John Locke started to use the camera obscura as a metaphor of human understanding per se. The modern use of the camera obscura as an epistemic machine had important side effects for science (Don Ihde, "Art Precedes Science: or Did the Camera Obscura Invent Modern Science?", in Instruments in Art and Science: On the Architectonics of Cultural Boundaries in the 17th Century, Helmar Schramm, Ludger Schwarte and Jan Lazardzig (eds.), Walter de Gruyter, 2008).
While the use of the camera obscura has waxed and waned, one can still be built using a few simple items: a box, tracing paper, tape, foil, a box cutter, a pencil, and a blanket to keep out the light. Homemade camera obscuras are popular primary- and secondary-school science or art projects.
In 1827, critic Vergnaud complained about the frequent use of camera obscura in producing many of the paintings at that year's Salon exhibition in Paris: "Is the public to blame, the artists, or the jury, when history paintings, already rare, are sacrificed to genre painting, and what genre at that!... that of the camera obscura." (translated from French)
British photographer Richard Learoyd has specialized in making pictures of his models and motifs with a camera obscura instead of a modern camera, combining it with the ilfochrome process which creates large grainless prints.
Other contemporary visual artists who have explicitly used camera obscura in their artworks include James Turrell, Abelardo Morell, Minnie Weisz, Robert Calafiore, Vera Lutter, Marja Pirilä, and Shi Guorui.
Digital cameras
Pinhole objectives machined out of aluminium, working on the camera obscura principle, are commercially available. Because the projected image is very dim, long exposure times or high sensitivities must be used in digital photography. The resulting image appears hazy and not very sharp, even if the objective is attached to a state-of-the-art camera body.
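To see why the exposures get long, note that a pinhole's effective f-number is its distance to the sensor divided by the hole diameter, and required exposure time grows with the square of the f-number. A rough sketch with assumed dimensions (not from any particular product):

```python
# Relative exposure for a pinhole objective vs. an f/2.8 lens
# (exposure time scales with the square of the f-number at fixed scene and ISO).

pinhole_diameter_m = 0.3e-3  # assumed 0.3 mm pinhole
flange_distance_m = 0.045    # assumed ~45 mm pinhole-to-sensor distance

f_number = flange_distance_m / pinhole_diameter_m  # f/150
slowdown = (f_number / 2.8) ** 2                   # vs. an f/2.8 lens

print(f"effective aperture ~ f/{f_number:.0f}")        # ~ f/150
print(f"~{slowdown:.0f}x the exposure time of f/2.8")  # ~ 2870x
```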
See also
Bonnington Pavilion – the first Scottish camera obscura, dating from 1708
Black mirror
Clifton Observatory
Camera lucida
History of cinema
Pepper's ghost
Notes
References
Sources
Hill, Donald R. (1993), "Islamic Science and Engineering", Edinburgh University Press, page 70.
Lindberg, D.C. (1976), "Theories of Vision from Al Kindi to Kepler", The University of Chicago Press, Chicago and London.
Nazeef, Mustapha (1940), "Ibn Al-Haitham As a Naturalist Scientist", , published proceedings of the Memorial Gathering of Al-Hacan Ibn Al-Haitham, 21 December 1939, Egypt Printing.
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd.
Omar, S.B. (1977). "Ibn al-Haitham's Optics", Bibliotheca Islamica, Chicago.
Lefèvre, Wolfgang (ed.), Inside the Camera Obscura: Optics and Art under the Spell of the Projected Image. Max Planck Institute for the History of Science.
Burkhard Walther, Przemek Zajfert: Camera Obscura Heidelberg. Black-and-white photography and texts. Historical and contemporary literature. edition merid, Stuttgart, 2006.
External links
1500s introductions
1502 beginnings
17th-century neologisms
Optical toys
Optical devices
Artistic techniques
Precursors of photography
Precursors of film | Camera obscura | [
"Materials_science",
"Engineering"
] | 7,059 | [
"Glass engineering and science",
"Optical devices"
] |
48,203 | https://en.wikipedia.org/wiki/Article%20%28grammar%29 | In grammar, an article is any member of a class of dedicated words that are used with noun phrases to mark the identifiability of the referents of the noun phrases. The category of articles constitutes a part of speech.
In English, both "the" and "a(n)" are articles, which combine with nouns to form noun phrases. Articles typically specify the grammatical definiteness of the noun phrase, but in many languages, they carry additional grammatical information such as gender, number, and case. Articles are part of a broader category called determiners, which also includes demonstratives, possessive determiners, and quantifiers. In linguistic interlinear glossing, articles are abbreviated as ART.
Types of article
Definite article
A definite article is an article that marks a definite noun phrase. Definite articles, such as the English the, are used to refer to a particular member of a group. It may be something that the speaker has already mentioned, or it may be otherwise something uniquely specified.
For example, Sentence 1 uses the definite article and thus, expresses a request for a particular book. In contrast, Sentence 2 uses an indefinite article and thus, conveys that the speaker would be satisfied with any book.
Give me the book.
Give me a book.
The definite article can also be used in English to indicate a specific class among other classes:
The cabbage white butterfly lays its eggs on members of the Brassica genus.
However, recent developments show that definite articles are morphological elements linked to certain noun types due to lexicalization. Under this view, it is not definiteness so much as the lexical entry attached to the article that governs the selection of a definite article.
Some languages (such as the continental North Germanic languages, Bulgarian or Romanian) have definite articles only as suffixes.
Indefinite article
An indefinite article is an article that marks an indefinite noun phrase. Indefinite articles are those such as English "a" or "an", which do not refer to a specific identifiable entity. Indefinites are commonly used to introduce a new discourse referent which can be referred back to in subsequent discussion:
A monster ate a cookie. His name is Cookie Monster.
Indefinites can also be used to generalize over entities who have some property in common:
A cookie is a wonderful thing to eat.
Indefinites can also be used to refer to specific entities whose precise identity is unknown or unimportant.
A monster must have broken into my house last night and eaten all my cookies.
A friend of mine told me that happens frequently to people who live on Sesame Street.
Indefinites also have predicative uses:
Leaving my door unlocked was a bad decision.
Indefinite noun phrases are widely studied within linguistics, in particular because of their ability to take exceptional scope.
Proper article
A proper article indicates that its noun is proper, and refers to a unique entity. It may be the name of a person, the name of a place, the name of a planet, etc. The Māori language has the proper article a, which is used for personal nouns; so, "a Pita" means "Peter". In Māori, when a personal noun has the definite or indefinite article as an important part of it, both articles are present; for example, the phrase "a Te Rauparaha", which contains both the proper article a and the definite article te, refers to the person named Te Rauparaha.
The definite article is sometimes also used with proper names, which are already specified by definition (there is just one of them). For example: the Amazon, the Hebrides. In these cases, the definite article may be considered superfluous. Its presence can be accounted for by the assumption that they are shorthand for a longer phrase in which the name is a specifier, i.e. the Amazon River, the Hebridean Islands. Where the nouns in such longer phrases cannot be omitted, the definite article is universally kept: the United States, the People's Republic of China.
This distinction can sometimes become a political matter: the former usage the Ukraine stressed the word's Russian meaning of "borderlands"; as Ukraine became a fully independent state following the collapse of the Soviet Union, it requested that formal mentions of its name omit the article. Similar shifts in usage have occurred in the names of Sudan and both Congo (Brazzaville) and Congo (Kinshasa); a move in the other direction occurred with The Gambia. In certain languages, such as French and Italian, definite articles are used with all or most names of countries: , , ; , , .
Some languages use definite articles with personal names, as in Portuguese (, literally: "the Maria"), Greek (, , , ), and Catalan (, /). Such usage also occurs colloquially or dialectally in Spanish, German, French, Italian and other languages. In Hungarian, the colloquial use of definite articles with personal names, though widespread, is considered to be a Germanism.
The definite article sometimes appears in American English nicknames such as "the Donald", referring to former president Donald Trump, and "the Gipper", referring to former president Ronald Reagan.
Partitive article
A partitive article is a type of article, sometimes viewed as a type of indefinite article, used with a mass noun such as water, to indicate a non-specific quantity of it. Partitive articles are a class of determiner; they are used in French and Italian in addition to definite and indefinite articles. (In Finnish and Estonian, the partitive is indicated by inflection.) The nearest equivalent in English is some, although it is classified as a determiner, and English uses it less than French uses du.
French:
Voulez-vous du café ?
Do you want (some) coffee?
For more information, see the article on the French partitive article.
Haida has a partitive article (a suffix) referring to "part of something or... to one or more objects of a given group or category," e.g., "he is making a boat (a member of the category of boats)."
Negative article
A negative article specifies none of its noun, and can thus be regarded as neither definite nor indefinite. On the other hand, some consider such a word to be a simple determiner rather than an article. In English, this function is fulfilled by no, which can appear before a singular or plural noun:
No man has been on this island.
No dogs are allowed here.
No one is in the room.
In German, the negative article is, among other variations, kein, in opposition to the indefinite article ein.
Ein Hund – a dog
Kein Hund – no dog
The equivalent in Dutch is geen:
een hond – a dog
geen hond – no dog
Zero article
The zero article is the absence of an article. In languages having a definite article, the lack of an article specifically indicates that the noun is indefinite. Linguists interested in X-bar theory causally link zero articles to nouns lacking a determiner. In English, the zero article rather than the indefinite is used with plurals and mass nouns, although the word "some" can be used as an indefinite plural article.
Visitors end up walking in mud.
Crosslinguistic variation
Articles are found in many Indo-European languages, Semitic languages, Polynesian languages, and even language isolates such as Basque; however, they are formally absent from many of the world's major languages including Chinese, Japanese, Korean, Mongolian, Tibetan, many Turkic languages (including Tatar, Bashkir, Tuvan and Chuvash), many Uralic languages (incl. Finnic and Saami languages), Hindi-Urdu, Punjabi, the Dravidian languages (incl. Tamil, Telugu, and Kannada), the Baltic languages, the majority of Slavic languages, the Bantu languages (incl. Swahili). In some languages that do have articles, such as some North Caucasian languages, the use of articles is optional; however, in others like English and German it is mandatory in all cases.
Linguists believe the common ancestor of the Indo-European languages, Proto-Indo-European, did not have articles. Most of the languages in this family do not have definite or indefinite articles: there is no article in Latin or Sanskrit, nor in some modern Indo-European languages, such as the families of Slavic languages (except for Bulgarian and Macedonian, which are rather distinctive among the Slavic languages in their grammar, and some Northern Russian dialects), Baltic languages and many Indo-Aryan languages. Although Classical Greek had a definite article (which has survived into Modern Greek and which bears strong functional resemblance to the German definite article, which it is related to), the earlier Homeric Greek used this article largely as a pronoun or demonstrative, whereas the earliest known form of Greek known as Mycenaean Greek did not have any articles. Articles developed independently in several language families.
Not all languages have both definite and indefinite articles, and some languages have different types of definite and indefinite articles to distinguish finer shades of meaning: for example, French and Italian have a partitive article used for indefinite mass nouns, whereas Colognian has two distinct sets of definite articles indicating focus and uniqueness, and Macedonian uses definite articles in a demonstrative sense, with a tripartite distinction (proximal, medial, distal) based on distance from the speaker or interlocutor. The words this and that (and their plurals, these and those) can be understood in English as, ultimately, forms of the definite article the (whose declension in Old English included thaes, an ancestral form of this/that and these/those).
In many languages, the form of the article may vary according to the gender, number, or case of its noun. In some languages the article may be the only indication of the case. Many languages do not use articles at all, and may use other ways of indicating old versus new information, such as topic–comment constructions.
Tables
{| class="wikitable"
|+ The articles used in some languages
|-
! Language
! definite article
! partitive article
! indefinite article
|-
|Abkhaz
|a-
|
| -k
|-
|Afrikaans
|die
|
|'n
|-
| Albanian
| -a, -ja, -i, -ri, -ni, -u, -t, -in, -un, -n, -rin, -nin, -në, -ën, -s, -së, -ës, -të, -it, -ët (all suffixes)
| disa
| një
|-
| Arabic
| al- or el- (prefix)
|
| -n
|-
| Assamese
| -tû, -ta, -ti, -khôn, -khini, -zôn, -zôni, -dal, -zûpa etc.
|
| êta, êkhôn, êzôn, êzôni, êdal, êzûpa etc.
|-
| Bengali
| -টা, -টি, -গুলো, -রা, -খানা (-ṭa, -ṭi, -gulo, -ra, -khana)
|
| একটি, একটা, কোন (ekôṭi, ekôṭa, konô)
|-
| Breton
| an, al, ar
|
| un, ul, ur
|-
| Bulgarian
| -та, -то, -ът, -ят, -те (all suffixes)
| няколко
| един/някакъв, една/някаква, едно/някакво, едни/някакви
|-
| Catalan
|el, la, l', els, les, ses, lo, los, es, sa
|
|un, una, uns, unes
|-
| Cornish
| an
|
|
|-
|Danish
|Singular: -en, -n -et, -t (all suffixes)
Plural: -ene, -ne (all suffixes)
|
|en, et
|-
| Dutch
| de, het ('t); archaic since 1945/46 but still used in names and idioms: des, der, den
|
| een ('n)
|-
| English
| the
|
| a, an
|-
| Esperanto
| la
|
|
|-
| Finnish (colloquial)
| se
|
| yks(i)
|-
| French
| le, la, l', les
| de, d', du, de la, des, de l'
| un, une, des
|-
| German
| der, die, das, des, dem, den
|
| ein, eine, einer, eines, einem, einen
|-
| Greek
| ο, η, το
|
| ένας, μία, ένα
|-
| Hawaiian
| ka, ke, nā
|
| he
|-
| Hebrew
| ha- (prefix)
|
|
|-
| Hungarian
| a, az
|
| egy
|-
| Icelandic
| -(i)nn, -(i)n, -(i)ð, -(i)na, -num, -(i)nni, -nu, -(i)ns, -(i)nnar, -nir, -nar, -(u)num, -nna (all suffixes)
|
|
|-
| Interlingua
| le
|
| un
|-
| Irish
| an, na, a' (used colloquially)
|
|
|-
| Italian
| il, lo, la, l', i, gli, le
| del, dello, della, dell', dei, degli, degl', delle
| un, uno, una, un'
|-
| Khasi
| u, ka, i, ki
|
|
|-
| Kurdish
| -eke, -ekan
| hendê, birrê
| -êk, -anêk
|-
| Latin
|
|
|
|-
| Luxembourgish
| den, déi (d'), dat (d'), dem, der
| däers/es, däer/er
| en, eng, engem, enger
|-
| Macedonian
| -от, -ов, -он, -та, -ва, -на, -то, -во, -но, -те, -ве, -не, -та, -ва, -на (all suffixes)
| неколку
| еден, една, едно, едни
|-
| Manx
| y, yn, n, ny
|
|
|-
| Malay and Indonesian
| -nya (colloquial), before names: si (usually informal), sang (more formal)
|
| se- (+ classifiers)
|-
| Māori
| te (singular), ngā (plural)
|
| he (also for "some")
|-
| Maltese
| (i)l-, (i)ċ-, (i)d-, (i)n-, (i)r-, (i)s-, (i)t-, (i)x-, (i)z-, (i)ż- (all prefixes)
|
|
|-
| Nepali
|
|
|euta, euti, ek, anek, kunai
एउटा, एउटी, एक, अनेक, कुनै
|-
| Norwegian (Bokmål)
| Singular: -en, -et, -a (all suffixes)
Plural: -ene, -a (all suffixes)
|
| en, et, ei
|-
| Norwegian (Nynorsk)
| Singular: -en, -et, -a (all suffixes)
Plural: -ane, -ene, -a (all suffixes)
|
| ein, eit, ei
|-
|Papiamento
|e
|
|un
|-
| Pashto
|
|
| yaow, yaowə, yaowa, yaowey يو, يوهٔ, يوه, يوې
|-
| Persian
| in, ān (prepositive) -e (suffixed)
|
| ye(k) (prepositive) -i (suffixed)
|-
| Portuguese
| o, a, os, as
|
| um, uma, uns, umas
|-
| Quenya
| i, in, n
|
|
|-
| Romanian
| -(u)l, -le, -(u)a, -(u)lui, -i, -lor (all suffixes)
|
| un, o, unui, unei, niște, unor
|-
| Scots
| the
|
| a
|-
| Scottish Gaelic
| an, am, a, na, nam, nan
|
|
|-
| Sindarin
| i, in, -in, -n, en
|
|
|-
| Spanish
| el, la, lo, los, las
|
| un, una, unos, unas
|-
|Swedish
|Singular: -en, -n, -et, -t (all suffixes)
Plural: -na, -a, -en (all suffixes)
|
|en, ett
|-
| Welsh
| y, yr, -'r
|
|
|-
|Yiddish
| דער (der), די (di), דאָס (dos), דעם (dem)
|
| אַ (a), אַן (an)
|}
The following examples show articles which are always suffixed to the noun:
Albanian: zog, a bird; zogu, the bird
Aramaic: שלם (shalam), peace; שלמא (shalma), the peace
Note: Aramaic is written from right to left, so an Aleph is added to the end of the word. ם becomes מ when it is not the final letter.
Assamese: "কিতাপ (kitap)", book; "কিতাপখন (kitapkhôn)": "The book"
Bengali: "বই (bôi)", book; "বইটি (bôiti)/বইটা (bôita)/বইখানা (bôikhana)" : "The Book"
Bulgarian: стол stol, chair; столът stolǎt, the chair (subject); стола stola, the chair (object)
Danish: hus, house; huset, the house; if there is an adjective: det gamle hus, the old house
Icelandic: hestur, horse; hesturinn, the horse
Macedonian: стол stol, chair; столот stolot, the chair; столов stolov, this chair; столон stolon, that chair
Persian: sib, apple. (There is no definite article in Standard Persian; it has one indefinite article, yek, meaning 'one'. In Standard Persian, a noun that is not indefinite is a definite noun. 'Sib e man' means 'my apple'; here, 'e' is like 'of' in English, so literally 'sib e man' means 'the apple of mine'. However, in Iranian Persian, "-e" is used as a definite article, quite different from Standard Persian: pesar, boy; pesare, the boy; pesare in'o be'm dād, the boy gave me this.)
Romanian: drum, road; drumul, the road (the article is just "l", "u" is a "connection vowel" )
Swedish and Norwegian: hus, house; huset, the house; if there is an adjective: det gamle (N)/gamla (S) huset, the old house
Examples of prefixed definite articles:
ילד, transcribed as yeled, a boy; הילד, transcribed as hayeled, the boy
, a book; , the book; , a donation; , the donation; , a key; , the key; , a house; , the house; , an ant; , the ant; , a head; , the head; , a bed; , the bed; , an apple; , the apple; , a month; , the month; , a carrot; , the carrot; , a time; , the time
A different way, limited to the definite article, is used by Latvian and Lithuanian.
The noun does not change but the adjective can be defined or undefined. In Latvian: galds, a table / the table; balts galds, a white table; baltais galds, the white table. In Lithuanian: stalas, a table / the table; baltas stalas, a white table; baltasis stalas, the white table.
Languages in the above table written in italics are constructed languages and are not natural, that is to say that they have been purposefully invented by an individual (or group of individuals) with some purpose in mind.
Tokelauan
When using a definite article in the Tokelauan language, unlike in some languages such as English, the speaker need not have referred to an item previously, as long as the item is specific. The same holds for references to a specific person. So, although the definite article used to describe a noun in Tokelauan is te, it can also translate to the indefinite article of languages that require the item spoken of to have been referenced prior. When translating into English, te may correspond to the English definite article the, or to the English indefinite article a. For example, in the sentence "Kua hau te tino", te can represent either any man or a particular man, so the sentence may be translated as "A man has arrived" or "The man has arrived".

The word he, the indefinite article in Tokelauan, is used to describe 'any such item', and is encountered most often with negatives and interrogatives. An example of he as an indefinite article is "Vili ake oi kaumai he toki", where 'he toki' means 'an axe'. The use of he and te in Tokelauan is reserved for describing singular nouns; different articles are used for plural nouns. For plural definite nouns, the article nā is used rather than te: 'Vili ake oi kaumai nā nofoa' translates as "Do run and bring me the chairs". In some special cases, plural definite nouns take no article at all; the absence of an article is represented by 0. This usually occurs when a large amount or a specific class of things is being described. Occasionally, as when describing an entire class of things in a nonspecific fashion, the singular definite article te is used instead: 'Ko te povi e kai mutia' means "Cows eat grass"; because this is a general statement about cows, te is used instead of nā, and the ko serves as a preposition to te. The article ni is used to describe a plural indefinite noun: 'E i ei ni tuhi?' translates as "Are there any books?"
Historical development
Articles often develop by specialization of adjectives or determiners. Their development is often a sign of languages becoming more analytic instead of synthetic, perhaps combined with the loss of inflection as in English, Romance languages, Bulgarian, Macedonian and Torlakian.
Joseph Greenberg in Universals of Human Language describes "the cycle of the definite article": Definite articles (Stage I) evolve from demonstratives, and in turn can become generic articles (Stage II) that may be used in both definite and indefinite contexts, and later merely noun markers (Stage III) that are part of nouns other than proper names and more recent borrowings. Eventually articles may evolve anew from demonstratives.
Definite articles
Definite articles typically arise from demonstratives meaning that. For example, the definite articles in most Romance languages—e.g., el, il, le, la, lo, a, o — derive from the Latin demonstratives ille (masculine), illa (feminine) and illud (neuter).
The English definite article the, written þe in Middle English, derives from an Old English demonstrative, which, according to gender, was written se (masculine), seo (feminine) (þe and þeo in the Northumbrian dialect), or þæt (neuter). The neuter form þæt also gave rise to the modern demonstrative that. The ye occasionally seen in pseudo-archaic usage such as "Ye Olde Englishe Tea Shoppe" is actually a form of þe, where the letter thorn (þ) came to be written as a y.
Multiple demonstratives can give rise to multiple definite articles. Macedonian, for example, in which the articles are suffixed, has столот (stolot), the chair; столов (stolov), this chair; and столон (stolon), that chair. These derive from the Proto-Slavic demonstratives *tъ "this, that", *ovъ "this here" and *onъ "that over there, yonder" respectively. Colognian likewise has paired definite articles, as in dat Auto versus et Auto, the car: the first is specifically selected, focused, or newly introduced, while the latter is not selected, unfocused, already known, general, or generic.
Standard Basque distinguishes between proximal and distal definite articles in the plural (dialectally, a proximal singular and an additional medial grade may also be present). The Basque distal form (with infix -a-, etymologically a suffixed and phonetically reduced form of the distal demonstrative har-/hai-) functions as the default definite article, whereas the proximal form (with infix -o-, derived from the proximal demonstrative hau-/hon-) is marked and indicates some kind of (spatial or otherwise) close relationship between the speaker and the referent (e.g., it may imply that the speaker is included in the referent): etxeak ("the houses") vs. etxeok ("these houses [of ours]"), euskaldunak ("the Basque speakers") vs. euskaldunok ("we, the Basque speakers").
Speakers of Assyrian Neo-Aramaic, a modern Aramaic language that lacks a definite article, may at times use demonstratives aha and aya (feminine) or awa (masculine) – which translate to "this" and "that", respectively – to give the sense of "the". In Indonesian, the third person possessive suffix -nya could be also used as a definite article.
Indefinite articles
Indefinite articles typically arise from adjectives meaning one. For example, the indefinite articles in the Romance languages—e.g., un, una, une—derive from the Latin adjective unus. Partitive articles, however, derive from Vulgar Latin de illo, meaning (some) of the.
The English indefinite article an is derived from the same root as one. The -n came to be dropped before consonants, giving rise to the shortened form a. The existence of both forms has led to many cases of juncture loss, for example transforming the original a napron into the modern an apron.
The Persian indefinite article is yek, meaning one.
See also
English articles
Al- (definite article in Arabic)
Definiteness
Definite description
False title
References
External links
"The Definite Article, 'The': The Most Frequently Used Word in World's Englishes"
Grammar
Parts of speech | Article (grammar) | [
"Technology"
] | 5,932 | [
"Parts of speech",
"Components"
] |
48,209 | https://en.wikipedia.org/wiki/Gas%20laws | The laws describing the behaviour of gases under fixed pressure, volume, amount of gas, and absolute temperature conditions are called gas laws. The basic gas laws were discovered by the end of the 18th century when scientists found out that relationships between pressure, volume and temperature of a sample of gas could be obtained which would hold to approximation for all gases. The combination of several empirical gas laws led to the development of the ideal gas law.
The ideal gas law was later found to be consistent with atomic and kinetic theory.
History
In 1643, the Italian physicist and mathematician, Evangelista Torricelli, who for a few months had acted as Galileo Galilei's secretary, conducted a celebrated experiment in Florence. He demonstrated that a column of mercury in an inverted tube can be supported by the pressure of air outside of the tube, with the creation of a small section of vacuum above the mercury. This experiment essentially paved the way towards the invention of the barometer, as well as drawing the attention of Robert Boyle, then a "skeptical" scientist working in England. Boyle was inspired by Torricelli's experiment to investigate how the elasticity of air responds to varying pressure, and he did this through a series of experiments with a setup reminiscent of that used by Torricelli. Boyle published his results in 1662.
Later on, in 1676, the French physicist Edme Mariotte, independently arrived at the same conclusions of Boyle, while also noting some dependency of air volume on temperature. However it took another century and a half for the development of thermometry and recognition of the absolute zero temperature scale, which eventually allowed the discovery of temperature-dependent gas laws.
Boyle's law
In 1662, Robert Boyle systematically studied the relationship between the volume and pressure of a fixed amount of gas at a constant temperature. He observed that the volume of a given mass of a gas is inversely proportional to its pressure at a constant temperature.
Boyle's law, published in 1662, states that, at a constant temperature, the product of the pressure and volume of a given mass of an ideal gas in a closed system is always constant. It can be verified experimentally using a pressure gauge and a variable volume container. It can also be derived from the kinetic theory of gases: if a container, with a fixed number of molecules inside, is reduced in volume, more molecules will strike a given area of the sides of the container per unit time, causing a greater pressure.
Statement
Boyle's law states that, at constant temperature, the volume of a given mass of a gas is inversely proportional to its pressure.
The concept can be represented with these formulae:
V ∝ 1/P, meaning "Volume is inversely proportional to Pressure", or
P ∝ 1/V, meaning "Pressure is inversely proportional to Volume", or
PV = k1, or equivalently P1V1 = P2V2,
where P is the pressure, V is the volume of a gas, and k1 is the constant in this equation (and is not the same as the proportionality constants in the other equations).
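A quick numeric sketch of Boyle's law in Python (the figures are hypothetical, chosen only to illustrate the proportionality):

```python
# Boyle's law at constant temperature: P1 * V1 = P2 * V2.
p1, v1 = 101.325, 2.0   # kPa and litres (hypothetical starting state)
v2 = 0.5                # litres after compression
p2 = p1 * v1 / v2       # solve P1*V1 = P2*V2 for P2
print(f"{p2:.1f} kPa")  # 405.3 kPa: quartering the volume quadruples the pressure
```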
Charles' law
Charles' law, or the law of volumes, was founded in 1787 by Jacques Charles. It states that, for a given mass of an ideal gas at constant pressure, the volume is directly proportional to its absolute temperature, assuming a closed system.
The statement of Charles' law is as follows:
the volume (V) of a given mass of a gas, at constant pressure (P), is directly proportional to its temperature (T).
Statement
Charles' law states that, at constant pressure, the volume of a given mass of a gas is directly proportional to its absolute temperature.
Therefore,
V ∝ T, or
V/T = k2, or
V1/T1 = V2/T2,
where V is the volume of a gas, T is the absolute temperature and k2 is a proportionality constant (which is not the same as the proportionality constants in the other equations in this article).
Gay-Lussac's law
Gay-Lussac's law, Amontons' law or the pressure law was founded by Joseph Louis Gay-Lussac in 1808.
Statement
Gay-Lussac's law states that, at constant volume, the pressure of a given mass of a gas is directly proportional to its absolute temperature.
Therefore,
P ∝ T, or
P/T = k, or
P1/T1 = P2/T2,
where P is the pressure, T is the absolute temperature, and k is another proportionality constant.
Avogadro's law
Avogadro's law, Avogadro's hypothesis, Avogadro's principle or Avogadro-Ampère's hypothesis is an experimental gas law which was hypothesized by Amedeo Avogadro in 1811. It related the volume of a gas to the amount of substance of gas present.
Statement
Avogadro's law states that, at the same temperature and pressure, equal volumes of all gases contain the same number of molecules.
This statement gives rise to the molar volume of a gas, which at STP (273.15 K, 1 atm) is about 22.4 L. The relation is given by:
V ∝ n, or V/n = k,
where n is equal to the number of molecules of gas (or the number of moles of gas).
Combined and ideal gas laws
The combined gas law or general gas equation is obtained by combining Boyle's law, Charles's law, and Gay-Lussac's law. It shows the relationship between the pressure, volume, and temperature for a fixed mass of gas:
PV/T = k
This can also be written as:
P1V1/T1 = P2V2/T2
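A short worked example of the combined gas law, again with hypothetical values:

```python
# Combined gas law for a fixed mass of gas: P1*V1/T1 = P2*V2/T2.
p1, v1, t1 = 100.0, 1.0, 300.0  # kPa, L, K (hypothetical initial state)
p2, t2 = 200.0, 350.0           # kPa, K (hypothetical final state)
v2 = p1 * v1 * t2 / (t1 * p2)   # solve for the unknown final volume
print(round(v2, 3))             # 0.583 L
```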
With the addition of Avogadro's law, the combined gas law develops into the ideal gas law:
PV = nRT
where P is the pressure, V is volume, n is the number of moles, R is the universal gas constant and T is the absolute temperature.
The proportionality constant, now named R, is the universal gas constant with a value of 8.3144598 (kPa∙L)/(mol∙K).
An equivalent formulation of this law is:
PV = NkBT
where P is the pressure, V is the volume, N is the number of gas molecules, kB is the Boltzmann constant (1.381×10−23 J·K−1 in SI units) and T is the absolute temperature.
These equations are exact only for an ideal gas, which neglects various intermolecular effects (see real gas). However, the ideal gas law is a good approximation for most gases under moderate pressure and temperature.
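As a sanity check on the ideal gas law, a minimal Python sketch using the value of R quoted above recovers approximately one mole for 22.4 L at STP:

```python
R = 8.3144598  # universal gas constant in (kPa*L)/(mol*K), as given above

def moles(p_kpa, v_litres, t_kelvin):
    """Solve PV = nRT for n, the amount of gas in moles."""
    return p_kpa * v_litres / (R * t_kelvin)

# 22.4 L at STP (101.325 kPa, 273.15 K) should be close to 1 mol.
print(round(moles(101.325, 22.4, 273.15), 3))  # 0.999
```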
This law has the following important consequences:
If temperature and pressure are kept constant, then the volume of the gas is directly proportional to the number of molecules of gas.
If the temperature and volume remain constant, then the pressure of the gas is directly proportional to the number of molecules of gas present.
If the number of gas molecules and the temperature remain constant, then the pressure is inversely proportional to the volume.
If the temperature changes and the number of gas molecules are kept constant, then either pressure or volume (or both) will change in direct proportion to the temperature.
Other gas laws
Graham's law This law states that the rate at which gas molecules diffuse is inversely proportional to the square root of the gas density at a constant temperature. Combined with Avogadro's law (i.e. since equal volumes have an equal number of molecules) this is the same as being inversely proportional to the root of the molecular weight.
Dalton's law of partial pressures This law states that the pressure of a mixture of gases simply is the sum of the partial pressures of the individual components. Dalton's law is as follows:
Ptotal = P1 + P2 + ... + Pn,
and all component gases and the mixture are at the same temperature and volume,
where Ptotal is the total pressure of the gas mixture
Pi is the partial pressure or pressure of the component gas at the given volume and temperature.
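A small sketch of Dalton's law, deriving partial pressures from mole fractions (the mixture below is a hypothetical, dry-air-like example):

```python
# Dalton's law: each partial pressure is p_i = x_i * P_total, and they sum to P_total.
p_total = 101.325  # kPa (hypothetical)
mole_fractions = {"N2": 0.78, "O2": 0.21, "Ar": 0.01}

partials = {gas: x * p_total for gas, x in mole_fractions.items()}
assert abs(sum(partials.values()) - p_total) < 1e-9  # partial pressures sum to the total
print(partials)
```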
Amagat's law of partial volumes This law states that the volume of a mixture of gases (or the volume of the container) simply is the sum of the partial volumes of the individual components. Amagat's law is as follows:
Vtotal = V1 + V2 + ... + Vn,
and all component gases and the mixture are at the same temperature and pressure,
where Vtotal is the total volume of the gas mixture or the volume of the container,
Vi is the partial volume, or volume of the component gas at the given pressure and temperature.
Henry's law This states that at constant temperature, the amount of a given gas dissolved in a given type and volume of liquid is directly proportional to the partial pressure of that gas in equilibrium with that liquid. The equation is as follows:
p = kH c,
where p is the partial pressure of the gas above the liquid, c is the concentration of the dissolved gas, and kH is the Henry's law constant.
Real gas law This was formulated by Johannes Diderik van der Waals in 1873. His equation, (P + a·n2/V2)(V − nb) = nRT, corrects the ideal gas law with substance-specific constants a and b that account for intermolecular attraction and finite molecular volume.
References
FSU (Florida State University)
External links
History of thermodynamics | Gas laws | [
"Physics",
"Chemistry"
] | 1,663 | [
"History of thermodynamics",
"Thermodynamics",
"Gas laws"
] |
48,211 | https://en.wikipedia.org/wiki/Golomb%20ruler | In mathematics, a Golomb ruler is a set of marks at integer positions along a ruler such that no two pairs of marks are the same distance apart. The number of marks on the ruler is its order, and the largest distance between two of its marks is its length. Translation and reflection of a Golomb ruler are considered trivial, so the smallest mark is customarily put at 0 and the next mark at the smaller of its two possible values. Golomb rulers can be viewed as a one-dimensional special case of Costas arrays.
The Golomb ruler was named for Solomon W. Golomb and discovered independently by Sidon (1932) and Babcock (1953). Sophie Piccard also published early research on these sets, in 1939, stating as a theorem the claim that two Golomb rulers with the same distance set must be congruent. This turned out to be false for six-point rulers, but true otherwise.
There is no requirement that a Golomb ruler be able to measure all distances up to its length, but if it does, it is called a perfect Golomb ruler. It has been proved that no perfect Golomb ruler exists for five or more marks. A Golomb ruler is optimal if no shorter Golomb ruler of the same order exists. Creating Golomb rulers is easy, but proving the optimal Golomb ruler (or rulers) for a specified order is computationally very challenging.
Distributed.net has completed massively parallel distributed searches for optimal order-24 through order-28 Golomb rulers, each time confirming the suspected candidate ruler.
Currently, the complexity of finding optimal Golomb rulers (OGRs) of arbitrary order n (where n is given in unary) is unknown. In the past there was some speculation that it is an NP-hard problem. Problems related to the construction of Golomb rulers are provably shown to be NP-hard, where it is also noted that no known NP-complete problem has similar flavor to finding Golomb rulers.
Definitions
Golomb rulers as sets
A set of integers A = {a1, a2, ..., am} where a1 < a2 < ... < am is a Golomb ruler if and only if
for all i, j, k, l in {1, 2, ..., m}, ai − aj = ak − al holds only when i = k and j = l.
The order of such a Golomb ruler is m and its length is am − a1. The canonical form has a1 = 0 and, if m > 2, a2 < am − am−1. Such a form can be achieved through translation and reflection.
Golomb rulers as functions
An injective function f : {1, 2, ..., m} → {0, 1, ..., n} with f(1) = 0 and f(m) = n is a Golomb ruler if and only if
for all i, j, k, l in {1, 2, ..., m}, f(i) − f(j) = f(k) − f(l) holds only when i = k and j = l.
The order of such a Golomb ruler is m and its length is n. The canonical form has
f(2) < n − f(m − 1) if m > 2.
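The definition translates directly into a short test: a set of marks is a Golomb ruler exactly when all pairwise differences are distinct. A minimal Python sketch:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """Check that all pairwise differences between marks are distinct."""
    marks = sorted(marks)
    diffs = {b - a for a, b in combinations(marks, 2)}
    # A ruler of order m must yield exactly m*(m-1)/2 distinct differences.
    return len(diffs) == len(marks) * (len(marks) - 1) // 2

print(is_golomb_ruler([0, 1, 4, 9, 11]))  # True: an optimal order-5 ruler
print(is_golomb_ruler([0, 1, 2, 5]))      # False: the difference 1 occurs twice
```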
Optimality
A Golomb ruler of order m with length n may be optimal in either of two respects:
It may be optimally dense, exhibiting maximal m for the specific value of n,
It may be optimally short, exhibiting minimal n for the specific value of m.
The general term optimal Golomb ruler is used to refer to the second type of optimality.
Practical applications
Information theory and error correction
Golomb rulers are used within information theory related to error correcting codes.
Radio frequency selection
Golomb rulers are used in the selection of radio frequencies to reduce the effects of intermodulation interference with both terrestrial and extraterrestrial applications.
Radio antenna placement
Golomb rulers are used in the design of phased arrays of radio antennas. In radio astronomy one-dimensional synthesis arrays can have the antennas in a Golomb ruler configuration in order to obtain minimum redundancy of the Fourier component sampling.
Current transformers
Multi-ratio current transformers use Golomb rulers to place transformer tap points.
Methods of construction
A number of construction methods produce asymptotically optimal Golomb rulers.
Erdős–Turán construction
The following construction, due to Paul Erdős and Pál Turán, produces a Golomb ruler for every odd prime p: the marks are 2pk + (k² mod p) for k = 0, 1, ..., p − 1.
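A minimal Python sketch of the construction, using the formula above, together with a distinctness check:

```python
from itertools import combinations

def erdos_turan_ruler(p):
    """Marks of the Erdos-Turan construction for an odd prime p."""
    return [2 * p * k + (k * k) % p for k in range(p)]

ruler = erdos_turan_ruler(7)
diffs = [b - a for a, b in combinations(ruler, 2)]
print(ruler)                          # [0, 15, 32, 44, 58, 74, 85]
print(len(diffs) == len(set(diffs)))  # True: all pairwise differences are distinct
```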
Known optimal Golomb rulers
The following table contains all known optimal Golomb rulers, excluding those with marks in the reverse order. The first four are perfect.
The optimal ruler would have been known before this date; this date represents the date when it was discovered to be optimal (because all other rulers were proved not to be smaller). For example, the ruler that turned out to be optimal for order 26 was recorded on , but it was not known to be optimal until all other possibilities were exhausted on .
See also
Costas array
Volunteer computing
BOINC
distributed.net
Perfect ruler
Sidon sequence
Sparse ruler
References
External links
James B. Shearer's Golomb ruler pages
distributed.net: Project OGR
In Search Of The Optimal 20, 21 & 22 Mark Golomb Rulers
Golomb rulers up to length of over 200
Antennas (radio)
Distributed computing projects
Length, distance, or range measuring devices
Number theory | Golomb ruler | [
"Mathematics",
"Engineering"
] | 958 | [
"Discrete mathematics",
"Distributed computing projects",
"Number theory",
"Information technology projects"
] |
48,230 | https://en.wikipedia.org/wiki/Polychlorinated%20biphenyl | Polychlorinated biphenyls (PCBs) are organochlorine compounds with the formula C12H10−xClx; they were once widely used in the manufacture of carbonless copy paper, as heat transfer fluids, and as dielectric and coolant fluids for electrical equipment. They are highly toxic and carcinogenic chemical compounds, formerly used in industrial and consumer electronic products, whose production was banned internationally by the Stockholm Convention on Persistent Organic Pollutants in 2001.
Because of their longevity, PCBs are still widely in use, even though their manufacture has declined drastically since the 1960s, when a host of problems were identified. With the discovery of PCBs' environmental toxicity, and classification as persistent organic pollutants, their production was banned for most uses by United States federal law on January 1, 1978.
The International Agency for Research on Cancer (IARC) has classified PCBs as definite human carcinogens. According to the U.S. Environmental Protection Agency (EPA), PCBs cause cancer in animals and are probable human carcinogens. Moreover, because of their use as a coolant in electric transformers, PCBs still persist in built environments.
Some PCBs share a structural similarity and toxic mode of action with dioxins. Other toxic effects such as endocrine disruption (notably blocking of thyroid system functioning) and neurotoxicity are known. The bromine analogues of PCBs are polybrominated biphenyls (PBBs), which have analogous applications and environmental concerns.
An estimated 1.2 million tons have been produced globally. Though the US EPA enforced the federal ban as of 1978, PCBs continued to create health problems in later years through their continued presence in soil and sediment, and from products which were made before 1979. In 1988, Japanese scientists Tanabe et al. estimated 370,000 tons were in the environment globally, and 780,000 tons were present in products, landfills, and dumps or kept in storage.
Physical and chemical properties
Physical properties
The compounds are pale-yellow viscous liquids. They are hydrophobic, with low water solubilities: 0.0027–0.42 ng/L for the Aroclor mixtures, but they have high solubilities in most organic solvents, oils, and fats. They have low vapor pressures at room temperature. They have dielectric constants of 2.5–2.7, very high thermal conductivity, and high flash points (from 170 to 380 °C).
The density varies from 1.182 to 1.566 g/cm3. Other physical and chemical properties vary widely across the class. As the degree of chlorination increases, melting point and lipophilicity increase, and vapour pressure and water solubility decrease.
PCBs do not easily break down or degrade, which made them attractive for industries. PCB mixtures are resistant to acids, bases, oxidation, hydrolysis, and temperature change. They can generate extremely toxic dibenzodioxins and dibenzofurans through partial oxidation. Intentional degradation as a treatment of unwanted PCBs generally requires high heat or catalysis (see Methods of destruction below).
PCBs readily penetrate skin, PVC (polyvinyl chloride), and latex (natural rubber). PCB-resistant materials include Viton, polyethylene, polyvinyl acetate (PVA), polytetrafluoroethylene (PTFE), butyl rubber, nitrile rubber, and Neoprene.
Structure and toxicity
PCBs are derived from biphenyl, which has the formula C12H10, sometimes written (C6H5)2. In PCBs, some of the hydrogen atoms in biphenyl are replaced by chlorine atoms. There are 209 different chemical compounds in which one to ten chlorine atoms can replace hydrogen atoms. PCBs are typically used as mixtures of compounds and are given the single identifying CAS number 1336-36-3. About 130 different individual PCBs are found in commercial PCB products.
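The count of 209 can be verified computationally. The sketch below (a Python illustration; it assumes two chlorination patterns describe the same congener when they differ only by flipping either ring or exchanging the two rings) enumerates all 2^10 substitution patterns and counts the distinct equivalence classes:

```python
from itertools import product

# Positions 0-4 are the substitutable carbons (2,3,4,5,6) of ring A;
# positions 5-9 are the corresponding carbons (2',3',4',5',6') of ring B.
FLIP = (4, 3, 2, 1, 0)  # flipping a ring swaps 2<->6 and 3<->5, leaving 4 fixed

def equivalents(pattern):
    """Yield the 8 patterns equivalent under ring flips and ring exchange."""
    a, b = pattern[:5], pattern[5:]
    for ra in (a, tuple(a[i] for i in FLIP)):
        for rb in (b, tuple(b[i] for i in FLIP)):
            yield ra + rb
            yield rb + ra

# Canonicalize every 0/1 chlorination pattern and count distinct classes.
classes = {min(equivalents(p)) for p in product((0, 1), repeat=10)}
print(len(classes) - 1)  # 209, after subtracting unchlorinated biphenyl itself
```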
Toxic effects vary depending on the specific PCB. In terms of their structure and toxicity, PCBs fall into two distinct categories, referred to as coplanar or non-ortho-substituted arene substitution patterns and noncoplanar or ortho-substituted congeners.
Coplanar or non-ortho
The coplanar group members have a fairly rigid structure, with their two phenyl rings in the same plane. It renders their structure similar to polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans, and allows them to act like PCDDs, as an agonist of the aryl hydrocarbon receptor (AhR) in organisms. They are considered as contributors to overall dioxin toxicity, and the term dioxins and dioxin-like compounds is often used interchangeably when the environmental and toxic impact of these compounds is considered.
Noncoplanar
Noncoplanar PCBs, with chlorine atoms at the ortho positions can cause neurotoxic and immunotoxic effects, but only at concentrations much higher than those normally associated with dioxins. However, as they are typically found at much higher levels in biological and environmental samples they also pose health concerns, particularly to developing animals (including humans). As they do not activate the AhR, they are not considered part of the dioxin group. Because of their lower overt toxicity, they have typically been of lesser concern to regulatory bodies.
Di-ortho-substituted, non-coplanar PCBs interfere with intracellular signal transduction dependent on calcium which may lead to neurotoxicity. ortho-PCBs can disrupt thyroid hormone transport by binding to transthyretin.
Mixtures and trade names
Commercial PCB mixtures were marketed under the following names:
Brazil
Ascarel
Czech Republic and Slovakia
Delor
France
Phenoclor
Pyralène (both used by Prodolec)
Germany
Clophen (used by Bayer)
Italy
Apirolio
Fenclor
Japan
Kanechlor (used by Kanegafuchi)
Santotherm (used by Mitsubishi)
Pyroclor
Former USSR
Sovol
Sovtol
United Kingdom
Aroclor xxxx (used by Monsanto Company)
Askarel
United States
Aroclor xxxx (used by Monsanto Company)
Asbestol
Askarel
Bakola131
Chlorextol – Allis-Chalmers trade name
Dykanol (Cornell-Dubilier)
Hydol
Inerteen (used by Westinghouse)
Noflamol
Pyranol/Pyrenol, Clorinol (used in General Electric's oil-filled "clorinol"-branded metal can capacitors. Utilized from the early 1960s to late 1970s in air conditioning units, Seeburg jukeboxes and Zenith televisions)
Saf-T-Kuhl
Therminol FR Series (Monsanto ceased production in 1971).
Aroclor mixtures
The only North American producer, Monsanto Company, marketed PCBs under the trade name Aroclor from 1930 to 1977. These were sold under trade names followed by a four-digit number. In general, the first two digits refer to the product series as designated by Monsanto (e.g. 1200 or 1100 series); the second two numbers indicate the percentage of chlorine by mass in the mixture. Thus, Aroclor 1260 is a 1200 series product and contains 60% chlorine by mass. It is a myth that the first two digits referred to the number of carbon atoms; the number of carbon atoms does not change in PCBs. The 1100 series was a crude PCB material which was distilled to create the 1200 series PCB product.
The exception to the naming system is Aroclor 1016 which was produced by distilling 1242 to remove the highly chlorinated congeners to make a more biodegradable product. "1016" was given to this product during Monsanto's research stage for tracking purposes but the name stuck after it was commercialized.
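As a rough illustration of the naming scheme just described, the hypothetical Python helper below splits a four-digit Aroclor code into its series and chlorine percentage, treating 1016 as the documented exception:

```python
def parse_aroclor(code):
    """Split a four-digit Aroclor code into product series and % chlorine by mass.

    Hypothetical helper for illustration only; Aroclor 1016 does not follow
    the convention (it is distilled 1242, not a 16%-chlorine product).
    """
    if code == "1016":
        raise ValueError("Aroclor 1016 is an exception to the naming convention")
    return code[:2] + "00 series", int(code[2:])

print(parse_aroclor("1260"))  # ('1200 series', 60): i.e. 60% chlorine by mass
```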
Different Aroclors were used at different times and for different applications. In electrical equipment manufacturing in the US, Aroclor 1260 and Aroclor 1254 were the main mixtures used before 1950; Aroclor 1242 was the main mixture used in the 1950s and 1960s until it was phased out in 1971 and replaced by Aroclor 1016.
Production
One estimate (2006) suggested that 1 million tonnes of PCBs had been produced. 40% of this material was thought to remain in use. Another estimate put the total global production of PCBs on the order of 1.5 million tonnes. The United States was the single largest producer with over 600,000 tonnes produced between 1930 and 1977. The European region follows with nearly 450,000 tonnes through 1984. It is unlikely that a full inventory of global PCB production will ever be accurately tallied, as there were factories in Poland, East Germany, and Austria that produced unknown amounts of PCBs. , there were still 21,500 tons of PCBs stored in the easternmost regions of Slovakia.
Although deliberate production of PCBs is banned by international treaty, significant amounts of PCBs are still being "inadvertently" produced. Research suggests that 45,000 tons of 'by-product' PCBs are legally produced per year in the US as part of certain chemical and product formulations.
Commercial production of PCBs was banned in the United States in 1979, with the passage of the Toxic Substances Control Act (TSCA).
Applications
The utility of PCBs is based largely on their chemical stability, including low flammability and high dielectric constant. In an electric arc, PCBs generate incombustible gases.
Use of PCBs is commonly divided into closed and open applications. Examples of closed applications include coolants and insulating fluids (transformer oil) for transformers and capacitors, such as those used in old fluorescent light ballasts, and hydraulic fluids considered a semi-closed application. In contrast, the major open application of PCBs was in carbonless copy ("NCR") paper, which even presently results in paper contamination.
Other open applications were lubricating and cutting oils, and as plasticizers in paints and cements, stabilizing additives in flexible PVC coatings of electrical cables and electronic components, pesticide extenders, reactive flame retardants and sealants for caulking, adhesives, wood floor finishes, such as Fabulon and other products of Halowax in the U.S., de-dusting agents, waterproofing compounds, casting agents. It was also used as a plasticizer in paints and especially "coal tars" that were used widely to coat water tanks, bridges and other infrastructure pieces.
Modern sources include pigments, which may be used in inks for paper or plastic products. PCBs are also still found in old equipment like capacitors, ballasts, X-ray machine, and other e-waste.
Environmental transport and transformations
PCBs have entered the environment through both use and disposal. The environmental fate of PCBs is complex and global in scale.
Water
Because of their low vapour pressure, PCBs accumulate primarily in the hydrosphere, despite their hydrophobicity, in the organic fraction of soil, and in organisms including the human body. The hydrosphere is the main reservoir. The immense volume of water in the oceans is still capable of dissolving a significant quantity of PCBs.
As the pressure of ocean water increases with depth, PCBs become heavier than water and sink to the deepest ocean trenches where they are concentrated.
Air
A small volume of PCBs has been detected throughout the Earth's atmosphere. The atmosphere serves as the primary route for global transport of PCBs, particularly for those congeners with one to four chlorine atoms.
In the atmosphere, PCBs may be degraded by hydroxyl radicals, or directly by photolysis of carbon–chlorine bonds (even if this is a less important process).
Atmospheric concentrations of PCBs tend to be lowest in rural areas, where they are typically in the picogram per cubic meter range, higher in suburban and urban areas, and highest in city centres, where they can reach 1 ng/m3 or more. In Milwaukee, an atmospheric concentration of 1.9 ng/m3 has been measured, and this source alone was estimated to account for 120 kg/year of PCBs entering Lake Michigan. In 2008, concentrations as high as 35 ng/m3, 10 times higher than the EPA guideline limit of 3.4 ng/m3, have been documented inside some houses in the U.S.
Volatilization of PCBs in soil was thought to be the primary source of PCBs in the atmosphere, but research suggests ventilation of PCB-contaminated indoor air from buildings is the primary source of PCB contamination in the atmosphere.
Biosphere
In the biosphere, PCBs can be degraded by the sun, bacteria or eukaryotes, but the speed of the reaction depends on both the number and the disposition of chlorine atoms in the molecule: less substituted, meta- or para-substituted PCBs undergo biodegradation faster than more substituted congeners.
In bacteria, PCBs may be dechlorinated through reductive dechlorination, or oxidized by dioxygenase enzyme. In eukaryotes, PCBs may be oxidized by the cytochrome P450 enzyme.
Like many lipophilic toxins, PCBs undergo biomagnification and bioaccumulation primarily due to the fact that they are easily retained within organisms.
Plastic pollution, specifically microplastics, are a major contributor of PCBs into the biosphere and especially into marine environments. PCBs concentrate in marine environments because freshwater systems, like rivers, act as a bridge for plastic pollution to be transported from terrestrial environments into marine environments. It has been estimated that 88–95% of marine plastic is exported into the ocean by just 10 major rivers.
An organism can accumulate PCBs by consuming other organisms that have previously ingested PCBs from terrestrial, freshwater, or marine environments. The concentration of PCBs within an organism will increase over their lifetime; this process is called bioaccumulation. PCB concentrations within an organism also change depending upon which trophic level they occupy. When an organism occupies a high trophic level, like orcas or humans, they will accumulate more PCBs than an organism that occupies a low trophic level, like phytoplankton. If enough organisms with a trophic level are killed due to the accumulation of toxins, like PCB, a trophic cascade can occur.
PCBs can cause harm to human health or even death when eaten. PCBs can be transported by birds from aquatic sources onto land via feces and carcasses.
Biochemical metabolism
Overview
PCBs undergo xenobiotic biotransformation, a mechanism used to make lipophilic toxins more polar and more easily excreted from the body. The biotransformation is dependent on the number of chlorine atoms present, along with their position on the rings. Phase I reactions occur by adding an oxygen to either of the benzene rings by Cytochrome P450. The type of P450 present also determines where the oxygen will be added; phenobarbital (PB)-induced P450s catalyze oxygenation to the meta-para positions of PCBs while 3-methylcholanthrene (3MC)-induced P450s add oxygens to the ortho–meta positions. PCBs containing ortho–meta and meta–para protons can be metabolized by either enzyme, making them the most likely to leave the organism. However, some metabolites of PCBs containing ortho–meta protons have increased steric hindrance from the oxygen, causing increased stability and an increased chance of accumulation.
Species dependent
Metabolism is also dependent on the species of organism; different organisms have slightly different P450 enzymes that metabolize certain PCBs better than others. Looking at the PCB metabolism in the liver of four sea turtle species (green, olive ridley, loggerhead and hawksbill), green and hawksbill sea turtles have noticeably higher hydroxylation rates of PCB 52 than olive ridley or loggerhead sea turtles. This is because the green and hawksbill sea turtles have higher P450 2-like protein expression. This protein adds three hydroxyl groups to PCB 52, making it more polar and water-soluble. P450 3-like protein expression, which is thought to be linked to PCB 77 metabolism, was not measured in this study.
Temperature dependent
Temperature plays a key role in the ecology, physiology and metabolism of aquatic species. The rate of PCB metabolism was temperature dependent in yellow perch (Perca flavescens). In fall and winter, only 11 out of 72 introduced PCB congeners were excreted and had halflives of more than 1,000 days. During spring and summer when the average daily water temperature was above 20 °C, persistent PCBs had halflives of 67 days. The main excretion processes were fecal egestion, growth dilution and loss across respiratory surfaces. The excretion rate of PCBs matched with the perch's natural bioenergetics, where most of their consumption, respiration and growth rates occur during the late spring and summer. Since the perch is performing more functions in the warmer months, it naturally has a faster metabolism and has less PCB accumulation. However, multiple cold-water periods mixed with toxic PCBs with coplanar chlorine molecules can be detrimental to perch health.
Sex dependent
Enantiomers of chiral compounds have similar chemical and physical properties, but can be metabolized by the body differently. This was examined in bowhead whales (Balaena mysticetus) for two main reasons: they are large animals with slow metabolisms (meaning PCBs will accumulate in fatty tissue) and few studies have measured chiral PCBs in cetaceans. The study found that average PCB concentrations in the blubber were approximately four times higher than in the liver; however, this result is most likely age- and sex-dependent. Because reproductively active females transferred PCBs and other poisonous substances to the fetus, PCB concentrations in the blubber of females were significantly lower than in males of the same body length (less than 13 meters).
Health effects
The toxicity of PCBs varies considerably among congeners. The coplanar PCBs, known as nonortho PCBs because they are not substituted at the ring positions ortho to (next to) the other ring, (such as PCBs 77, 126 and 169), tend to have dioxin-like properties, and generally are among the most toxic congeners. Because PCBs are almost invariably found in complex mixtures, the concept of toxic equivalency factors (TEFs) has been developed to facilitate risk assessment and regulation, where more toxic PCB congeners are assigned higher TEF values on a scale from 0 to 1. One of the most toxic compounds known, 2,3,7,8-tetrachlorodibenzo[p]dioxin, a PCDD, is assigned a TEF of 1. In June 2020, State Impact of Pennsylvania stated that "In 1979, the EPA banned the use of PCBs, but they still exist in some products produced before 1979. They persist in the environment because they bind to sediments and soils. High exposure to PCBs can cause birth defects, developmental delays, and liver changes."
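As a sketch of how TEFs are applied, the example below computes a toxic equivalent (TEQ) as the TEF-weighted sum of congener concentrations; both the TEF values and the concentrations are illustrative placeholders, not measured or regulatory figures:

```python
# TEQ = sum over congeners of TEF_i * concentration_i.
# All numbers below are illustrative placeholders.
tef = {"PCB 126": 0.1, "PCB 169": 0.03, "PCB 77": 0.0001}
conc_pg_per_g = {"PCB 126": 5.0, "PCB 169": 2.0, "PCB 77": 40.0}

teq = sum(tef[c] * conc_pg_per_g[c] for c in tef)
print(f"{teq:.3f} pg TEQ/g")  # 0.564
```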
Exposure and excretion
In general, people are exposed to PCBs overwhelmingly through food, much less so by breathing contaminated air, and least by skin contact. Once exposed, some PCBs may change to other chemicals inside the body. These chemicals or unchanged PCBs can be excreted in feces or may remain in a person's body for years, with half lives estimated at 10–15 years. PCBs collect in body fat and milk fat. PCBs biomagnify up the food web and are present in fish and overflow of contaminated aquifers. Human infants are exposed to PCBs through breast milk or by intrauterine exposure through transplacental transfer of PCBs and are at the top of the food chain.
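Given the half-life estimate above, a short first-order-elimination sketch (assuming simple exponential decay, a simplification of real PCB kinetics) shows how slowly a body burden declines:

```python
def remaining_fraction(years, half_life_years):
    """Fraction remaining under simple first-order (exponential) elimination."""
    return 0.5 ** (years / half_life_years)

# With a 12.5-year half-life (mid-range of the 10-15 year estimate above),
# roughly 57% of a PCB body burden would still remain after a decade.
print(round(remaining_fraction(10, 12.5), 2))  # 0.57
```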
Workers recycling old equipment in the electronics recycling industry can also be exposed to PCBs.
Signs and symptoms
Humans
The most commonly observed health effects in people exposed to extremely high levels of PCBs are skin conditions, such as chloracne and rashes, but these were known to be symptoms of acute systemic poisoning dating back to 1922. Studies in workers exposed to PCBs have shown changes in blood and urine that may indicate liver damage. In Japan in 1968, 280 kg of PCB-contaminated rice bran oil was used as chicken feed, resulting in a mass poisoning, known as Yushō disease, in over 1800 people. Common symptoms included dermal and ocular lesions, irregular menstrual cycles and lowered immune responses. Other symptoms included fatigue, headaches, coughs, and unusual skin sores. Additionally, in children, there were reports of poor cognitive development. Women exposed to PCBs before or during pregnancy can give birth to children with lowered cognitive ability, immune compromise, and motor control problems.
There is evidence that crash dieters that have been exposed to PCBs have an elevated risk of health complications. Stored PCBs in the adipose tissue become mobilized into the blood when individuals begin to crash diet.
PCBs have shown toxic and mutagenic effects by interfering with hormones in the body. PCBs, depending on the specific congener, have been shown to both inhibit and imitate estradiol, the main sex hormone in females. Imitation of the estrogen compound can feed estrogen-dependent breast cancer cells, and possibly cause other cancers, such as uterine or cervical. Inhibition of estradiol can lead to serious developmental problems for both males and females, including sexual, skeletal, and mental development issues. In a cross-sectional study, PCBs were found to be negatively associated with testosterone levels in adolescent boys.
High PCB levels in adults have been shown to result in reduced levels of the thyroid hormone triiodothyronine, which affects almost every physiological process in the body, including growth and development, metabolism, body temperature, and heart rate. It also resulted in reduced immunity and increased thyroid disorders.
Animals
Animals that eat PCB-contaminated food, even for short periods of time, suffer liver damage and may die. In 1968 in Japan, 400,000 birds died after eating poultry feed that was contaminated with PCBs. Animals that ingest smaller amounts of PCBs in food over several weeks or months develop various health effects, including anemia; acne-like skin conditions (chloracne); liver, stomach, and thyroid gland injuries (including hepatocarcinoma), and thymocyte apoptosis. Other effects of PCBs in animals include changes in the immune system, behavioral alterations, and impaired reproduction. PCBs that have dioxin-like activity are known to cause a variety of teratogenic effects in animals. Exposure to PCBs causes hearing loss and symptoms similar to hypothyroidism in rats.
Cancer
In 2013, the International Agency for Research on Cancer (IARC) classified dioxin-like PCBs as human carcinogens.
According to the U.S. EPA, PCBs have been shown to cause cancer in animals and evidence supports a cancer-causing effect in humans. Per the EPA, studies have found increases in malignant melanoma and rare liver cancers in PCB workers.
In 2013, the IARC determined that the evidence for PCBs causing non-Hodgkin lymphoma is "limited" and "not consistent". In contrast an association between elevated blood levels of PCBs and non-Hodgkin lymphoma had been previously accepted. PCBs may play a role in the development of cancers of the immune system because some tests of laboratory animals subjected to very high doses of PCBs have shown effects on the animals' immune system, and some studies of human populations have reported an association between environmental levels of PCBs and immune response.
Lawsuits related to health effects
In the early 1990s, Monsanto faced several lawsuits over harm caused by PCBs from workers at companies such as Westinghouse that bought PCBs from Monsanto and used them to build electrical equipment. Monsanto and its customers, such as Westinghouse and GE, also faced litigation from third parties, such as workers at scrap yards that bought used electrical equipment and broke them down to reclaim valuable metals. Monsanto settled some of these cases and won the others, on the grounds that it had clearly told its customers that PCBs were dangerous chemicals and that protective procedures needed to be implemented.
In 2003, Monsanto and Solutia Inc., a Monsanto corporate spin-off, reached a $700 million settlement with the residents of West Anniston, Alabama, who had been affected by the manufacturing and dumping of PCBs. In a trial lasting six weeks, the jury found that "Monsanto had engaged in outrageous behavior, and held the corporations and its corporate successors liable on all six counts it considered – including negligence, nuisance, wantonness and suppression of the truth."
In 2014, the Los Angeles Superior Court found that Monsanto was not liable for cancers claimed to be from PCBs permeating the food supply of three plaintiffs who had developed non-Hodgkin's lymphoma. After a four-week trial, the jury found that Monsanto's production and sale of PCBs between 1935 and 1977 were not substantial causes of the cancer.
In 2015, the cities of Spokane, San Diego, and San Jose initiated lawsuits against Monsanto to recover cleanup costs for PCB contaminated sites, alleging that Monsanto continued to sell PCBs without adequate warnings after they knew of their toxicity. Monsanto issued a media statement concerning the San Diego case, claiming that improper use or disposal by third-parties, of a lawfully sold product, was not the company's responsibility.
In July 2015, a St Louis county court in Missouri found that Monsanto, Solutia, Pharmacia and Pfizer were not liable for a series of deaths and injuries caused by PCBs manufactured by Monsanto Chemical Company until 1977. The trial took nearly a month and the jury took a day of deliberations to return a verdict against the plaintiffs from throughout the USA. Similar cases are ongoing. "The evidence simply doesn't support the assertion that the historic use of PCB products was the cause of the plaintiffs' harms. We are confident that the jury will conclude, as two other juries have found in similar cases, that the former Monsanto Company is not responsible for the alleged injuries," a Monsanto statement said.
In May 2016, a Missouri state jury ordered Monsanto to pay $46.5 million to three plaintiffs whose exposure to PCB caused non-Hodgkin lymphoma.
In December 2016, the state of Washington filed suit in King County. The state sought damages and clean up costs related to PCBs. In March 2018 Ohio Attorney General Mike DeWine also filed a lawsuit against Monsanto over health issues posed by PCBs.
On November 21, 2019, a federal judge denied a bid by Monsanto to dismiss a lawsuit filed by LA County calling on the company to clean up cancer-causing PCBs from Los Angeles County waterways and storm sewer pipelines. The lawsuit calls for Monsanto to pay for cleanup of PCBs from dozens of waterways, including the LA River, the San Gabriel River, and the Dominguez Watershed.
In June 2020, Bayer agreed to pay $650 million to settle local lawsuits related to Monsanto's pollution of public waters in various areas of the United States with PCBs.
In 2023, over 90 Vermont school districts joined a lawsuit against Monsanto alleging that PCBs created by the company were used in the construction of their schools. The Vermont Attorney General's office also filed its own lawsuit against Monsanto related to the use of its PCBs.
History
In 1865, the first "PCB-like" chemical was discovered as a byproduct of coal tar. In 1876, the German chemist Oscar Döbner (Doebner) synthesized the first PCB in a laboratory. Since then, large amounts of PCBs have been released into the environment, to the extent that measurable amounts are found even in the feathers of bird specimens, now held in museums, that were collected before PCB production peaked.
In 1935, Monsanto Chemical Company (later Solutia Inc) took over commercial production of PCBs from Swann Chemical Company, which had begun production in 1929. PCBs, originally termed "chlorinated diphenyls", were commercially produced as mixtures of isomers at different degrees of chlorination. The electric industry used PCBs as a non-flammable replacement for mineral oil to cool and insulate industrial transformers and capacitors. PCBs were also commonly used as heat stabilizers in cables and electronic components to enhance the heat and fire resistance of PVC.
In the 1930s, the toxicity associated with PCBs and other chlorinated hydrocarbons, including polychlorinated naphthalenes, was recognized because of a variety of industrial incidents. Between 1936 and 1937, several medical cases and papers were published on the possible link between PCBs and detrimental health effects. In 1936, a U.S. Public Health Service official described the wife and child of a worker from the Monsanto Industrial Chemical Company who exhibited blackheads and pustules on their skin; the official attributed these symptoms to contact with the worker's clothing after he returned from work. In 1937, a conference about the hazards was organized at the Harvard School of Public Health, and a number of publications referring to the toxicity of various chlorinated hydrocarbons appeared before 1940.
In 1947, Robert Brown reminded chemists that Aroclors were "objectionably toxic": "Thus the maximum permissible concentration for an 8-hr. day is 1 mg. per cu. m. of air. They also produce a serious and disfiguring dermatitis".
In 1954, Kanegafuchi Chemical Co. Ltd. (Kaneka Corporation) first produced PCBs, and continued until 1972.
Through the 1960s, Monsanto Chemical Company grew increasingly aware of PCBs' harmful effects on humans and the environment, according to internal documents leaked in 2002, yet PCB manufacture and use continued with few restraints until the 1970s.
In 1966, PCBs were determined by the Swedish chemist Sören Jensen to be an environmental contaminant. According to a 1994 article in Sierra, it was Jensen who named the chemicals PCBs; previously they had simply been called "phenols" or referred to by various trade names, such as Aroclor, Kanechlor, Pyrenol, Chlorinol, and others. In 1972, PCB production plants existed in Austria, West Germany, France, the UK, Italy, Japan, Spain, the USSR, and the US.
In the early 1970s, Ward B. Stone of the New York State Department of Environmental Conservation (NYSDEC) first published his findings that PCBs were leaking from transformers and had contaminated the soil at the bottom of utility poles.
There have been allegations that Industrial Bio-Test Laboratories engaged in data falsification in testing relating to PCBs.
Existing PCB-containing products in "totally enclosed uses", such as insulating fluids in transformers and capacitors, vacuum pump fluids, and hydraulic fluid, are allowed to remain in use in the US. The public, legal, and scientific concerns about PCBs arose from research indicating they are likely carcinogens with the potential to adversely impact the environment, and therefore undesirable as commercial products. Despite active research spanning five decades, extensive regulatory action, and an effective ban on their production since the 1970s, PCBs still persist in the environment and remain a focus of attention.
Pollution due to PCBs
Belgium
In 1999, the Dioxin Affair occurred when 50 kg of PCB transformer oil was added to a stock of recycled fat used for the production of 500 tonnes of animal feed, eventually affecting around 2,500 farms in several countries. The name Dioxin Affair arose from an early misdiagnosis of dioxins as the primary contaminants, when in fact they turned out to be a relatively small part of the contamination, which was caused by thermal reactions of PCBs. The PCB congener pattern suggested the contamination was from a mixture of Aroclor 1260 and 1254. Over 9 million chickens and 60,000 pigs were destroyed because of the contamination. The extent of human health effects has been debated, in part because of the use of differing risk assessment methods. One group predicted increased cancer rates and increased rates of neurological problems in those exposed as neonates. A second study suggested carcinogenic effects were unlikely and that the primary risk would be developmental effects from exposure in pregnancy and as neonates. Two businessmen who knowingly sold the contaminated feed ingredient received two-year suspended sentences for their role in the crisis.
Italy
The Italian company Caffaro, located in Brescia, specialized in producing PCBs from 1938 to 1984, following the acquisition of the exclusive rights to use the patent in Italy from Monsanto. The pollution resulting from this factory and the case of Anniston, in the US, are the largest known cases in the world of PCB contamination in water and soil, in terms of the amount of toxic substance dispersed, size of the area contaminated, number of people involved and duration of production.
PCB levels reported by the local health authority (ASL) of Brescia since 1999 are 5,000 times above the limit set by Ministerial Decree 471/1999 for residential areas (0.001 mg/kg). As a result of this and other investigations, in June 2001 a complaint of environmental disaster was presented to the Public Prosecutor's Office of Brescia. Research on the adult population of Brescia showed that residents of some urban areas, former workers of the plant, and consumers of contaminated food have PCB levels in their bodies that are in many cases 10–20 times higher than reference values in comparable general populations. PCBs entered the human food supply through animals grazing on contaminated pastures near the factory, especially via local veal mostly eaten by farmers' families. The exposed population showed an elevated risk of non-Hodgkin lymphoma, but not of other specific cancers.
Japan
In 1968, a mixture of dioxins and PCBs contaminated rice bran oil produced in northern Kyushu. The contaminated cooking oil sickened more than 1,860 people; the resulting illness was called Yushō disease.
In Okinawa, high levels of PCB contamination in soil on Kadena Air Base were reported in 1987 at thousands of parts per million, some of the highest levels found in any pollution site in the world.
Republic of Ireland
In December 2008, a number of Irish news sources reported that testing had revealed "extremely high" levels of dioxins, by toxic equivalent, in pork products, ranging from 80 to 200 times the EU's upper safe limit of 1.5 pg WHO-TEQ per gram of fat, i.e. 0.12 to 0.3 parts per billion.
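As a sanity check on those figures, the short sketch below reproduces the quoted conversion. It assumes only that the limit is expressed per gram of fat (1 pg/g = 1 part per trillion) and is illustrative arithmetic, not part of any cited analysis.

```python
# Convert multiples of the EU dioxin TEQ limit to parts per billion.
# Assumption: the limit of 1.5 pg WHO-TEQ is per gram of fat,
# so 1 pg/g = 1 part per trillion (ppt).

EU_LIMIT_PG_PER_G = 1.5  # pg WHO-TEQ per gram of fat

for multiple in (80, 200):
    ppt = multiple * EU_LIMIT_PG_PER_G  # parts per trillion
    ppb = ppt / 1000                    # 1 ppb = 1000 ppt
    print(f"{multiple}x the limit = {ppt} ppt = {ppb} ppb")

# 80x  -> 120 ppt = 0.12 ppb; 200x -> 300 ppt = 0.3 ppb,
# matching the 0.12-0.3 ppb range quoted in the text.
```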
Brendan Smith, the Minister for Agriculture, Fisheries and Food, stated the pork contamination was caused by PCB-contaminated feed that was used on 9 of Ireland's 400 pig farms, and only one feed supplier was involved. Smith added that 38 beef farms also used the same contaminated feed, but those farms were quickly isolated and no contaminated beef entered the food chain. While the contamination was limited to just 9 pig farms, the Irish government requested the immediate withdrawal and disposal of all pork-containing products produced in Ireland and purchased since September 1, 2008. This request for withdrawal of pork products was confirmed in a press release by the Food Safety Authority of Ireland on December 6.
The incident is thought to have resulted from PCB contamination of fuel oil used in a drying burner at a single feed processor. The resulting combustion produced a highly toxic mixture of PCBs, dioxins, and furans, which was included in the feed produced and subsequently fed to a large number of pigs.
Kenya
In Kenya, a number of cases have been reported in the 2010s of thieves selling transformer oil, stolen from electric transformers, to the operators of roadside food stalls for use in deep frying. When used for frying, it is reported that transformer oil lasts much longer than regular cooking oil. The downside of this misuse of the transformer oil is the threat to the health of the consumers, due to the presence of PCBs.
Slovakia
The chemical plant Chemko in Strážske (east Slovakia) was an important producer of polychlorinated biphenyls for the former communist bloc (Comecon) until 1984. Chemko contaminated a large part of east Slovakia, especially the sediments of the Laborec river and reservoir Zemplínska šírava.
Slovenia
Between 1962 and 1983, the Iskra Kondenzatorji company in Semič (White Carniola, Southeast Slovenia) manufactured capacitors using PCBs. Due to wastewater and improperly disposed waste products, the area (including the Krupa and Lahinja rivers) became highly contaminated with PCBs. The pollution was discovered in 1983, when the Krupa river was meant to become a water supply source. The area was remediated at the time, but the soil and water remain highly polluted. Traces of PCBs have been found in food (eggs, cow's milk, walnuts), and the Krupa is still the most PCB-polluted river in the world.
Spain and Portugal
Several cetacean species have very high mean blubber PCB concentrations likely to cause population declines and suppress population recovery. Striped dolphins, bottlenose dolphins and orcas were found to have mean levels that markedly exceeded all known marine mammal PCB toxicity thresholds. The western Mediterranean Sea and the south-west Iberian Peninsula were identified as "hotspots".
United Kingdom
Monsanto manufactured PCBs at its chemical plant in Newport, South Wales, until the mid- to late-1970s. During this period, waste matter, including PCBs, from the Newport site was dumped at a disused quarry near Groes-faen, west of Cardiff, and Penhros landfill site from where it continues to be released in waste water discharges.
United States
Monsanto was the only company that manufactured PCBs in the US; production was entirely halted in 1977 (Kimbrough, 1987, 1995). On November 25, 2020, U.S. District Judge Fernando M. Olguin rejected a proposed $650 million settlement from Bayer, the company which acquired Monsanto in 2018, and allowed Monsanto-related lawsuits involving PCBs to proceed.
Alabama
PCBs originating from Monsanto Chemical Company in Anniston, Alabama, were dumped into Snow Creek, from which they spread to Choccolocco Creek and then Logan Martin Lake. In the early 2000s, class action lawsuits over the PCB pollution were settled by local landowners, including those on Logan Martin Lake and Lay Reservoir (downstream on the Coosa River). Donald Stewart, a former Senator from Alabama, first learned of the concerns of hundreds of west Anniston residents after representing a church that Monsanto had approached about selling its property. Stewart went on to be the lead attorney in the first and the majority of cases against Monsanto, focusing on residents in the immediate area known to be most polluted. Other attorneys, among them the late Johnnie Cochran, later joined in to file suits for those outside the main area around the plant.
In 2007, the highest pollution levels remained concentrated in Snow and Choccolocco Creeks. Concentrations in fish have declined and continue to decline over time; sediment disturbance, however, can resuspend the PCBs from the sediment back into the water column and food web.
California
San Francisco Bay has been contaminated by PCBs, "a legacy of PCBs spread widely across the land surface of the watershed, mixed deep into the sediment of the Bay, and contaminating the Bay food web". Levels of PCBs in fish and shellfish exceed thresholds for safe consumption, and signs around the Bay warn anglers which species to avoid. State water quality regulators set a Total Maximum Daily Load for PCBs that requires city and county governments around the Bay to implement control measures limiting PCBs in urban runoff. An important part of the second, revised version of this permit was the requirement for municipalities to install green infrastructure with the goal of reducing pollutant levels in stormwater.
Connecticut
In New Haven, the decommissioned English Station has a high concentration of PCB contamination due to the chemicals used in the running of the plant. This, along with asbestos contamination, has made cleaning and demolishing the abandoned site extremely difficult. The PCB contamination has spread to the soil, and to the river, where locals will sometimes fish unaware of the danger.
Great Lakes
In 1976, environmentalists found PCBs in the sludge at Waukegan Harbor, at the southwest end of Lake Michigan, and traced the source to the Outboard Marine Corporation, which was producing boat motors next to the harbor. By 1982, the Outboard Marine Corporation was ordered by a court to release quantitative data on its PCB waste. The data showed that the company had released 100,000 tons of PCBs into the environment since 1954, and that the sludge contained PCBs in concentrations as high as 50%.
In 1989, during construction near the Zilwaukee bridge, workers uncovered an uncharted landfill containing PCB-contaminated waste which cost $100,000 to clean up.
Much of the Great Lakes area was still heavily polluted with PCBs in 1988, despite extensive remediation work.
Indiana
From the late 1950s through 1977, Westinghouse Electric used PCBs in the manufacture of capacitors in its Bloomington, Indiana, plant. Reject capacitors were hauled and dumped in area salvage yards and landfills, including Bennett's Dump, Neal's Landfill and Lemon Lane Landfill. Workers also dumped PCB oil down factory drains, which contaminated the city sewage treatment plant. The City of Bloomington gave away the sludge to area farmers and gardeners, creating anywhere from 200 to 2,000 sites, which remain unaddressed.
Over 1,000 tons of PCBs were estimated to have been dumped in Monroe and Owen counties. Although federal and state authorities have been working on the sites' environmental remediation, many areas remain contaminated. Concerns have been raised regarding the removal of PCBs from the karst limestone topography, and regarding the possible disposal options. To date, the Westinghouse Bloomington PCB Superfund site case does not have a Remedial Investigation/Feasibility Study (RI/FS) and Record of Decision (ROD), although Westinghouse signed a US Department of Justice Consent Decree in 1985. The 1985 consent decree required Westinghouse to construct an incinerator that would incinerate PCB-contaminated materials. Because of public opposition to the incinerator, however, the State of Indiana passed a number of laws that delayed and blocked its construction. The parties to the consent decree began to explore alternative remedies in 1994 for six of the main PCB contaminated sites in the consent decree. Hundreds of sites remain unaddressed as of 2014. Monroe County will never be PCB-free, as noted in a 2014 Indiana University program about the local contamination.
On February 15, 2008, Monroe County approved a plan to clean up the three remaining contaminated sites in the City of Bloomington, at a cost of $9.6 million to CBS Corp., the successor of Westinghouse. In 1999, Viacom bought CBS, so it is now the responsible party for the PCB sites.
Massachusetts
Pittsfield, in western Massachusetts, was home to the General Electric (GE) transformer, capacitor, and electrical generating equipment divisions. The electrical generating division built and repaired equipment that was used to power the electrical utility grid throughout the nation. PCB-contaminated oil routinely migrated from GE's industrial plant located in the very center of the city to the surrounding groundwater, nearby Silver Lake, and to the Housatonic River, which flows through Massachusetts, Connecticut, and down to Long Island Sound. PCB-containing solid material was widely used as fill, including oxbows of the Housatonic River. Fish and waterfowl which live in and around the river contain significant levels of PCBs and are not safe to eat. EPA designated the Pittsfield plant and several miles of the river as a Superfund site in 1997, and ordered GE to remediate the site. EPA and GE began a cleanup of the area in 1999.
New Bedford Harbor, which is a listed Superfund site, contained some of the highest sediment concentrations of PCBs in the marine environment. Cleanup of the area began in 1994 and is mostly complete as of 2020.
Investigations into historic waste dumping in the Bliss Corner neighborhood have revealed the existence of PCBs, among other hazardous materials, buried in soil and waste material.
Missouri
In 1982, Martha C. Rose Chemical Inc. began processing and disposing of materials contaminated with PCBs in Holden, Missouri, a small rural community east of Kansas City. From 1982 until 1986, nearly 750 companies, including General Motors Corp., Commonwealth Edison, Illinois Power Co. and West Texas Utilities, sent millions of pounds of PCB-contaminated materials to Holden for disposal. Instead, according to prosecutors, the company began storing the contaminated materials while falsifying its reports to the EPA to show they had been removed. After investigators learned of the deception, Rose Chemical was closed and filed for bankruptcy. The site had become the nation's largest waste site for PCBs. In the four years the company was operational, the EPA inspected it four times and assessed $206,000 in fines but managed to collect only $50,000.
After the plant closed the state environmental agency found PCB contamination in streams near the plant and in the city's sewage treatment sludge. A 100,000 square-foot warehouse and unknown amounts of contaminated soil and water around the site had to be cleaned up. Most of the surface debris, including close to 13 million pounds of contaminated equipment, carcasses and tanks of contaminated oil, had to be removed. Walter C. Carolan, owner of Rose Chemical, and five others pleaded guilty in 1989 to committing fraud or falsifying documents. Carolan and two other executives served sentences of less than 18 months; the others received fines and were placed on probation. Cleanup costs at the site are estimated at $35 million.
Montana
Two launch facilities at Malmstrom Air Force Base showed PCB levels higher than the thresholds recommended by the Environmental Protection Agency when extensive sampling began of active U.S. intercontinental ballistic missile bases to address specific cancer concerns in 2023.
New York
Pollution of the Hudson River is largely due to dumping of PCBs by General Electric from 1947 to 1977. GE dumped an estimated 1.3 million pounds of PCBs into the Hudson River during these years. The PCBs came from the company's two capacitor manufacturing plants at Hudson Falls and Fort Edward, New York. This pollution caused a range of harmful effects to wildlife and people who eat fish from the river or drink the water. In 1984, EPA declared a 200-mile (320 km) stretch of the river, from Hudson Falls to New York City, to be a Superfund site requiring cleanup. Extensive remediation actions on the river began in the 1970s with the implementation of wastewater discharge permits and consequent control or reduction of wastewater discharges, and sediment removal operations, which have continued into the 21st century.
Love Canal is a neighborhood in Niagara Falls, New York, that was heavily contaminated with toxic waste including PCBs. Eighteen Mile Creek in Lockport, New York, is an EPA Superfund site for PCBs contamination.
PCB pollution at the State Office Building in Binghamton was responsible for what is now considered to be the first indoor environmental disaster in the United States. In 1981, a transformer explosion in the basement spewed PCBs throughout the entire 18-story building. The contamination was so severe that cleanup efforts kept the building closed for 13 years.
North Carolina
One of the largest deliberate PCB spills in American history occurred in the summer of 1978, when 31,000 gallons (117 m³) of PCB-contaminated oil were illegally sprayed by the Ward PCB Transformer Company along the roadside shoulders of North Carolina highways in 14 counties and at the Fort Liberty Army Base. The crime, known as "the midnight dumpings", occurred over nearly two weeks, as drivers of a black-painted tanker truck drove down one side of rural Piedmont highways spraying PCB-laden waste and then up the other side the following night.
Under Governor James B. Hunt, Jr., state officials then erected large, yellow warning signs along the contaminated highways that read: "CAUTION: PCB Chemical Spills Along Highway Shoulders". The illegal dumping is believed to have been motivated by the passing of the Toxic Substances Control Act (TSCA), which became effective on August 2, 1978, and increased the expense of chemical waste disposal.
Within a couple of weeks of the crime, Robert Burns and his sons, Timothy and Randall, were arrested for dumping the PCBs along the roadsides. Burns was a business partner of Robert "Buck" Ward Jr., of the Ward PCB Transformer Company, in Raleigh. Burns and sons pleaded guilty to state and Federal criminal charges; Burns received a three to five-year prison sentence. Ward was acquitted of state charges in the dumping, but was sentenced to 18 months prison time for violation of TSCA.
Cleanup and disposal of the roadside PCBs generated controversy: the Governor's plan to pick up the roadside PCBs and bury them in a landfill in rural Warren County was strongly opposed by local residents in 1982.
In October 2013, at the request of the South Carolina Department of Health and Environmental Control (SCDHEC), the City of Charlotte, North Carolina, decided to stop applying sewage sludge to land while authorities investigated the source of PCB contamination.
In February 2014, the City of Charlotte admitted PCBs have entered their sewage treatment centers as well.
After SCDHEC issued emergency regulations in 2013, the City of Charlotte discovered high levels of PCBs entering its sewage wastewater treatment plants, where sewage is converted to sewage sludge. The city at first denied it had a problem, then admitted an "event" had occurred in February 2014, and in April that the problem had begun much earlier. The city stated that its first test with a newly changed test method revealed very high PCB levels in its sewage sludge farm-field fertilizer. Because of the widespread use of the contaminated sludge, SCDHEC subsequently issued PCB fish advisories for nearly all streams and rivers bordering farm fields that had received city waste.
Ohio
The Clyde cancer cluster (also known as the Sandusky County cancer cluster) is a childhood cancer cluster that has affected many families in Clyde, Ohio, and surrounding areas. PCBs were found in soil in a public park within the area of the cancer cluster.
In Akron, Ohio, soil was contaminated and noxious PCB-laden fumes had been put into the air by an electrical transformer deconstruction operation from the 1930s to the 1960s.
South Carolina
From 1955 until 1977, the Sangamo Weston plant in Pickens, South Carolina, used PCBs to manufacture capacitors and dumped 400,000 pounds of PCB-contaminated wastewater into Twelve Mile Creek. In 1990, the EPA declared the site of the capacitor plant, its landfills, and the polluted watershed stretching downstream nearly to Lake Hartwell a Superfund site. Two dams on Twelve Mile Creek are to be removed; on February 22, 2011, dismantling of the first began. Some contaminated sediment is being removed from the site and hauled away, while other sediment is pumped into a series of settling ponds.
In 2013, the state environmental regulators issued a rare emergency order, banning all sewage sludge from being land applied or deposited on landfills, as it contained very high levels of PCBs. The problem had not been discovered until thousands of acres of farm land in the state had been contaminated by the hazardous sludge. A criminal investigation to determine the perpetrator of this crime was launched.
Washington
As of 2015, several bodies of water in the state of Washington were contaminated with PCBs, including the Columbia River, the Duwamish River, Green Lake, Lake Washington, the Okanogan River, Puget Sound, the Spokane River, the Walla Walla River, the Wenatchee River, and the Yakima River. A study by Washington State published in 2011 found that the two largest sources of PCB flow into the Spokane River were City of Spokane stormwater (44%) and municipal and industrial discharges (20%).
PCBs entered the environment through paint, hydraulic fluids, sealants, inks and have been found in river sediment and wildlife. Spokane utilities will spend $300 million to prevent PCBs from entering the river in anticipation of a 2017 federal deadline to do so. In August 2015 Spokane joined other U.S. cities like San Diego and San Jose, California, and Westport, Massachusetts, in seeking damages from Monsanto.
Wisconsin
From 1954 until 1971, the Fox River in Appleton, Wisconsin, received PCB discharges from Appleton Paper/NCR, P. H. Glatfelter, Georgia-Pacific, and other local paper manufacturing facilities. The Wisconsin DNR estimates that, after wastewater treatment, PCB discharges to the Fox River due to production losses ranged from 81,000 kg to 138,000 kg (178,572 lb to 304,235 lb). The production of carbonless copy paper and its byproducts led to the discharge into the river. Fox River cleanup is ongoing.
Pacific Ocean
Polychlorinated biphenyls have been discovered in organisms living in the Mariana Trench in the Pacific Ocean. Levels were as high as 1,900 nanograms per gram of amphipod tissue in the organisms analyzed.
Regulation
Japan
In 1972 the Japanese government banned the production, use, and import of PCBs.
Sweden
In 1973, the use of PCBs in "open" or "dissipative" sources (such as plasticisers in paints and cements, casting agents, fire retardant fabric treatments and heat stabilizing additives for PVC electrical insulation, adhesives, paints and waterproofing, railroad ties) was banned in Sweden.
United Kingdom
In 1981, the UK banned closed uses of PCBs in new equipment, and nearly all UK PCB synthesis ceased; closed uses in existing equipment containing in excess of 5 litres of PCBs were not stopped until December 2000.
United States
In 1976, concern over the toxicity and persistence (chemical stability) of PCBs in the environment led the United States Congress to ban their domestic production, effective January 1, 1978, pursuant to the Toxic Substances Control Act. To implement the law, EPA banned new manufacturing of PCBs, but issued regulations that allowed for their continued use in electrical equipment for economic reasons. EPA began issuing regulations for PCB usage and disposal in 1979. The agency has issued guidance publications for safe removal and disposal of PCBs from existing equipment.
EPA defined the "maximum contaminant level goal" for public water systems as zero, but because of the limitations of water treatment technologies, a level of 0.5 parts per billion is the actual regulated level (maximum contaminant level).
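As an illustration of how such a limit might be applied, the minimal sketch below flags water-sample results against the 0.5 ppb MCL. The site names and concentrations are hypothetical, invented for the example.

```python
# Flag drinking-water samples against the US EPA maximum contaminant
# level (MCL) for PCBs. The health goal (MCLG) is zero, but the
# enforceable limit is 0.5 ppb. Sample values below are hypothetical.

MCL_PPB = 0.5

samples_ppb = {"well_A": 0.02, "well_B": 0.61, "well_C": 0.0}

for site, conc in samples_ppb.items():
    status = "EXCEEDS MCL" if conc > MCL_PPB else "within limit"
    print(f"{site}: {conc} ppb -> {status}")
```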
Methods of destruction
Physical
PCBs were technically attractive because of their inertness, which includes their resistance to combustion. Nonetheless, they can be effectively destroyed by incineration at 1000 °C. When combusted at lower temperatures, they convert in part to more hazardous unintentional persistent organic pollutants, including polychlorinated dibenzofurans and dibenzo-p-dioxins. When conducted properly, the combustion products are water, carbon dioxide, and hydrogen chloride. In some cases, the PCBs are combusted as a solution in kerosene. PCBs have also been destroyed by pyrolysis in the presence of alkali metal carbonates.
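A rough stoichiometric sketch of that ideal outcome is given below for a generic congener C12H(10-n)Cln. It assumes complete combustion to exactly the products named above (CO2, H2O, HCl), which is only possible for congeners with at most five chlorines, since more heavily chlorinated molecules lack enough hydrogen to bind all chlorine as HCl.

```python
# Idealized complete combustion of a PCB congener, C12H(10-n)Cl_n:
#   C12H(10-n)Cl_n + (29 - n)/2 O2 -> 12 CO2 + (5 - n) H2O + n HCl
# Element balance: C 12 = 12; H (10-n) = 2(5-n) + n; Cl n = n;
# O (29-n) = 24 + (5-n). Valid only for n <= 5.

def combustion_products(n_cl: int) -> dict:
    """Moles of O2 consumed and products formed per mole of PCB."""
    if not 0 <= n_cl <= 5:
        raise ValueError("sketch needs n <= 5: not enough H for HCl only")
    return {
        "O2_required": (29 - n_cl) / 2,
        "CO2": 12,
        "H2O": 5 - n_cl,
        "HCl": n_cl,
    }

print(combustion_products(3))  # e.g. a trichlorobiphenyl
# {'O2_required': 13.0, 'CO2': 12, 'H2O': 2, 'HCl': 3}
```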
Thermal desorption is highly effective at removing PCBs from soil.
Chemical
PCBs are fairly chemically unreactive, a property that made them attractive as inert materials. They resist oxidation.
Many chemical reagents are available to destroy or chemically reduce PCBs. Commonly, PCBs are degraded by basic mixtures of glycols, which displace some or all of the chloride. Also effective are reductants such as sodium or sodium naphthalenide. Vitamin B12 has also shown promise.
Microbial
The use of microorganisms to degrade PCBs at contaminated sites, relying on the co-metabolism of multiple microorganisms, is known as bioremediation of polychlorinated biphenyls. Some microorganisms degrade PCBs by reductively cleaving the C–Cl bonds. Microbial dechlorination tends to be slow compared with other methods. Enzymes extracted from microbes can show PCB-degrading activity. In 2005, Shewanella oneidensis was shown to biodegrade a high percentage of PCBs in soil samples. A low-voltage current can stimulate the microbial degradation of PCBs.
Fungal
There is research showing that some ligninolytic fungi can degrade PCBs.
Bioremediation
The remediation, or removal, of PCBs from estuarine and coastal river sediments is difficult because of the overlying water column and the potential for resuspension of contaminants during removal. The most common method of PCB extraction from sediments is to dredge an area and dispose of the sediments in a landfill. This method is troubling for a number of reasons: it risks resuspending the chemicals as the sediments are disturbed, and it can be very damaging to ecosystems.
A potentially cost-effective, low-risk remediation technique is bioremediation, the use of biota to remediate sediments. Phytoremediation, the use of plants to remediate soils, has been found effective for a broad range of contaminants, such as mercury, PCBs, and PAHs, in terrestrial soils. A promising study conducted in New Bedford Harbor found that Ulva rigida, a type of seaweed common throughout the world, is effective at removing PCBs from sediments. During a typical bloom in New Bedford Harbor, U. rigida forms a thick mat that lies on top of, and in contact with, the sediment. This allows U. rigida to take up large amounts of PCBs from the sediment, with concentrations in the seaweed reaching 1,580 μg/kg within 24 hours of the bloom. Live tissue tended to take up higher concentrations than dead tissue, though dead tissue still accumulated substantial amounts.
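The study reports point measurements; as an illustration of how such uptake is often modeled, the sketch below evaluates a simple first-order accumulation curve. The equilibrium concentration and rate constant are hypothetical values chosen so the 24-hour figure lands near the reported 1,580 μg/kg; this is not the New Bedford study's model.

```python
# First-order uptake model, C(t) = C_eq * (1 - exp(-k*t)), a common
# way to describe contaminant accumulation in biota. C_EQ and K are
# hypothetical, tuned so C(24 h) is near the reported 1580 ug/kg.

import math

C_EQ = 1800.0   # hypothetical equilibrium concentration, ug/kg
K = 0.09        # hypothetical uptake rate constant, 1/h

def concentration(t_hours: float) -> float:
    return C_EQ * (1.0 - math.exp(-K * t_hours))

for t in (6, 12, 24, 48):
    print(f"t = {t:2d} h: {concentration(t):7.1f} ug/kg")
# t = 24 h gives ~1592 ug/kg, close to the measured value.
```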
Homologs
For a complete list of the 209 PCB congeners, see PCB congener list. Note that biphenyl, while not technically a PCB congener because of its lack of chlorine substituents, is still typically included in the literature.
See also
Bay mud
Organochlorine compound
Polybrominated biphenyl
Zodiac, a novel by Neal Stephenson which involves PCBs and their impact on the environment.
References
External links
ATSDR Toxicological Profile U.S. Department of Health and Human Services
IARC PCB Monograph
PCBs – US EPA
National Toxicology Program technical reports searched for "PCB"
Polychlorinated Biphenyls: Human Health Aspects by the WHO
Current Intelligence Bulletin 7: Polychlorinated Biphenyls (PCBs)—NIOSH/CDC (1975)
It's Your Health – PCBs (Health Canada)
Chloroarenes
Flame retardants
Endocrine disruptors
Hazardous air pollutants
IARC Group 2A carcinogens
Soil contamination
Synthetic materials
Electric transformers
Suspected testicular toxicants
Suspected fetotoxicants
Suspected female reproductive toxicants
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
Persistent organic pollutants under the Stockholm Convention
Monsanto
Biphenyls | Polychlorinated biphenyl | [
"Chemistry",
"Environmental_science"
] | 12,717 | [
"Persistent organic pollutants under the Stockholm Convention",
"Endocrine disruptors",
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Synthetic materials",
"Environmental chemistry",
"Soil contamination",
"Chemical synthesis"
] |
48,232 | https://en.wikipedia.org/wiki/Phenyl%20group | In organic chemistry, the phenyl group, or phenyl ring, is a cyclic group of atoms with the formula C6H5, and is often represented by the symbol Ph (archaically φ) or Ø. The phenyl group is closely related to benzene and can be viewed as a benzene ring minus one hydrogen, which may be replaced by some other element or group so that the ring serves as a functional group. A phenyl group has six carbon atoms bonded together in a hexagonal planar ring, five of which are bonded to individual hydrogen atoms, with the remaining carbon bonded to a substituent. Phenyl groups are commonplace in organic chemistry. Although often depicted with alternating double and single bonds, the phenyl group is chemically aromatic and has equal bond lengths between carbon atoms in the ring.
Nomenclature
Usually, a "phenyl group" is synonymous with and is represented by the symbol Ph (archaically, Φ), or Ø. Benzene is sometimes denoted as PhH. Phenyl groups are generally attached to other atoms or groups. For example, triphenylmethane () has three phenyl groups attached to the same carbon center. Many or even most phenyl compounds are not described with the term "phenyl". For example, the chloro derivative is normally called chlorobenzene, although it could be called phenyl chloride. In special (and rare) cases, isolated phenyl groups are detected: the phenyl anion (), the phenyl cation (), and the phenyl radical ().
Although Ph and phenyl uniquely denote C6H5, substituted derivatives also are described using the phenyl terminology. For example, O2NC6H4 is nitrophenyl, and C6F5 is pentafluorophenyl. Monosubstituted phenyl groups (that is, disubstituted benzenes) are associated with electrophilic aromatic substitution reactions, and the products follow the arene substitution pattern. So a given substituted phenyl compound has three isomers: ortho (1,2-disubstitution), meta (1,3-disubstitution), and para (1,4-disubstitution). A disubstituted phenyl compound (trisubstituted benzene) may be, for example, 1,3,5-trisubstituted or 1,2,3-trisubstituted. Higher degrees of substitution, of which the pentafluorophenyl group is an example, exist and are named according to IUPAC nomenclature; the isomer counts follow from the symmetry of the ring, as the sketch below illustrates.
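The isomer counts quoted above can be checked by brute force: treat the six ring positions as vertices of a hexagon and count substitution patterns up to the twelve rotations and reflections of the ring. A minimal sketch, identical substituents assumed:

```python
# Count distinct substitution patterns on a benzene ring. Positions
# are vertices 0-5 of a hexagon; two patterns are the same isomer if
# related by one of the 12 symmetries of the dihedral group D6
# (6 rotations, each optionally composed with a reflection).

from itertools import combinations

def orbit_representative(pattern):
    """Lexicographically smallest image of a set of substituted
    positions under the 12 symmetries of the hexagon."""
    images = []
    for r in range(6):
        rotated = [(p + r) % 6 for p in pattern]
        images.append(tuple(sorted(rotated)))                        # rotation
        images.append(tuple(sorted((6 - p) % 6 for p in rotated)))   # + reflection
    return min(images)

for k in (2, 3):
    reps = {orbit_representative(c) for c in combinations(range(6), k)}
    print(f"{k} identical substituents: {len(reps)} distinct isomers")

# 2 substituents -> 3 isomers (ortho, meta, para);
# 3 substituents -> 3 isomers (1,2,3-, 1,2,4-, 1,3,5-).
```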
Etymology
Phenyl is derived from the French phényle, which in turn derives from the Greek phainein ("to shine"), as the first phenyl compounds named were byproducts of making and refining various gases used for lighting. According to McMurry, the word derives from the Greek pheno ("I bear light"), "commemorating the discovery of benzene by Michael Faraday in 1825 from the oily residue left by the illuminating gas used in London street lamps."
Structure, bonding, and characterization
Phenyl compounds are derived from benzene (), at least conceptually and often in terms of their production. In terms of its electronic properties, the phenyl group is related to a vinyl group. It is generally considered an inductively withdrawing group (-I), because of the higher electronegativity of sp2 carbon atoms, and a resonance donating group (+M), due to the ability of its π system to donate electron density when conjugation is possible. The phenyl group is hydrophobic. Phenyl groups tend to resist oxidation and reduction. Phenyl groups (like all aromatic compounds) have enhanced stability in comparison to equivalent bonding in aliphatic (non-aromatic) groups. This increased stability is due to the unique properties of aromatic molecular orbitals.
The bond lengths between carbon atoms in a phenyl group are approximately 1.4 Å.
In 1H-NMR spectroscopy, protons of a phenyl group typically have chemical shifts around 7.27 ppm. These chemical shifts are influenced by aromatic ring current and may change depending on substituents.
Preparation, occurrence, and applications
Phenyl groups are usually introduced using reagents that behave as sources of the phenyl anion or the phenyl cation. Representative reagents include phenyllithium () and phenylmagnesium bromide (). Electrophiles are attacked by benzene to give phenyl derivatives:
C6H6 + E+ → C6H5E + H+
where E+ (the "electrophile") is, for example, Cl+, NO2+, or an acylium ion RCO+. These reactions are called electrophilic aromatic substitutions.
Phenyl groups are found in many organic compounds, both natural and synthetic (see figure). Most common among natural products is the amino acid phenylalanine, which contains a phenyl group. A major product of the petrochemical industry is "BTX" consisting of benzene, toluene, and xylene - all of which are building blocks for phenyl compounds. The polymer polystyrene is derived from a phenyl-containing monomer and owes its properties to the rigidity and hydrophobicity of the phenyl groups. Many drugs as well as many pollutants contain phenyl rings.
One of the simplest phenyl-containing compounds is phenol, C6H5OH. It is often said that resonance stabilization of phenol's conjugate base makes it a stronger acid than aliphatic alcohols such as ethanol (pKa = 10 vs. 16–18). However, a significant contribution is the greater electronegativity of the sp2 alpha carbon in phenol compared to the sp3 alpha carbon in aliphatic alcohols.
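A small sketch of that comparison: the Henderson–Hasselbalch relation gives the fraction of molecules deprotonated at a given pH, and plugging in the approximate pKa values quoted above shows roughly a million-fold difference at neutral pH. Illustrative only; the pKa values are the rounded ones in the text.

```python
# Fraction of an acid deprotonated at a given pH, from the
# Henderson-Hasselbalch relation: f = 1 / (1 + 10**(pKa - pH)).

def fraction_deprotonated(pka: float, ph: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for name, pka in (("phenol", 10.0), ("ethanol", 16.0)):
    f = fraction_deprotonated(pka, ph=7.0)
    print(f"{name} (pKa {pka}) at pH 7: {f:.2e} deprotonated")

# phenol  ~1e-3 of molecules ionized at pH 7;
# ethanol ~1e-9, about a million-fold difference.
```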
References
External links
Aryl groups | Phenyl group | [
"Chemistry"
] | 1,206 | [
"Substituents",
"Aryl groups",
"Functional groups"
] |
48,239 | https://en.wikipedia.org/wiki/Celestial%20sphere | In astronomy and navigation, the celestial sphere is an abstract sphere that has an arbitrarily large radius and is concentric to Earth. All objects in the sky can be conceived as being projected upon the inner surface of the celestial sphere, which may be centered on Earth or the observer. If centered on the observer, half of the sphere would resemble a hemispherical screen over the observing location.
The celestial sphere is a conceptual tool used in spherical astronomy to specify the position of an object in the sky without consideration of its linear distance from the observer. The celestial equator divides the celestial sphere into northern and southern hemispheres.
Description
Because astronomical objects are at such remote distances, casual observation of the sky offers no information on their actual distances. All celestial objects seem equally far away, as if fixed onto the inside of a sphere with a large but unknown radius, which appears to rotate westward overhead; meanwhile, Earth underfoot seems to remain still. For purposes of spherical astronomy, which is concerned only with the directions to celestial objects, it makes no difference if this is actually the case or if it is Earth that is rotating while the celestial sphere is stationary.
The celestial sphere can be considered to be infinite in radius. This means any point within it, including that occupied by the observer, can be considered the center. It also means that all parallel lines, be they millimetres apart or across the Solar System from each other, will seem to intersect the sphere at a single point, analogous to the vanishing point of graphical perspective. All parallel planes will seem to intersect the sphere in a coincident great circle (a "vanishing circle").
Conversely, observers looking toward the same point on an infinite-radius celestial sphere will be looking along parallel lines, and observers looking toward the same great circle, along parallel planes. On an infinite-radius celestial sphere, all observers see the same things in the same direction.
For some objects, this is over-simplified. Objects which are relatively near to the observer (for instance, the Moon) will seem to change position against the distant celestial sphere if the observer moves far enough, say, from one side of planet Earth to the other. This effect, known as parallax, can be represented as a small offset from a mean position. The celestial sphere can be considered to be centered at the Earth's center, the Sun's center, or any other convenient location, and offsets from positions referred to these centers can be calculated.
In this way, astronomers can predict geocentric or heliocentric positions of objects on the celestial sphere, without the need to calculate the individual geometry of any particular observer, and the utility of the celestial sphere is maintained. Individual observers can work out their own small offsets from the mean positions, if necessary. In many cases in astronomy, the offsets are insignificant.
Determining location of objects
The celestial sphere can thus be thought of as a kind of astronomical shorthand, and is applied very frequently by astronomers. For instance, the Astronomical Almanac for 2010 lists the apparent geocentric position of the Moon on January 1 at 00:00:00.00 Terrestrial Time, in equatorial coordinates, as right ascension 6h 57m 48.86s, declination +23° 30' 05.5". Implied in this position is that it is as projected onto the celestial sphere; any observer at any location looking in that direction would see the "geocentric Moon" in the same place against the stars. For many rough uses (e.g. calculating an approximate phase of the Moon), this position, as seen from the Earth's center, is adequate.
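A short sketch of the underlying arithmetic: converting the sexagesimal right ascension and declination quoted above to decimal degrees, and then to a unit direction vector, which is all a position "on the celestial sphere" really encodes. These are standard conversions; only the Almanac numbers quoted above are used.

```python
# Convert the quoted geocentric position of the Moon from sexagesimal
# right ascension / declination to decimal degrees and to a unit
# vector on the celestial sphere.

import math

def ra_to_deg(h, m, s):
    return 15.0 * (h + m / 60.0 + s / 3600.0)   # 24 h = 360 deg

def dec_to_deg(d, m, s, sign=+1):
    return sign * (d + m / 60.0 + s / 3600.0)

ra = ra_to_deg(6, 57, 48.86)     # ~104.4536 deg
dec = dec_to_deg(23, 30, 5.5)    # ~+23.5015 deg

# Direction cosines of the corresponding point on the sphere
ra_r, dec_r = math.radians(ra), math.radians(dec)
x = math.cos(dec_r) * math.cos(ra_r)
y = math.cos(dec_r) * math.sin(ra_r)
z = math.sin(dec_r)
print(f"RA = {ra:.4f} deg, Dec = {dec:+.4f} deg")
print(f"unit vector: ({x:+.4f}, {y:+.4f}, {z:+.4f})")
```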
For applications requiring precision (e.g. calculating the shadow path of an eclipse), the Almanac gives formulae and methods for calculating the topocentric coordinates, that is, as seen from a particular place on the Earth's surface, based on the geocentric position. This greatly abbreviates the amount of detail necessary in such almanacs, as each observer can handle their own specific circumstances.
Greek history on celestial spheres
Celestial spheres (or celestial orbs) were initially envisioned as perfect and divine entities by Greek astronomers such as Aristotle, who composed a set of principles, called Aristotelian physics, outlining the natural order and structure of the world. Like other Greek astronomers, Aristotle thought of the "...celestial sphere as the frame of reference for their geometric theories of the motions of the heavenly bodies". Adopting the theory of Eudoxus of Cnidus, Aristotle described celestial bodies within the celestial sphere as pure and perfect, composed of quintessence (the fifth element, which he held to be divine and incorruptible). He deemed the Sun, Moon, planets, and fixed stars to be carried on perfectly concentric spheres in a superlunary region above the sublunary sphere, and asserted that these superlunary bodies are perfect and cannot be corrupted by any of the classical elements: fire, water, air, and earth. In Aristotle's geocentric model, corruptible elements were confined to the sublunary region, while the superlunary region was incorruptible. Aristotle held that celestial orbs must exhibit celestial motion, a perfect circular motion continuing for eternity. He also argued that behavior follows strictly from the principle of natural place: the quintessential element moves freely in the heavens, while the other elements (fire, air, water, and earth) are corruptible and subject to change and imperfection. Aristotle's key concepts thus rest on the natures of the five elements distinguishing the Earth from the Heavens, building on Eudoxus's model of separate spheres.
The models of Aristotle and Eudoxus (approximately 395 BC to 337 BC) differed in several respects while sharing similar properties. The two claimed different counts of spheres in the heavens: according to Eudoxus, there were only 27 spheres, while Aristotle's model has 55. Eudoxus attempted to construct his model mathematically in a treatise known as On Speeds and asserted that the shape of the hippopede, or lemniscate, was associated with planetary retrogression. Aristotle emphasized that the speed of the celestial orbs is unchanging, like the heavens, while Eudoxus emphasized that the orbs take a perfect geometrical shape. Eudoxus's spheres would produce undesirable motions in the region of the lower planets, so Aristotle introduced unrolling spheres between each set of active spheres to counteract the motions of the outer set; otherwise the outer motions would be transferred to the planets below. Aristotle would later observe "...the motions of the planets by using the combinations of nested spheres and circular motions in creative ways, but further observations kept undoing their work".
Aside from Aristotle and Eudoxus, Empedocles offered an explanation that the motion of the heavens, circling about the Earth at divine (relatively high) speed, holds the Earth in a stationary position, the circular motion preventing its natural downward movement. Aristotle criticized Empedocles's model, arguing that all heavy objects go toward the Earth rather than the whirl itself coming to Earth; he ridiculed the claim as extremely absurd. Anything that defied the motion of natural place and the unchanging heavens (including the celestial spheres) was immediately criticized by Aristotle.
Celestial coordinate systems
These concepts are important for understanding celestial coordinate systems, frameworks for measuring the positions of objects in the sky. Certain reference lines and planes on Earth, when projected onto the celestial sphere, form the bases of the reference systems. These include the Earth's equator, axis, and orbit. At their intersections with the celestial sphere, these form the celestial equator, the north and south celestial poles, and the ecliptic, respectively. As the celestial sphere is considered arbitrary or infinite in radius, all observers see the celestial equator, celestial poles, and ecliptic at the same place against the background stars.
From these bases, directions toward objects in the sky can be quantified by constructing celestial coordinate systems. Similar to geographic longitude and latitude, the equatorial coordinate system specifies positions relative to the celestial equator and celestial poles, using right ascension and declination. The ecliptic coordinate system specifies positions relative to the ecliptic (Earth's orbit), using ecliptic longitude and latitude. Besides the equatorial and ecliptic systems, some other celestial coordinate systems, like the galactic coordinate system, are more appropriate for particular purposes.
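As an illustration of how two such systems relate, the sketch below rotates equatorial coordinates into ecliptic ones using the standard transformation through the obliquity of the ecliptic. The J2000 mean obliquity is assumed, and effects such as nutation are ignored.

```python
# Rotate equatorial coordinates (right ascension, declination) into
# ecliptic coordinates (longitude, latitude) using the standard
# spherical-astronomy formulas and the J2000 mean obliquity.

import math

EPS = math.radians(23.4393)   # mean obliquity of the ecliptic, J2000

def equatorial_to_ecliptic(ra_deg: float, dec_deg: float):
    a, d = math.radians(ra_deg), math.radians(dec_deg)
    beta = math.asin(math.sin(d) * math.cos(EPS)
                     - math.cos(d) * math.sin(EPS) * math.sin(a))
    lam = math.atan2(math.sin(a) * math.cos(EPS)
                     + math.tan(d) * math.sin(EPS), math.cos(a))
    return math.degrees(lam) % 360.0, math.degrees(beta)

# Check: the north celestial pole (dec +90) lies at ecliptic
# latitude 90 - eps, longitude 90.
print(equatorial_to_ecliptic(0.0, 90.0))   # ~ (90.0, 66.56)
```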
History
The ancient Greeks assumed the literal truth of stars attached to a celestial sphere, revolving about the Earth in one day, and a fixed Earth.
The Eudoxan planetary model, on which the Aristotelian and Ptolemaic models were based, was the first geometric explanation for the "wandering" of the classical planets. The outermost of these "crystal spheres" was thought to carry the fixed stars. Eudoxus used 27 concentric spherical solids to answer Plato's challenge: "By the assumption of what uniform and orderly motions can the apparent motions of the planets be accounted for?"
Anaxagoras in the mid 5th century BC was the first known philosopher to suggest that the stars were "fiery stones" too far away for their heat to be felt. Similar ideas were expressed by Aristarchus of Samos. However, they did not enter mainstream European and Islamic astronomy of the late ancient and medieval period.
Copernican heliocentrism did away with the planetary spheres, but it did not necessarily preclude the existence of a sphere for the fixed stars. The first astronomer of the European Renaissance to suggest that the stars were distant suns was Giordano Bruno in his De l'infinito universo et mondi (1584). This idea was among the charges, albeit not in a prominent position, brought against him by the Inquisition.
The idea became mainstream in the later 17th century, especially following the publication of Conversations on the Plurality of Worlds by Bernard Le Bovier de Fontenelle (1686), and by the early 18th century it was the default working assumption in stellar astronomy.
Star globe
A celestial sphere can also refer to a physical model of the celestial sphere or celestial globe.
Such globes map the constellations on the outside of a sphere, resulting in a mirror image of the constellations as seen from Earth. The oldest surviving example of such an artifact is the globe of the Farnese Atlas sculpture, a 2nd-century copy of an older (Hellenistic period, ca. 120 BCE) work.
Bodies other than Earth
Observers on other worlds would, of course, see objects in that sky under much the same conditions – as if projected onto a dome. Coordinate systems based on the sky of that world could be constructed. These could be based on the equivalent "ecliptic", poles and equator, although the reasons for building a system that way are as much historic as technical.
See also
Horizontal coordinate system
Equatorial coordinate system
Hour angle
Pole star
Polar alignment
Equatorial mount
Equinox (celestial coordinates)
Spherical astronomy
Ecliptic
Zodiac
Orbital pole
Stellar parallax, a type of short-term motion of distant stars
Proper motion, a type of longer-term motion of distant stars
Firmament
Fixed stars, about the old concept of the celestial sphere as a material, physical entity.
Notes
References
Crowe, M. J. (2001). Theories of the world from antiquity to the Copernican revolution. Mineola, NY: Dover Publications.
External links
MEASURING THE SKY A Quick Guide to the Celestial Sphere – Jim Kaler, University of Illinois
General Astronomy/The Celestial Sphere – Wikibooks
Rotating Sky Explorer – University of Nebraska-Lincoln
Monthly skymaps – for every location on Earth
Sphere
Spherical astronomy
Spheres | Celestial sphere | [
"Astronomy",
"Mathematics"
] | 2,476 | [
"Astronomical coordinate systems",
"Coordinate systems"
] |